Datagrom | AI & Data Science Consulting


Top 3 Predictions for Enterprise Generative AI: What to Expect in 2025



OpenAI CEO Sam Altman predicts we may have AI superintelligence within a few thousand days in his September 2024 blog post The Intelligence Age. Meanwhile, Anthropic CEO Dario Amodei predicts we could have "powerful AI" as early as 2026 in his October 2024 essay Machines of Loving Grace. It's prediction season for generative AI startups, which must sell a big vision to land the big venture capital funding required to scale compute and their core product: intelligence.

It should also be prediction season for decision-makers in large enterprises who want to keep pace with the fastest-evolving technology on the planet: the bigger the ship, the longer it takes to turn. Where the VC funding goes drives the rate of change, and right now, it's going to generative AI. But what can we predict about this industry to inform our decision-making and keep pace?

It's risky business to predict prediction machines. I'll humbly accept the risk on your behalf, grounded in over a decade of experience as a data & AI strategist who has helped decision-makers at the world's largest enterprises convert their data gold into value with AI technologies. I'll give you a different perspective from Sam's and Dario's, one focused on helping decision-makers cut through the noise and shape their generative AI strategy for next year, illustrated with plenty of generative AI examples. Keep reading, and I'll help you skate to where the puck is going to be for generative AI in the enterprise in 2025.

Here’s what you need to know. In 2025 we can likely expect:

  1. Generative AI as common as search

  2. Agentic AI emerges with limited impact

  3. New AI governance frameworks

Generative AI as common as search

Embedded generative AI solutions along with search bars

Google CEO Sundar Pichai has claimed AI is more profound than fire. Looking back many years from now, that might even prove true. But if we look through the industry-driven hype, we can see that, fundamentally, the business impact of generative AI is that it is simply a new tool to operationalize data, that is, to convert data into business value.

As software converts more of the world into data, we build better shovels to help us organize and mine that ever-increasing pile of data (see Snowflake vs Databricks). We’ve seen this before. As software and operating systems evolved, the need for quick access to information within apps became apparent. This led to the gradual integration of search bars into various applications, aiming to enhance user experience by allowing users to find content quickly. This trend accelerated with the rise of the internet and mobile computing, making search bars a standard feature in most applications by the mid-to-late 2000s.

Search bar in Microsoft Word

Search has been an indispensable shovel to help us dig up our data. However, as data growth continued to explode, we reached an upper limit on the value we could extract with our search shovels. It became painful to sift through an ever-increasing number of relevant search results to synthesize the answers to our questions. We needed bigger shovels to synthesize that data for us and convert more of our data into value.

Fast forward to the 2020s, as we saw the rise of generative AI, and we learned that we could effectively embed the world’s data within a neural network. This allowed us to accelerate the time-to-value of our data because, for the first time, we had a tool that could synthesize all those search results and give us immediate answers to our specific questions.

In addition to better extracting insight from historical data, generative AI went a step further toward helping us realize value from our data by allowing us to synthesize the world’s data to create new data to achieve our objectives. We marveled as we watched generative AI create new content for us based on synthesizing all the training data provided to our neural networks. We have experienced Arthur C. Clarke’s third law firsthand — “Any sufficiently advanced technology is indistinguishable from magic.”

While generative AI's creation abilities appear to be magic, we shouldn't forget that, fundamentally, generative AI represents a better way to operationalize our data. Given the presence of search bars in most of the software applications we use today, along with competitive pressure, it's not outlandish to predict that nearly every one of these applications will very shortly also embed generative AI, since asking a model is often superior to searching your data within each application and then doing something valuable with it yourself.

Further driving this trend are consumer expectations. In our ChatGPT world, consumers have learned to expect a generative AI chatbot experience in every application that stores their data. For example, even my Oura ring health tracker now includes a generative AI feature called "Oura Advisor".

Generative AI in Oura Sleep Tracker

Enterprise software companies that do not start delivering generative AI solutions alongside their search bars risk becoming obsolete. Of course, there is a critical distinction: relative to search, generative AI requires far more computation. So, unlike search, we should expect software companies to pass this non-trivial cost along to customers who want the additional capability. We should also expect rough Moore's Law dynamics for any capability tied to compute, with capacity roughly doubling every two years while the consumer price of that capability trends downward.

A fair argument can also be made that, unlike Google Search, dedicated generative AI platforms like ChatGPT Enterprise and Claude for Enterprise let you connect your internal data stores and operationalize that data within their platforms rather than in the application where it originated, reducing the need to embed generative AI in every application. Dedicated generative AI platforms can indeed help us reason across multiple data sources. When it comes to content creation, however, there is no better place for generative AI than the creation tool you are most comfortable with, due in large part to the well-researched switching costs of refocusing attention from one place to the next. There is also the added burden of moving data back and forth between your creation tool of choice and the dedicated generative AI platform.

I'll share a personal generative AI example to illustrate. Armed with new generative AI superpowers, I developed a full-stack, cloud-native AI news application that aggregates and summarizes the top daily stories to help me and my readers stay current. Significantly, I created the application with English-language prompts to large language models like OpenAI's ChatGPT, which converted my intent into Python, JavaScript, HTML, and CSS code. I started my journey by copying and pasting code back and forth between ChatGPT and VSCode. While this approach was initially practical, it quickly became cumbersome as my codebase grew.

Then, I discovered the Cursor AI Code Editor (a flavor of VSCode), which brought the power of generative AI directly to my coding environment, including access to OpenAI’s latest models. Once I switched to Cursor, my productivity skyrocketed. I no longer needed to copy and paste code between applications. I could type my English prompts within Cursor and apply the code generated by the large language models (LLMs) to my codebase with a button click. I was also able to stay in the flow of my work for much longer, and my progress accelerated.

The concepts in my anecdote likely extend to other creative endeavors beyond software development. If, for example, I were most comfortable creating documents in Microsoft Word, I would probably get the most help from generative AI embedded within Word itself. The software industry has reached the same conclusion, and we are already seeing large-scale evidence of this.

Microsoft Copilots

Microsoft Copilots provide perhaps the most well-known evidence of this trend. For those who prefer creating content within the Microsoft ecosystem, Microsoft Copilots now deliver generative AI experiences in virtually every Microsoft 365 application. Similarly, Adobe has embedded generative AI features across popular applications like Acrobat Reader, Photoshop, and Illustrator. Google has taken a similar approach, embedding generative AI in every Google work tool.

Given the context I provided and these industry indicators, it’s rational to conclude that the trend of embedding generative AI into every application where you find a search bar will only continue and will likely accelerate. If you agree, then the next question for the practical enterprise decision-maker is, so what? How should the emergence of universally embedded generative AI solutions impact our AI strategy for the enterprise next year?

If we expect every application we use daily to have embedded generative AI capabilities, we should think carefully before building application-specific generative AI solutions ourselves. This trend also suggests that the best use cases for dedicated generative AI platforms like ChatGPT Enterprise and Claude for Enterprise involve working with data across applications and databases from different vendors. And we should perhaps focus our internal build efforts on solutions unique to our core business that no vendor is likely to develop in the near future.

Agentic AI emerges with limited impact

Agentic AI Solutions in 2024

While it sounds complicated, Agentic AI is a framework for orchestrating multiple autonomous AI agents that can collaborate to make decisions and take actions with minimal human intervention. Each agent is specialized for a specific task and accepts input from the user or another agent. Then, it uses tools and a generative AI model to complete its task and sends an output to either the user or another agent.

Agentic AI helps us overcome a common limitation when we ask a generative AI model to do multiple things in a single prompt. Generative AI models are like small children in this respect: the more we ask them to do at once, the less likely they are to do everything we ask. Conversely, if we ask for one simple task at a time, each task is far more likely to be completed successfully.

Agentic AI lets us break down complex tasks into simple subtasks, which are then assigned to specialized agents. The agents work together to complete the complex task, similar to how we might collaborate to achieve a common goal. Put another way, if we split up our single model prompt with multiple requirements into multiple model requests and daisy chain them together, we are far more likely to get the results we are looking for.
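As a minimal sketch of this daisy-chaining idea, the hypothetical example below splits one over-loaded request into two focused ones, feeding the first model's output into the second. It assumes the OpenAI Python SDK; the model name and prompts are illustrative placeholders rather than a recommendation.

```python
# Minimal sketch: split one over-loaded prompt into two chained, single-purpose requests.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one simple, single-purpose request to the model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any capable chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

article_text = "..."  # the source article

# Step 1: one simple task -- draft a summary.
draft = ask(f"Summarize this article in three sentences:\n{article_text}")

# Step 2: a second simple task -- refine the first output.
polished = ask(f"Rewrite this summary in a neutral, headline-ready style:\n{draft}")
print(polished)
```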

For example, if I ask an OpenAI model to generate an image for an article title in a specific style and without misspellings, the model is unlikely to produce an image that meets all of those requirements at once. I encountered this limitation while building my AI news app, when I submitted requests to OpenAI's DALL-E 3 model to create images for each article title. The generated images frequently included misspelled text. Before an off-the-shelf agentic AI solution was available to overcome this challenge, I learned I could reduce the number of misspelled images by breaking the task down into two model requests:

Agentic AI Example for Image Spellcheck

  1. A DALL-E 3 image request generates an image based on the article title, then the returned image is sent to the GPT-4V vision model

  2. A GPT-4V vision request visually inspects the DALL-E 3 image for misspellings. If misspellings are found, it triggers a retry of the DALL-E 3 image request; the loop exits and returns the final image only when no misspellings are detected (a simplified sketch of this loop follows below).
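Here is a simplified sketch of that generate-and-check loop, assuming the OpenAI Python SDK. The model names, prompts, and retry limit are illustrative, not the exact implementation behind my app.

```python
# Sketch of the image-generation / spell-check loop described above.
# Assumes the OpenAI Python SDK; model names, prompts, and retry limit are illustrative.
from openai import OpenAI

client = OpenAI()

def generate_checked_image(article_title: str, max_retries: int = 3) -> str:
    """Generate an image for the title, retrying until no misspellings are detected."""
    image_url = ""
    for _ in range(max_retries):
        # Agent 1: the image model generates a candidate image for the article title.
        image = client.images.generate(
            model="dall-e-3",
            prompt=f"Editorial illustration for an article titled: {article_title}",
            n=1,
            size="1024x1024",
        )
        image_url = image.data[0].url

        # Agent 2: a vision-capable model inspects the candidate for misspelled text.
        review = client.chat.completions.create(
            model="gpt-4o",  # placeholder for a vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Does any text in this image contain misspellings? Answer YES or NO."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }],
        )
        verdict = review.choices[0].message.content.strip().upper()
        if verdict.startswith("NO"):
            return image_url  # exit only when no misspellings are detected

    return image_url  # fall back to the last candidate after max_retries

# Example usage:
# url = generate_checked_image("Top 3 Predictions for Enterprise Generative AI")
```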

I had unknowingly created what the industry would call an "agentic AI" workflow, and it significantly improved the quality of my images. Fortunately, the industry has caught up with this need, and we've seen a host of agentic AI solutions emerge to simplify the process of making multiple requests to generative AI models to accomplish complex tasks.

A number of leading agentic AI solutions and frameworks emerged in 2024.

All of these agentic AI solutions share the common trait of reducing the code required for multiple agents to work together toward a given objective. Most can be used to build agentic AI solutions for any use case; the exception is Salesforce Agentforce, a no-code framework scoped to building agents that help with customer service and sales.

As illustrated by my image generation example, there is value in agentic AI systems that let us tackle more complex use cases. My prediction for 2025 is that we can expect to see autonomous AI agents achieve positive business outcomes in the enterprise. I also predict that the vast majority of the business value from these agentic AI systems will come from a small subset of use cases relative to what the technology is capable of. Specifically, the most significant value we can expect from these systems next year will be largely confined to internal, low-risk, high-impact use cases, with software development the prime example.

With current agentic AI frameworks, the technology already exists to deploy agents that build full-stack software applications from a single user prompt. Commercial solutions for this aren't yet widely available, but they may well appear in 2025, because software development is the most obvious domain where agentic AI systems can deliver value: it is a complex problem that can be broken down into many clearly defined subtasks and, critically, does not necessarily require access to sensitive business data or exposure to customers.

Soon, we will be able to ask our agentic AI development system to build us a web app, and a swarm of specialized agents will mobilize and work together to get it done. For example, the user request might go to a project-manager agent that uses OpenAI's most costly, strategic-thinking o1 model to break the project into many subtasks, then farm each task out to agents using cheaper or open-source models responsible for the back end, front end, cloud build, testing, monitoring, and every other subtask needed to deliver a working application.
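A rough sketch of that planner-and-workers pattern might look like the following. This is a hypothetical illustration built on the OpenAI Python SDK; the model names, prompts, and helper functions are assumptions, not a reference to any particular commercial framework.

```python
# Sketch of a planner/worker pattern: an expensive "project manager" model decomposes
# the request, and cheaper worker models execute each subtask in sequence.
# Assumes the OpenAI Python SDK; model names and prompts are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def plan_subtasks(user_request: str) -> list[str]:
    """Project-manager agent: break the request into an ordered list of subtasks."""
    response = client.chat.completions.create(
        model="o1",  # placeholder for a strong planning/reasoning model
        messages=[{
            "role": "user",
            "content": ("Break this software project into a short, ordered list of subtasks. "
                        "Respond with a JSON array of strings only.\n\n"
                        f"Project: {user_request}"),
        }],
    )
    # Assumes the model returns a clean JSON array of subtask strings.
    return json.loads(response.choices[0].message.content)

def run_subtask(subtask: str, context: str) -> str:
    """Worker agent: complete one subtask with a cheaper model, given prior context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for a cheaper or open-source model
        messages=[{
            "role": "user",
            "content": f"Work completed so far:\n{context}\n\nComplete this subtask:\n{subtask}",
        }],
    )
    return response.choices[0].message.content

def build_app(user_request: str) -> str:
    """Daisy-chain the workers over the project manager's plan."""
    context = ""
    for subtask in plan_subtasks(user_request):
        context += f"\n\n## {subtask}\n{run_subtask(subtask, context)}"
    return context

# Example usage:
# artifacts = build_app("A web app that aggregates and summarizes daily AI news")
```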

Similarly, we could see agentic AI systems emerge for use cases like research, whose subtasks are less generalizable than software development's but still well within the imagination: one agent could be responsible for a rough draft, another for searching the web for supporting evidence, another for finding counter-arguments, another for style, and an editor agent could put it all together.

What I don’t expect in 2025 is agentic AI systems that replace many, if any, employees, provided they are willing to upskill. It seems likely we are entering an AI-native era in which human workers begin transitioning from primary to supervisory roles. We will let our AI agent children take on more household chores while we manage and monitor them. Gartner predicts that 80% of the engineering workforce will need to upskill by 2027.

Like small children, we won’t trust our AI agents to take on business-critical tasks. There isn’t enough adult supervision to limit our risk. Before this can happen, we will need more sophisticated AI risk management systems and new AI supervisory roles and processes.

So what does this all mean for the decision maker who wants to keep up with these changes and position their enterprise for generative AI success? Here’s a possible three-step approach that emphasizes starting small and learning:

  1. Identify a low-risk, high-impact use case for agentic AI with many clearly defined subtasks. For example, consider search engine optimization (SEO) of a marketing page on your website.

  2. Task an agentic AI SWAT team to create AI agents for each required subtask

  3. Monitor, manage, and report on every step of the process, including challenges, risks, and human involvement

By starting with this measured approach, you'll gain practical insights into agentic AI's potential while building the foundation for broader enterprise adoption in the AI-native era.
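To make step 3 concrete, here is a minimal sketch of what an agent audit trail could look like, using only the Python standard library. The field names and the SEO example are hypothetical; the point is simply to record every agent step, its output, and whether a human reviewed it.

```python
# Minimal sketch of an agent audit trail: record every step an agent takes,
# what it produced, and whether a human reviewed it. Field names are illustrative.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentStep:
    agent: str                    # which agent acted, e.g. "keyword-research"
    task: str                     # the subtask it was given
    output_summary: str           # short summary of what it produced
    human_reviewed: bool = False  # was a person in the loop for this step?
    risks_noted: str = ""         # challenges or risks observed along the way
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record(step: AgentStep, path: str = "agent_audit_log.jsonl") -> None:
    """Append the step to a log file that can feed later reporting."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(step)) + "\n")

# Example: logging one step of the SEO pilot described above.
record(AgentStep(
    agent="keyword-research",
    task="Identify target keywords for the marketing page",
    output_summary="Proposed 12 keywords ranked by estimated search volume",
    human_reviewed=True,
))
```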


New AI governance frameworks


While enterprises can technically build autonomous AI agents today, few have deployed them into production. The limiting factor isn't the technology - it's governance. As an AI Strategist, I've experienced this firsthand. We can build impressive AI agents, but we struggle with fundamental governance questions: What data should these agents access? What actions should they be permitted to take? How do we ensure they're using the most current information?

Let me share an anecdote that illustrates this challenge. I recently created an AI Vendor Assistant agent to answer questions about our technology partners' AI offerings.

Chat with AI Vendor Assistant

The agent is grounded in data from product documentation, such as PDFs about Microsoft Copilot, OpenAI ChatGPT Enterprise, and others. While the agent works well, it highlights a critical governance gap: product information quickly becomes stale. How do we ensure the agent always uses the most current data? When I upload updated product information with new pricing, how do we detect and remove outdated versions to prevent the agent from providing incorrect information?

These questions point to a significant gap in the current AI technology ecosystem. While we have mature data governance and access control systems for traditional applications, we need governance solutions that are purpose-built for AI agents. In 2025, we'll see the emergence of specialized AI agent governance platforms designed to help enterprises manage, monitor, and maintain the data their agents use.
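As a sketch of the kind of data-freshness control such a platform might provide, the hypothetical example below tracks a version and effective date for each vendor document, serves the agent only the newest version of each, and flags anything that looks stale. The field names and staleness threshold are assumptions for illustration.

```python
# Sketch of a freshness check for an agent's knowledge base: keep only the newest
# version of each vendor document and flag anything older than a staleness threshold.
# Field names and the threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VendorDoc:
    product: str          # e.g. "Microsoft Copilot"
    version: int          # increases with each upload
    effective_date: date  # when this pricing/spec information took effect
    content: str

def current_docs(docs: list[VendorDoc], max_age_days: int = 180) -> list[VendorDoc]:
    """Return the newest version per product and warn about stale documents."""
    latest: dict[str, VendorDoc] = {}
    for doc in docs:
        if doc.product not in latest or doc.version > latest[doc.product].version:
            latest[doc.product] = doc

    today = date.today()
    for doc in latest.values():
        if today - doc.effective_date > timedelta(days=max_age_days):
            print(f"WARNING: {doc.product} documentation may be stale "
                  f"(effective {doc.effective_date})")
    return list(latest.values())

# Example: superseded pricing is dropped before the agent ever grounds an answer in it.
docs = [
    VendorDoc("Microsoft Copilot", 1, date(2023, 11, 1), "Old pricing..."),
    VendorDoc("Microsoft Copilot", 2, date(2024, 9, 15), "Current pricing..."),
]
grounding = current_docs(docs)
```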

These new governance frameworks will likely need to address several critical capabilities, from keeping agent knowledge bases current and versioned to controlling which data each agent can access and which actions it is permitted to take.


Several market forces are driving the emergence of these governance solutions. First, as more enterprises move from AI agent experimentation to production deployment, they're discovering that traditional data governance tools aren't designed for the unique challenges of AI agents. These challenges include managing real-time access to large knowledge bases, handling natural language queries, and preserving context across interactions.

Second, regulatory pressure is increasing. The EU AI Act and recent US Executive Order on AI have highlighted the need for robust governance frameworks. This aligns with Gartner's prediction that by 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% faster rate of adoption. Enterprises need to demonstrate they have control over their AI systems, including the data these systems use to make decisions.

Third, the cost of getting it wrong is significant. Imagine an AI agent providing outdated pricing to customers, making decisions based on obsolete policies, or using superseded product specifications. The business impact could be substantial, from lost revenue to damaged customer relationships.

Early indicators of this trend are already visible. Major cloud providers are beginning to add AI governance features to their platforms, and established AI platform providers are expanding their governance capabilities - for example, Dataiku's AI Governance module provides centralized oversight of AI projects, model monitoring, and compliance tracking.

Dataiku Govern

Startups are emerging with specialized solutions for managing AI agent knowledge bases. And enterprises are creating new roles focused on AI governance and oversight.

For decision-makers planning their 2025 AI strategy, this trend has several implications:

  1. Evaluate potential AI agent governance solutions through the lens of your existing data governance framework. Look for solutions that complement rather than conflict with your current practices.

  2. Consider creating an AI agent governance pilot program. Start small with a limited number of agents and data sources to understand the governance challenges specific to your organization.

  3. Develop policies and procedures for managing AI agent knowledge bases. Even before specialized solutions emerge, having clear processes will help you maintain control over your AI agents.

  4. Invest in training for teams responsible for AI agent deployment and maintenance. They'll need to understand both the technical and governance aspects of managing AI agents.

The emergence of AI agent governance frameworks in 2025 will mark an important milestone in enterprise AI adoption. Just as we needed new governance tools when moving from on-premise to cloud computing, we now need specialized governance solutions for the age of autonomous AI agents. Organizations that prepare for this shift will be better positioned to deploy AI agents safely and effectively at scale.

This prediction connects directly with our earlier discussions about the proliferation of generative AI and the emergence of agentic AI. As these technologies become more widespread, the need for robust governance frameworks becomes increasingly critical. The enterprises that succeed with AI in 2025 will be those that find the right balance between innovation and control, empowering their AI agents while ensuring they operate within well-defined boundaries.

Enterprise AI Strategy: Preparing for 2025

As we look ahead to 2025, enterprise generative AI is poised for significant transformation. Our three key predictions – the ubiquity of generative AI alongside search, the emergence of limited but impactful AI agents, and the development of new AI governance frameworks – paint a picture of an evolving technology landscape that enterprise leaders must navigate strategically.

The integration of generative AI into everyday enterprise applications signals a fundamental shift in how we interact with and extract value from our data. Just as search bars became ubiquitous in the 2000s, generative AI will become a standard feature across enterprise software, transforming how employees access and utilize information. However, this widespread adoption brings new challenges that enterprises must address.

The rise of autonomous AI agents represents both an opportunity and a challenge. While these agents will deliver value in specific use cases like software development and research, their successful deployment depends on careful orchestration and human oversight. Enterprises that focus on low-risk, high-impact use cases while developing their workforce's AI supervision skills will be best positioned to capitalize on this technology.

Perhaps most critically, the emergence of specialized AI governance frameworks will enable enterprises to deploy AI agents safely and effectively at scale. As organizations grapple with data freshness, compliance, and risk management, these new governance solutions will become essential infrastructure for the AI-enabled enterprise.

For decision-makers planning their 2025 enterprise AI strategy, success will depend on:

  • Identifying where embedded generative AI can deliver the most immediate value

  • Developing clear frameworks for AI agent deployment and supervision

  • Investing in AI governance infrastructure and practices

  • Upskilling teams to work effectively with AI systems

  • Maintaining a balance between innovation and control

The enterprises that thrive in 2025 won't necessarily be those with the most advanced AI technology but those that implement these technologies thoughtfully and responsibly, with robust governance frameworks ensuring their AI systems remain reliable, current, and trustworthy.

As we navigate this transformative period in enterprise technology, one thing is clear: To truly 'skate to where the puck is going to be,' enterprise leaders must anticipate and prepare for the changes ahead. The future of enterprise AI isn't just about building powerful models – it's about creating the infrastructure, governance, and human capabilities needed to deploy these technologies effectively at scale. Organizations that begin preparing for these changes today will be best positioned to leverage enterprise generative AI for competitive advantage in 2025 and beyond.
