

Remember the early days of generative AI? It felt like every other headline screamed about a new frontier, a revolution that would redefine industries overnight. Fast forward three years post-ChatGPT, and the narrative has shifted. Now we’re hearing whispers, and sometimes shouts, of a “bubble,” with many questioning whether generative AI is truly delivering material returns beyond a select few tech giants. It’s a fair question, especially when reports like the MIT NANDA study find that a staggering 95% of AI pilots fail to scale or deliver clear, measurable ROI. McKinsey has echoed the sentiment, suggesting that the truly transformative benefits may lie in “agentic AI.” Then, perhaps most tellingly, AI leaders at a Wall Street Journal summit advised CIOs to stop obsessing over AI’s ROI, arguing that measuring it is difficult and that the measurements are likely to be wrong anyway.

This puts technology leaders in a rather tricky spot. Their existing tech stacks are robust, tried-and-true workhorses that keep the business humming. Introducing new technology isn’t just about potential upside; it’s about navigating risk, avoiding destabilization, and ensuring continuity. A cheaper, shinier new component isn’t worth much if it jeopardizes your disaster recovery strategy or risks enterprise data during a transition. So, with this landscape of hype and skepticism, how do enterprises genuinely find a return on their AI investments?

Your Data: The Untapped Goldmine

Most conversations about AI data revolve around the engineering intricacies of ensuring models infer correctly from clean, relevant business repositories. And that’s crucial, of course. But there’s a more immediate and widely deployed use case in enterprise AI that often goes under-appreciated: simply uploading file attachments to prompt an AI model.

This deceptively simple act dramatically narrows the model’s focus, speeding up accurate responses and cutting the number of prompts needed to reach the best answer. The catch, however, is that you’re sending proprietary business data into an AI model. That isn’t a minor detail; it brings two vital considerations to the forefront, alongside data preparation itself.

First, you need to govern your system for appropriate confidentiality. This means robust controls, clear policies, and a deep understanding of where your data goes and how it’s handled. Second, and perhaps more innovatively, you need to develop a deliberate negotiation strategy with model vendors. Why? Because these vendors, in their race to advance frontier models, desperately need access to high-value, non-public data—your business’s data. Recent mega-deals between Anthropic, OpenAI, and various enterprise data platforms aren’t just about money; they’re about data access.
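
To make the first consideration concrete, here’s a minimal sketch in Python of what that governance gate can look like: check a document’s classification before it ever leaves your environment, then inline it to narrow the model’s focus. The label scheme, the sidecar lookup, and the prompt format are illustrative assumptions, not a prescription.

    from pathlib import Path

    # Hypothetical classification labels; a real deployment would pull these from
    # a data-governance catalog rather than hard-coding them.
    BLOCKED_LABELS = {"restricted", "trade-secret"}

    def classification_of(path: Path) -> str:
        # Placeholder lookup: assume a sidecar ".label" file records the document's
        # classification; anything unlabeled is treated as unclassified.
        sidecar = path.with_suffix(path.suffix + ".label")
        return sidecar.read_text().strip().lower() if sidecar.exists() else "unclassified"

    def build_prompt(question: str, attachment: Path) -> str:
        # Refuse to send documents whose label forbids external sharing; otherwise
        # narrow the model's focus by inlining the document as context.
        label = classification_of(attachment)
        if label in BLOCKED_LABELS:
            raise PermissionError(f"{attachment.name} is labeled '{label}'; keep it in-house.")
        document_text = attachment.read_text(errors="ignore")
        return (
            "Answer using only the attached document.\n\n"
            f"--- BEGIN {attachment.name} ---\n{document_text}\n--- END ---\n\n"
            f"Question: {question}"
        )

    # Example: prompt = build_prompt("Summarize Q3 warranty claims by region.",
    #                                Path("claims_q3.txt"))
    # The resulting prompt then goes to whichever endpoint your policy permits.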

Most enterprises would instinctively prioritize confidentiality, and rightly so. Maintaining trade secrets is paramount. But from an economic perspective, especially given the per-call cost of model APIs, selectively exchanging access to your data for services, price offsets, or co-development opportunities might be a genius move. Rather than treating a model purchase as a standard supplier procurement exercise, think about the mutual benefits: you could help advance your supplier’s model while simultaneously accelerating your business’s adoption and realization of value.

“Boring by Design” AI Delivers Consistent Wins

If you’ve been following the generative AI space, you’ll know it’s a whirlwind. According to Information is Beautiful, 2024 alone saw 182 new generative AI models hit the market. And the churn is real. When GPT-5 arrived, many models from the previous 12-24 months were rendered unavailable, leading to panic among subscription customers whose stable AI workflows suddenly broke. Their tech providers, perhaps assuming customers would be thrilled by the latest and greatest, underestimated the premium businesses place on stability.

This is where enterprise AI fundamentally diverges from, say, consumer gaming. Gamers are delighted to constantly upgrade their rigs for new titles; businesses, on the other hand, cannot sustain swapping out core tech stack components three times a week. Back-office operations, by their very nature, are designed to be “boring”—reliable, predictable, and utterly stable.

The most successful AI deployments aren’t chasing the bleeding edge; they’re focused on solving specific, often mundane, business problems unique to their organization. These are tasks that run quietly in the background, accelerating or augmenting mandated processes. Think about relieving legal or expense-audit teams of manually cross-referencing countless reports, while still ensuring a human retains final decision-making authority. This blend leverages the best of both worlds: AI for the tedious heavy lifting, humans for judgment and accountability.
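
Here is a minimal sketch of that division of labor, with a toy policy rule standing in for the model’s cross-referencing and a console prompt standing in for the audit team’s review queue (both are assumptions for illustration):

    from dataclasses import dataclass

    @dataclass
    class Flag:
        report_id: str
        reason: str

    def ai_prescreen(reports: list[dict]) -> list[Flag]:
        # Stand-in for the model step: cross-reference each report against policy
        # and flag anomalies. A deployed system would call your model here.
        return [Flag(r["id"], "amount exceeds policy limit")
                for r in reports if r["amount"] > r["policy_limit"]]

    def human_review(flags: list[Flag]) -> list[Flag]:
        # The auditor, not the model, makes the final call on every flagged item.
        rejected = []
        for flag in flags:
            decision = input(f"Reject {flag.report_id} ({flag.reason})? [y/N] ")
            if decision.strip().lower() == "y":
                rejected.append(flag)
        return rejected

    # reports = [{"id": "EXP-1042", "amount": 1800, "policy_limit": 500}]
    # rejections = human_review(ai_prescreen(reports))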

Crucially, these kinds of valuable tasks rarely require constant updates to the latest model to deliver their benefits. This also highlights the power of abstracting your business workflows from direct model APIs. By doing so, you gain long-term stability and maintain the flexibility to update or upgrade the underlying AI engines at the pace that makes sense for your business, not the pace dictated by model developers.
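
One way to picture that abstraction, as a sketch rather than a prescription: write workflows against a thin internal interface so the underlying engine can be swapped or pinned on your own schedule. The vendor adapter and model name below are illustrative placeholders.

    from typing import Protocol

    class TextModel(Protocol):
        # The only surface area business workflows are allowed to depend on.
        def complete(self, prompt: str) -> str: ...

    class PinnedVendorModel:
        # Adapter for one vendor's API; the actual SDK call is omitted here, and
        # the pinned model name is a made-up example.
        def __init__(self, model_name: str = "vendor-stable-2024"):
            self.model_name = model_name
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("call the vendor SDK here, pinned to self.model_name")

    class CannedModel:
        # A local stub, handy for testing workflows with no vendor dependency at all.
        def complete(self, prompt: str) -> str:
            return "No out-of-policy line items found."

    def expense_audit_summary(model: TextModel, report_text: str) -> str:
        # Business workflow written against the abstraction, not a vendor API,
        # so upgrading the engine is a wiring change rather than a rewrite.
        return model.complete(f"Flag any out-of-policy line items:\n{report_text}")

    # print(expense_audit_summary(CannedModel(), "Team dinner: $1,250"))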

Minivan Economics: Practicality Over Flash

One of the easiest ways to end up with upside-down AI economics is to design systems around vendor specs and benchmarks rather than the actual needs and consumption patterns of your users. We’ve all seen it: businesses falling into the trap of buying new gear or cloud services based on impressive, supplier-led benchmarks, rather than starting the AI journey from what the business can realistically consume, at what pace, and on the capabilities already deployed.

Think of it this way: a Ferrari is a magnificent machine, a marvel of engineering, and its marketing is incredibly effective. But it drives at the same speed through school zones as a minivan, and it certainly lacks the trunk space for a week’s worth of groceries. In enterprise AI, the “Ferrari” approach often means layering on costs with every remote server and model a user touches. Smart organizations design for frugality, reconfiguring workflows to minimize spending on third-party services.
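
One small, concrete example of that frugality, sketched here with a stand-in for whatever paid endpoint you actually use: memoize repeated prompts locally so identical questions never pay for a second remote call.

    import hashlib

    def call_remote_model(prompt: str) -> str:
        # Stand-in for a paid, per-call remote model API (assumption for illustration).
        return f"[answer for prompt {hashlib.sha256(prompt.encode()).hexdigest()[:8]}]"

    _cache: dict[str, str] = {}

    def answer(prompt: str) -> str:
        # Serve repeated, identical questions from a local cache so that only
        # novel prompts incur third-party spend.
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in _cache:
            _cache[key] = call_remote_model(prompt)
        return _cache[key]

    # Ten identical support questions now cost one remote call instead of ten.
    for _ in range(10):
        answer("What is our refund window for annual plans?")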

We’ve seen too many companies implement customer support AI workflows that end up adding millions of dollars to operational run rate costs, often requiring even more development time and expense just to achieve some semblance of OpEx predictability. Meanwhile, companies that designed their systems to operate at a human reading pace—less than 50 tokens per second—have successfully deployed scaled-out AI applications with minimal additional overhead and a far clearer path to ROI.
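
The arithmetic behind that reading-pace point is worth spelling out. A back-of-the-envelope sketch, with the aggregate capacity figure as a stated assumption:

    # Back-of-the-envelope sizing: how many concurrent sessions a fixed serving
    # budget supports when output is paced to readers instead of to benchmarks.
    # The aggregate capacity figure below is an assumption chosen for illustration.
    aggregate_capacity_tps = 5_000      # total tokens/second the deployment can sustain
    reading_pace_tps = 50               # per-session pace matched to human reading speed
    benchmark_pace_tps = 500            # per-session pace a vendor benchmark might showcase

    sessions_at_reading_pace = aggregate_capacity_tps // reading_pace_tps      # 100
    sessions_at_benchmark_pace = aggregate_capacity_tps // benchmark_pace_tps  # 10

    print(f"Reading pace:   {sessions_at_reading_pace} concurrent sessions")
    print(f"Benchmark pace: {sessions_at_benchmark_pace} concurrent sessions")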

There are so many facets to this new automation technology, and it’s easy to get lost in the noise. But the best guidance remains steadfast: start practical, design for independence among your underlying technology components so stable, long-term applications aren’t disrupted, and shrewdly leverage the fact that your business data is a strategic asset, even in advancing your tech suppliers’ goals.

Charting a Realistic Course for AI ROI

The initial euphoria around generative AI has certainly matured, giving way to a more pragmatic, sometimes skeptical, assessment. But this isn’t a sign of failure; it’s a necessary evolution. The path to realizing significant returns on AI investments isn’t about chasing every shiny new model or benchmarking against vendor promises. Instead, it’s about a clear-eyed focus on your unique data as a strategic asset, embracing the power of “boring by design” solutions for core business problems, and adopting a “minivan economics” mindset that prioritizes practical, cost-effective deployments aligned with actual business needs. By grounding your AI strategy in these principles, technology leaders can move beyond the hype and find genuine, measurable value that truly transforms their operations, rather than just adding to their overhead.
