Top 3 Delusions Destroying Generative AI Implementations
My Observations from Meeting 50+ Companies
Over the past few months, I’ve met with generative AI early adopters from more than 50 companies — startups, mid-market, and enterprise.
There’s a lot of energy behind gen AI, especially at the leadership level.
Many see generative AI as a transformative tool to automate workflows, improve processes, and boost productivity.
However, this enthusiasm leads to misconceptions, sometimes worsened by vendors overpromising and under-delivering.
Here are the top 3 delusions that derail generative AI implementations.
Delusion #1: Gen AI Can Handle All Complex Tasks
Many believe that gen AI can handle complex problem-solving and decision-making tasks without human intervention or monitoring. This overestimation often leads to poor outcomes and disappointment.
Is it reasonable to trust gen AI with complex problem-solving and mission-critical tasks?
Not yet.
Consider these scenarios:
- Expecting gen AI to write and understand intricate legal agreements, which involve legal nuances, precedents, and risk assessment.
- Relying on gen AI for nuanced customer interactions requiring emotional intelligence, e.g. high-value customers.
- Using gen AI to manage context-specific tasks that require deep domain expertise.
It’s tempting to believe gen AI will solve your business’s hardest problems.
In some cases, it can significantly help.
But recognize its limitations and leverage its strengths. For instance, use AI to quickly draft legal agreements, then have a human lawyer finalize them.
A real-world example: A company implemented AI for customer service but soon realized it couldn't handle complex queries, leading to frustrated customers and a damaged reputation. They shifted to using AI for basic inquiries and routing complex issues to human agents, improving satisfaction.
I suggest balancing AI with human oversight to ensure quality and accuracy.
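To make that concrete, here’s a minimal sketch of the draft-then-review pattern: the model’s reply is auto-sent only when it clears a confidence bar and avoids sensitive topics; everything else lands in a human review queue. The names, keywords, and threshold are illustrative assumptions, not any vendor’s API.

```python
# Minimal sketch of a human-in-the-loop pattern: the model drafts a
# reply, and sensitive or low-confidence cases are queued for a human
# agent instead of being sent automatically. All names and thresholds
# here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Draft:
    customer_id: str
    text: str
    confidence: float  # e.g. from a separate quality-scoring model

SENSITIVE_KEYWORDS = {"refund", "cancel", "legal", "complaint"}
CONFIDENCE_THRESHOLD = 0.85

def needs_human_review(draft: Draft) -> bool:
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return True
    lowered = draft.text.lower()
    return any(word in lowered for word in SENSITIVE_KEYWORDS)

def send_to_customer(draft: Draft) -> None:
    print(f"Sent to {draft.customer_id}: {draft.text[:40]}...")

def handle(draft: Draft, review_queue: list[Draft]) -> None:
    if needs_human_review(draft):
        review_queue.append(draft)   # a human approves or edits before sending
    else:
        send_to_customer(draft)      # auto-send only the safe, confident cases

queue: list[Draft] = []
handle(Draft("c-102", "Your order shipped yesterday.", 0.95), queue)
handle(Draft("c-103", "We can process your refund today.", 0.97), queue)
print(f"{len(queue)} draft(s) awaiting human review")
```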
Even highly publicized examples of gen AI “replacing” hundreds of customer service agents are a bit misleading.
Octopus Energy CEO shared:
AI is now doing 250 employees’ worth of customer support work and netting better average results than those attained by humans.
The artificial intelligence’s output scored 80% customer satisfaction compared to trained workers’ 65% results.
But the CEO clarified that their service team still monitors every AI response…
[the team] “supervises the answers AI provides, so, for example, drafting a personalized response that a team member can review and then send on.”
In most use cases, AI should augment human capabilities, not replace them.
I also recommend thinking early about LLM operations: ensuring safe, reliable, high-quality gen AI outputs at scale.
Because of the probabilistic nature of gen AI, the same prompt is not guaranteed to return the same result.
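For example, here’s a minimal sketch of what that looks like in practice, assuming the OpenAI Python SDK and an illustrative model name: two calls with the same prompt can come back different, and while lowering temperature and pinning a seed reduces run-to-run variation, it doesn’t guarantee identical outputs.

```python
# Minimal sketch: the same prompt can yield different outputs run-to-run.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float = 1.0, seed: int | None = None) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,      # higher = more randomness
        seed=seed,                    # best-effort reproducibility, not a guarantee
    )
    return response.choices[0].message.content

prompt = "Summarize our Q3 refund policy in one sentence."

# Two calls with default settings often differ:
print(ask(prompt))
print(ask(prompt))

# Lowering temperature and pinning a seed reduces (but does not
# eliminate) variation between runs:
print(ask(prompt, temperature=0, seed=42))
```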
Understanding gen AI's limitations helps you use it more effectively.
Delusion #2: We Don’t Need Robust Governance or QA
Some organizations neglect generative AI governance and QA, leading to risks like bias, hallucinations, privacy issues, and security concerns.
But governance is non-negotiable.
Especially in enterprise.
Given the enthusiasm for gen AI, discussions about risk management, governance, and quality tend to take a back seat.
But this oversight can lead to significant setbacks and challenges.
I recommend thinking of generative AI as any other software implementation, requiring governance, controls, security, continuous risk assessments, etc.
Juan Perez, EVP and CIO at Salesforce, “…views AI as just another application requiring the appropriate governance, security controls, maintenance and support, and lifecycle management.” (CIO.com)
Neal Sample, CIO of Walgreens Boots Alliance, “…notes that both government regulation and corporate governance will be necessary to realize responsible AI development.” (CIO.com)
Specific to generative AI, I recommend thinking about these aspects early on:
- Accountability: Ensure responsibility for AI actions and outcomes. AI must be transparent, explainable, and auditable.
- Privacy: Protect personal data and ensure AI respects users' privacy rights and preferences, especially with customer-facing gen AI applications where users provide free-form text input.
- Bias: Prevent or mitigate bias and discrimination. Ensure fairness, inclusivity, and diversity. This is critical to avoid reputation damage.
- Safety: Ensure AI reliability, robustness, and security. Prevent and handle errors, failures, or attacks.
LLM operations helps you address these concerns, providing tools to monitor gen AI outputs at scale and continuously check for quality, bias, and safety issues.
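As a rough illustration, here’s a minimal sketch of the kind of automated output check an LLM operations pipeline might run on every response before it reaches a user. The patterns and thresholds are placeholder assumptions; a real system would use far more robust classifiers.

```python
# Minimal sketch of an output check you might run on every gen AI
# response before it reaches a user. The checks and thresholds here
# are illustrative placeholders, not a complete safety system.
import re
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    passed: bool
    issues: list[str] = field(default_factory=list)

BLOCKED_TERMS = {"ssn", "credit card"}             # placeholder denylist
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude PII pattern

def check_output(text: str, max_chars: int = 2000) -> CheckResult:
    issues = []
    if len(text) > max_chars:
        issues.append("response too long")
    if EMAIL_RE.search(text):
        issues.append("possible PII (email address) in output")
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            issues.append(f"blocked term: {term}")
    return CheckResult(passed=not issues, issues=issues)

result = check_output("Contact jane.doe@example.com for your refund.")
if not result.passed:
    # In production you might log this, route it to human review,
    # or regenerate the response instead of sending it.
    print("Flagged:", result.issues)
```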
Reframe your perception: generative AI projects should not get a free pass to ignore governance.
Ahmed Zaidi, CEO at Accelirate, recommends assembling a Governance Board for generative AI projects:
Effective governance reassures both employees and customers, while building confidence and trust in generative AI implementations.
Delusion #3: Biggest Model is Best
I’ve lost count of how many people I’ve met who were shocked by the high costs of plugging in gen AI APIs, especially for image and video generation.
In many enterprise use cases, accuracy, speed, and reliability of gen AI outputs are all critical.
Yes, larger models often perform better in these areas, thanks to more parameters and broader training data.
But the larger the AI model, the more it costs.
It’s key to use the right model for the task. If you don’t need the Latest Biggest Model Ever, then use a smaller one:
Your enterprise model doesn’t need to know the words to every Taylor Swift song to generate a summary report on next quarter’s sales goals by region. Context is king and you need to be selective in just how much IQ a model requires for your use case.
I recommend balancing performance against cost considerations, and having a framework to evaluate the trade-offs between model size and output quality.
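One simple version of such a framework is a routing rule: send routine tasks to a small, cheap model and reserve the large one for work that needs deep reasoning. The sketch below is illustrative; the model names, prices, and routing criteria are assumptions you’d replace with your own evaluation data.

```python
# Minimal sketch of a model-routing heuristic: simple tasks go to a
# small, cheap model; hard ones go to the large model. The model
# names, prices, and routing rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative pricing

SMALL = ModelOption("small-model", 0.0002)
LARGE = ModelOption("large-model", 0.01)

SIMPLE_TASKS = {"summarize", "classify", "extract"}

def route(task_type: str, needs_deep_reasoning: bool) -> ModelOption:
    """Pick the cheapest model that plausibly meets the task's needs."""
    if task_type in SIMPLE_TASKS and not needs_deep_reasoning:
        return SMALL
    return LARGE

def estimated_cost(model: ModelOption, tokens: int) -> float:
    return model.cost_per_1k_tokens * tokens / 1000

model = route("summarize", needs_deep_reasoning=False)
print(model.name, f"${estimated_cost(model, 50_000):.2f} per 50k tokens")
```

In practice, you’d validate a rule like this against an evaluation set, since the right cutoff depends on your accuracy requirements.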
Not to sound like a broken record…
But again, this is where LLM operations comes in!
By avoiding the delusion that bigger is always better, you can deploy gen AI more strategically to solve specific needs, without unnecessary costs.
Last Thoughts
Obviously I’m bullish on generative AI, especially agents and agentic teams.
But implementations can easily go wrong — if you start them with overhyped expectations and ignore your “rational” brain.
Avoid the delusions that generative AI can handle all complex tasks accurately and autonomously, that it doesn't need governance or quality control because it’s “cool”, and that bigger is better.
I suggest understanding its limitations for your use cases, adding a layer of human oversight, implementing governance and quality monitoring via LLM operations platforms, and selecting the right model size for each task.
No doubt, generative AI will transform work as we know it!