I predicted on the SuperDataScience podcast that this would be the year of the AI agent. From a marketing standpoint, I was right. From a practical use standpoint? I’m not so sure.
Every major enterprise company has jumped on the agent bandwagon. Salesforce, SAP, ServiceNow, BI tools like Qlik and Power BI, and of course the cloud providers Google, Microsoft, and AWS all have their own agent platforms. The hype is deafening. But my question is: where is all the work these agents are supposedly doing?
My expectation was that we'd have AI capable of performing a job, or at least the core tasks of a job. What I've seen so far is a lot of agents that can barely handle a fraction of a single task.
I think there's a fundamental misconception about what an agent is. The standard definition, a software program that acts autonomously to achieve goals by perceiving its environment and making decisions, sounds impressive, but it doesn't mean much in practice.
Here’s a definition that makes more sense for 2025:
an agent is an LLM or a multi-modal LLM that uses tools.
This clarifies what we’re actually seeing. Companies are plugging these models into a front-end interface and tailoring them for their specific platforms. But the crucial step is missing: we have yet to properly tailor and train these agents for a specific job within the context of our own environments. That’s the sweet spot.
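To make that definition concrete, here is a minimal sketch of the loop most of these platforms wrap in a front end: a model that either answers directly or asks for a tool, and a harness that runs the tool and feeds the result back. This is an illustration, not any vendor's actual API; the `call_llm` stub, the `get_order_status` tool, and "Acme Co." are all hypothetical placeholders.

```python
import json

def get_order_status(order_id: str) -> str:
    """Hypothetical business tool the agent is allowed to call."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

TOOLS = {"get_order_status": get_order_status}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real chat-completion call. A real model decides whether
    to answer directly or request a tool; this stub fakes that decision."""
    if messages[-1]["role"] == "tool":
        return {"content": f"Here's what I found: {messages[-1]['content']}"}
    return {"tool": "get_order_status", "arguments": {"order_id": "A-1001"}}

def run_agent(user_request: str) -> str:
    messages = [
        {"role": "system", "content": "You are a support agent for Acme Co. Use tools when needed."},
        {"role": "user", "content": user_request},
    ]
    for _ in range(5):  # cap iterations so a confused model can't loop forever
        reply = call_llm(messages)
        if "tool" in reply:  # model asked for a tool: execute it, feed the result back
            result = TOOLS[reply["tool"]](**reply["arguments"])
            messages.append({"role": "tool", "content": result})
            continue
        return reply["content"]  # model answered directly: done
    return "Escalation: agent could not finish within the step limit."

print(run_agent("Where is order A-1001?"))
```

Notice that even in a toy loop you have to decide which tools the agent may touch, how many steps it gets, and what happens when it runs out, which is exactly the onboarding work most deployments skip.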
Think about it like hiring a new employee. Any manager knows that even an all-star candidate won't deliver incredible results on their second day. They need to be onboarded. They need to go through security protocols, data privacy training, and get access to the right tools and environments. Beyond that, they need to learn your business: who’s who, what’s in motion, and what the priorities are. It takes time for a person to become effective, a minimum of 30 days for an awesome hire, and more like 60 to 90 days for most people to get fully ramped up.
Agents are no different. People may not want to anthropomorphize them, but we get much better results when we do. We aren't thinking enough about the workflows agents need in order to do meaningful work. What is the full job description we expect them to fulfill? What supervision do they have?
It’s no surprise that
Gartner predicts over 40% of agentic AI projects will be canceled by the end of 2027.
Why? Because it’s not magic. There is still a lot of training involved to make an agent useful in your environment, just like onboarding an employee and giving them some training wheels before you hand over the keys to the kingdom.
We have to do the same with agents, yet most people I see are not thinking about how to set up the proper workflows. What are the security protocols? What is the chain of command for when these agents break or fail? What supervision is in place?
So, does this mean agents are a wash? No, not at all. But we have to update our thinking on how we implement them, how we test them, and, most importantly, how we handle the edge cases they inevitably will not be able to manage at first.
We’re already seeing this with prompting, where “context is king.” The same is true for agents. The old, tried-and-true principle still holds: you have to train it on your business and your data, and you have to be very specific about what you want the technology to do and how you want it to be used.
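As a rough illustration of what "be very specific" looks like in practice, compare a generic instruction with one grounded in your own business context. Everything below is an invented placeholder (the company, the tool names, the dollar threshold), not a recommended template.

```python
# Two system prompts for the same agent. The generic one leaves the model guessing;
# the specific one encodes business context, scope, data sources, and escalation rules.
# All details are hypothetical, for illustration only.

GENERIC_PROMPT = "You are a helpful assistant. Answer customer questions."

SPECIFIC_PROMPT = """You are a tier-1 support agent for Acme Co.'s B2B invoicing product.
Scope: billing, invoice status, and payment-term questions only.
Data sources: use the get_invoice and get_customer_terms tools; never guess amounts.
Policy: do not promise refunds; anything over $500 goes to a human reviewer.
Escalation: if you are unsure, reply with 'ESCALATE' and a one-line summary."""
```

The second prompt is longer because it carries the context a new hire would get during onboarding: who the customer is, what the agent is allowed to do, and when to hand off to a human.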
What’s Next
I’m taking a break from travel this summer as I have a few big personal projects I’m working on. Stay tuned for:
1️⃣ My First Book · The AI Orchestrator. A story-driven, hands-on guide that shows knowledge workers how to lead, create, and thrive alongside intelligent machines. The book launches in late 2025.
2️⃣ TEDx Talk · August 2025. I will share fresh insights on human-AI collaboration and the future of work.
3️⃣ New Podcast · Sadie’s Lab. Weekly deep-dive conversations at the intersection of AI, consciousness, and what it means to be human. The show launches in Q4 2025.