FOMO Makes Fools Of Us All.
How Companies & People Get Tangled Up In Hype and Miss The Basics That Matter
Image by Hans from Pixabay
“Any fool can use a computer. Many do.”
— Ted Nelson
Six weeks ago I saw a perfectly reasonable, intelligent CTO demonstrate to a room of execs their “game-changing” AI customer service bot.
The demo looked decent. The metrics were impressive. The PowerPoint was flawless. But I wasn’t convinced. So I called their actual customer service line.
Twenty minutes of bouncing back and forth between an incompetent chatbot that didn’t understand my simple billing question and three different human agents, none of whom could confirm what the bot had told me.
And here’s what really winds me up: the CTO wasn’t embarrassed. Not even slightly. When I mentioned my experience afterwards, he just shrugged and said, “Well, the AI is still learning.”
Still learning.
This is the same executive mindset that gave us open-plan offices because “collaboration,” hot-desking because “agility,” and mandatory team-building exercises because “synergy.” Now it’s AI chatbots because “innovation.”
I’ve sat through enough of these presentations to recognize the pattern. It always starts the same way: some consultant with a $200 haircut clicks to slide three and announces that “AI will revolutionize customer engagement whilst reducing operational costs by 40%.”
The executives nod along like dashboard dogs because nobody wants to be the one asking stupid questions about robots. Meanwhile, the actual customer service manager, Mrs. Henderson, isn’t even in the room.
So they deploy the AI.
It fails spectacularly.
Customers revolt.
Staff panic.
Mrs. Henderson starts laughing!
And instead of admitting they’ve bought a very expensive way to annoy people, they double down with “optimization” and “fine-tuning” and my personal favorite: “We need to manage the customer journey more effectively.”
This means, “The robot is rubbish, but we’ve spent too much money to admit it, so let’s blame the customers for not adapting to our terrible system.”
That CTO had just joined the ranks of companies discovering what 42% of businesses learned the hard way in 2025: they’re scrapping most of their AI initiatives, up from just 17% the year before.
And honestly, this was predictable from the moment they started automating processes they didn’t understand in the first place.
The Klarna Reality Check
Klarna became the poster child for this exact problem. After spending a year boasting about replacing 700 customer service agents with AI and watching their workforce shrink by 40%, CEO Sebastian Siemiatkowski recently told Bloomberg they’re hiring humans again because “what you end up having is lower quality” service.
No shit, Sebastian.
Here’s how it went: Klarna offered atrocious customer service to begin with. They thought AI would fix it. Instead, they simply turbocharged bad customer service. Some customers demanded a better AI, one that wasn’t so limited it sabotaged the quality of service.
The company that offered to be OpenAI’s “favourite guinea pig” ended up proving you can’t polish a turd with machine learning.
But Klarna is not the only company learning this costly lesson.
The Real Numbers Behind the Hype
The average organization scraps 46% of AI proof-of-concepts before they ever reach production. Meanwhile, somewhere between 70–85% of AI projects fail to meet their expected outcomes. That’s twice the failure rate of standard IT projects.
Only 1 in 4 AI projects delivers the return on investment they promised.
Yet 64% of CEOs admit they invest in technologies before they understand the value, because they’re terrified of falling behind.
I know, I know.
FOMO makes fools of us all.
The Four Deadly Small Business Sins
First mistake - Automating broken workflows.
You can’t optimize what you haven’t figured out how to do well manually.
If your customer service sucked before AI, congratulations — now it sucks at scale with better dashboards.
Second mistake - Skipping the boring stuff.
No one gets promoted for mapping workflows or identifying failure points. But the companies that do well with AI begin there, which is why they’re better at spotting and deploying the use cases that actually pay off: positive outcomes for less risk.
Third mistake - Letting the wrong people drive the bus.
I’ve been in far too many AI strategy meetings where the loudest voice in the room belonged to someone whose deepest technical experience was using ChatGPT to write their LinkedIn posts. Healthier companies make it easy for people to work across departments, so the right people end up in the room solving problems together.
Fourth mistake - Blowing up everyone’s job description.
AI implementation creates role confusion that most companies ignore entirely. Suddenly, your customer service rep doesn’t know if they’re supposed to override the AI, train it, or just clean up its mess. Your sales team can’t figure out if AI is their assistant or their replacement. Middle managers panic because half their job (monitoring and reporting) just got automated.
The companies that nail AI spend as much time redesigning roles as they do deploying technology. They’re crystal clear about who does what, when AI steps in, and how humans stay in control.
Without this clarity, you get organizational chaos. People either resist the AI entirely or rely on it for decisions they should be making themselves.
The Problem with Josh
The mindset isn’t limited to boardrooms. It’s creeping into how we handle our own lives, too. The same magical thinking that makes executives believe AI will fix their broken customer service is making people believe productivity apps will fix their broken habits.
I watched an old colleague of mine, Josh, spend a few hours last month setting up a complicated meditation app with guided sessions, progress tracking, and streak counters. He was so excited about his "systematic approach to mindfulness."
Two weeks later, he was back to scrolling TikTok instead of dealing with the stress that was eating him alive.
Josh's problem isn't that he needs a better app. His problem is that he's trying to meditate away the fact that he hates his job, never calls his mum, and has been putting off that difficult conversation with his girlfriend for six months.
You can't optimize what you haven't figured out how to do well manually. If you can't sit still for five minutes without a phone, no app is going to teach you mindfulness. If your relationships are held together with avoidance and crossed fingers, a scheduling app won't fix your intimacy issues.
There’s no firewall between your personal life and your professional one.
You can't compartmentalize dysfunction, because your brain doesn't have separate operating systems for home and work. The same neural pathways that make you avoid looking at your bank statement make you avoid looking at your department's real performance metrics.
The same perfectionism that makes you procrastinate on personal projects makes you over-engineer business solutions. The same fear of conflict that keeps you from setting boundaries with your family keeps you from setting boundaries with unrealistic stakeholders.
That CTO who shrugged off the failed chatbot with "it's still learning" has probably been saying the same thing about some dysfunctional pattern in his marriage for the past five years. Both situations require him to admit he deployed a solution before he understood the problem, and that level of self-awareness is exactly the skill he's been avoiding developing in every area of his life.
The personal development industry has become the AI consulting of self-help. "This one weird trick will revolutionize your morning routine whilst reducing anxiety by 40%."
Same consultant, different haircut, same bullshit promise.
The magic fix mentality is everywhere.
Therapy apps instead of actual therapy.
Fitness trackers instead of addressing why we stopped moving our bodies.
Budgeting software instead of confronting why we're so terrified of looking at our bank statements.
And when these tools inevitably fail to transform our lives overnight, we don't blame the broken foundation; we blame ourselves for not using them properly.
"I'm just not disciplined enough."
"I need to find the right system."
"Maybe I should try that new app everyone's talking about."
No. Maybe you should own your shit.
When AI Actually Works
Here’s what the companies getting it right know: artificial intelligence isn’t magic, it’s really good pattern recognition. And pattern recognition only pays off when you know which patterns you’re looking for.
Lumen cut sales prep time from four hours to 15 minutes, a change it valued at $50 million a year. But look at what they didn’t do. They didn’t reinvent sales. They just ran a sound sales process faster.
Companies succeeding with AI have four things in common: executive sponsorship, mature partnership networks, cross-departmental collaboration, and a focus on practical implementation over theoretical possibilities.
The History We Keep Ignoring
Every transformative technology follows the same pattern. Back in the 1980s, when factories first automated, they learned the hard way that machines can only do what humans know how to do well. If you put a robot on a messy assembly line with no clear instructions, guess what? It makes a mess faster and at scale.
In the 1990s, companies spent millions on websites that were basically digital brochures because they didn’t understand what the internet was for.
In the 2000s, they built elaborate CRM systems that nobody used because they automated sales processes that were already broken.
The 1840s railway mania gave us hundreds of railway companies going bankrupt laying tracks to nowhere, until a few clever players realized the money was in moving goods and people between places they genuinely wanted to go.
Same pattern. Different technology.
What Smart Companies Do Instead
Start small. Start boring. Start with the 1 AM problems.
The companies winning with AI map their workflows first, not after the AI is deployed. They know exactly what good performance looks like because they’ve measured it. They test AI on non-critical tasks before betting the company on it. And they build human oversight into every automated process, because they understand that AI makes really confident wrong decisions.
If you can’t explain what your best employee does better than your worst one, AI won’t figure it out for you. If your processes are held together with emails and crossed fingers, automation will just break things faster.
And if your current solution to problems is “throw more people at it,” then yes, AI might seem appealing. But you’re not solving the problem, you’re just changing who gets blamed when things go sideways.
Right now, the scoreboard reads:
Basic Fundamentals: 1
Hype and Magical Thinking: 0
And the fundamentals aren’t even trying that hard, they’re just sitting there, being all sensible and whatnot.
Remember, AI is meant as an addition to, not a substitute for, people. It's a tool for making individuals better at what they already do well, the same way Josh needs to sort out his actual problems before any meditation app can help him find peace.
The companies and people who figure this out will have a huge leg up. The rest will go on frittering away money on costly methods of automating confusion, just like Josh will keep downloading productivity apps instead of making that phone call to his mum.
Fix the foundation. Or keep blaming the hammer.
Your call.
Thank you so much for taking the time :)
WorkmanShit is a reader-supported publication.
To support my coffee habit, consider becoming a free or paid subscriber.
See discounted rates below:
$3 per month – The caffeine tease, just enough to make me hopeful
$4 per month – You’ve funded ¾ of a latte and 100% of my gratitude
$5 per month – You’re a saint
No cash? No problem!
Smashing that ❤️ button or sharing this post keeps the wheels on this greasy hamster wheel, too.