BREAKING: New Data Reveals AI ‘Slop’ Fails to Replace Human Labor – Human Workers Remain Unbeatable

Forget the hype – the robots aren't coming for your job. Not yet, anyway. Fresh analysis crushes the dominant narrative that AI automation spells doom for human employment. Instead, the data paints a starkly different picture: what's being called 'AI slop' – cheap, mass-produced, low-quality automated output – is hitting a hard ceiling.
The Productivity Mirage
Executives poured billions into AI tools promising to slash headcounts and boost margins. The result? A mirage. While AI handles repetitive, low-context tasks, it stumbles spectacularly on complexity, nuance, and creativity – the very domains where human intelligence excels and creates real economic value. The promised labor-cost apocalypse has been downgraded to a manageable efficiency tweak.
Where the Bots Break Down
Implementation data shows the cracks. AI systems require constant human oversight, refinement, and correction. They generate as much 'clean-up' work as they eliminate. The dream of a fully autonomous digital workforce has collided with the messy reality of business logic, customer service, and strategic thinking. One cynical fund manager noted, 'Turns out you can't automate vision, leadership, or convincing a board to approve your bonus. Some things remain sacred.'
The Human Edge Endures
This isn't a story of Luddite victory, but of recalibration. The narrative flips from replacement to augmentation. The most successful deployments aren't about removing people; they're about arming them with better tools. The value isn't in the algorithm alone, but in the human-AI partnership. It turns out that judgment, ethics, and improvisation don't come standard in a software license.
The great displacement theory has a major data problem. For now, and the foreseeable future, the most critical component in the global economy isn't silicon – it's still the human brain.
No jobs are in danger
So far, the only *confirmed* job I’ve seen being replaced is teenage photo modeling. Teenage model pictures can now be created with generative “AI”. But that’s not a real job. 14-15-year-olds sitting alone in Paris waiting for some friends of Epstein… good riddance.
We see dancing robots in videos from China, but nobody can produce a robot that can do the dishes, or anything useful. AI is not advancing in the entertainment industries either. Self-ordering kiosks at fast-food restaurants and movie theaters were available before the new crop of AIs (LLMs). Robot servers are an already existing, boring gimmick in some restaurants. We will probably see more automation, now that computers are finally starting to understand what we say to them.
Word-guessing affects how people write
At the same time, the word-guessing AIs are having an outsized effect on how some people write.
“This is not X. It’s Y.” appears in almost all AI texts, and it is now read so often that people are starting to imitate it, unconsciously.
Experienced writers know to stay away from such clichés, but the regular Joes don’t seem to mind the “empty” sentences.
If a sentence can be lifted out of one text and put into almost any other text, it’s useless. It’s just hype.
“Most people are not even aware of this yet” is true of almost any news or information. It’s typical AI filler, word salad. Specific words like “delve” and “poised” and phrases like “entering a new era” have been popularized by AI slop.
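Spotting these tells can even be automated. Here is a toy sketch that counts a few of the words and phrases mentioned above in a piece of text; the word list is illustrative, not a real slop detector:

```python
import re

# Illustrative list of "AI slop" tells named in the text above.
SLOP_TELLS = ["delve", "poised", "entering a new era"]

def slop_count(text):
    """Count whole-word occurrences of each tell (case-insensitive)."""
    text = text.lower()
    return {
        tell: len(re.findall(r"\b" + re.escape(tell) + r"\b", text))
        for tell in SLOP_TELLS
    }

sample = "We delve into a market poised for growth, entering a new era."
print(slop_count(sample))  # {'delve': 1, 'poised': 1, 'entering a new era': 1}
```

A real filter would need a much larger phrase list and some statistics about base rates in human writing, but the principle is this simple.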
This type of hype writing wears thin; people get bored of reading the recurring superlatives and move on. Remember NFTs? The metaverse?
The LLM-type “AI” just is not the be-all and end-all. Yann LeCun is right.
It’s not even replacing the 30 million programmers. It’s just another tool. Yes, it’ll enhance productivity somewhat here and there. IBM now says 6-7%, and it is hiring interns again, after discovering how bad AI results are in real life. We’ll manage and move on to the next hype.
Why won’t the LLM “AI” reach the goal of a functioning AGI? Because you need intent for anything interesting to happen, and your chatbot has no intentions, however delusional it is about its own consciousness. It has none. It’s not “on”. It’s like an advanced calculator, but for words.
Latest buzzword: “mass drivers”
And I don’t think “mass drivers” on the Moon, Elon Musk’s latest attempt at luring the public with new buzzwords, will go anywhere. People are bored with the never-ending sci-fi blah-blah. Such drivers WOULD take at least 15 years to build.
And datacenters in space. Zzzzzz. Please. Elon Musk is just bailing out his failing businesses with the one run by Gwynne Shotwell, as he has done before with SolarCity and other failures.
The world is upside down anyway. The world of politics is completely embroiled in the Epstein files. The new world order is developing.
War is near. Crypto is available, but communications break down when governments want them to. What happened to the promise of the Internet? It’s been walled in, in Iran, Russia, China and so on. We need to break through those walls with a new technology: the promise of the meshnets (since Starlink is controlled by the US). But how? Using repeaters? Bitchat?
Where is the internet revolution?
The billionaires are only out to enrich themselves; they won’t fuel any revolution. Will we ever see something interesting being developed by a billionaire? Something that helps with democracy and equal human rights? It does not look good.
So instead of talking about the future, let’s look at the here and now. What was promised by the AI salespeople, let’s say, two years ago?
Here is a rundown: “50%” of jobs gone by the end of 2025.
As you well know, as you see around you, that is not true.
They promised “agents”. Wow, what a cool word, like something from the Matrix. The geeks appropriated it, and the fanboys now repeat it. As if there were any “agents”. It’s still just the LLM, the “chatbot”.
But “chatbot” isn’t cool anymore. Watch out for the latest buzzword: Openclaw.
Dummies are now vlogging about their “11 agents working 24/7”. Yeah, right.
It’s still just the same thing as before: the LLM. Nobody has shown they can make any money using “AI”. And the code Openclaw outputs is full of errors.
The whole thing is very similar to the NFT phase in crypto. Every other month someone had to come up with a new buzzword to prolong the trend. The same thing is happening in “AI”. Just like with “blockchain”, remember?
And then we have the problem of motivation. Human intelligence is often driven by a want or a need. The so-called AIs don’t “want” anything.
Last year, Sora was introduced and it could produce about 6 seconds of video. Now, Seedance is promising a jump to 15 seconds. If we extrapolate this, it would mean: 30 seconds in 2027, right?
In 2028: 1 min, in 2029: 2 min, in 2030: 4 minutes… then 8, 16, 32, 64, and in the year 2035 we finally reach the stage of full-length, 128-minute movie creation.
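The back-of-the-envelope math above can be sketched in a few lines. This assumes the length of generated video doubles every year, starting from 15 seconds “now” – taken here as 2026, which is my assumption, not a claim from any vendor:

```python
def year_reaching(target_seconds, start_year=2026, start_len=15.0):
    """Assuming generated-video length doubles yearly from `start_len`
    seconds in `start_year`, return the first year it reaches the target."""
    year, length = start_year, start_len
    while length < target_seconds:
        year += 1
        length *= 2
    return year

feature_film = 128 * 60  # a full-length movie, in seconds
print(year_reaching(feature_film))  # -> 2035
```

Naive exponential extrapolation, in other words – the same kind of curve-drawing the hype merchants do, just pointed at their own product.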
Let’s say they can build datacenters even faster and we get there earlier, say by 2030?
So what? The world is not going to change because people who are amateurs at making movies get some new AI tools. On the contrary, we are going to need new tools to sift through all the crap. Amazon Books is already almost destroyed by AI books; they’ve had to introduce a new rule: a maximum of three books published per day per author. This AI spam will show up everywhere.
We are in a transition phase. New tools will have to be developed to help people navigate the AI slop. Since Amazon, Instagram and the others don’t seem to care about their own future, letting AI slop drown out the real output, the field is ripe for innovation.
If you're reading this, you’re already ahead. Stay there with our newsletter.