
Half the internet doesn’t breathe, yet it’s talking louder than us. -AI Slop

  • Writer: X —iO
  • Oct 14
  • 8 min read

Welcome to the age of AI Slop, a rising tide of content that technically says something… without saying anything. It sneaks into comment sections, review boxes, newsletters, social feeds, even your “trusted” news, because wherever attention turns into money, bots learn to speak.


Today, nearly half of all web traffic is non-human. Fake users write fake reviews, argue with real people, and generate outrage on command. It’s not just annoying; it’s shaping what we see, think, and trust.

A familiar facepalm: 


I asked an AI to “write a viral post about productivity.” It replied:

“In today’s fast-paced digital landscape, productivity is the key to success. Leverage cutting-edge tools to unlock your potential and thrive! ...”


Technically… not wrong. Practically… not useful. Edible… maybe, but empty. An intro I know by heart at this point 🙄.

What is “AI slop,” exactly?


Short version: content that looks like content, but doesn’t do anything for you. It’s generic, repetitive, vaguely motivational, and often fact-light, or just plain wrong. It clogs feeds, confuses search, and wastes time. Journalists, researchers, and creators now use “AI slop” to describe this surge of low-quality, auto-generated posts, images, videos, and pages.


Media outlets and researchers have documented slop-like patterns across blogs, news rewrites, social media platforms, and even e-commerce sites, often at remarkable volumes, driven by SEO (Search Engine Optimization) or monetization.

Where AI slop shows up


  • Blog platforms: Investigations found a large share of sampled posts likely AI-generated at scale. 

  • Search results: AI rewrites still ranking above original reporting in some cases. 

  • News exposure: AI summaries (e.g., Overviews) can divert attention impacting traffic to the underlying sources. 

  • Misinformation ecosystems: AI-generated “news” sites have exploded in number, accelerating low-cost disinfo. 

  • Video: Platforms are battling mass-produced, repetitive AI videos, and creators worry about being drowned out. 

  • Reviews: fake reviews of products, services, and websites, on Google and beyond.

Why should normal people care?


  • Trust erosion: When feeds fill with junk, we start doubting everything (even the good stuff). See “Disinformation reimagined: how AI could erode democracy in the 2024 US elections.” The article may sound dated, but real elections still happen every few years in democratic countries.

  • Time waste: You spend more minutes scrolling for fewer useful insights.

  • Search pollution: Original reporting can be pushed below derivative AI rewrites.

  • Creator burnout: Humans feel pressure to compete with the firehose on volume, not on quality.

  • Algorithmic loops: the algorithm and you, caught in a feedback loop of dangerous content.


And sometimes, AI systems present confident nonsense, like telling people the year is wrong. That matters when summaries sit above the links you used to click.

But AI itself isn’t the villain


AI is a tool. High-quality, AI-assisted content exists, and it’s great when humans guide it with purpose, facts, and craft. Even Google’s guidance is clear: quality and usefulness matter more than whether words came from a keyboard or a model. The problem isn’t AI; it’s uncritical, high-volume automation without editing, or even care.

Figure: “Disinformation actors with financial motive: a cost-benefit analysis” (Nir Kshetri, UNCG; Computing’s Economics). The actor produces disinformation when expected benefits exceed expected costs:

Mb + Pb > Ic + O1c + Pc + O2c·πarr·πcon

where Mb = monetary benefits from the creation/distribution of disinformation; Pb = psychological benefits; Ic = direct investment costs; O1c = opportunity costs; Pc = psychological costs; O2c = monetary opportunity costs of conviction; πarr = probability of arrest; and πcon = probability of conviction. The term O2c·πarr·πcon is referred to as the expected penalty effect.
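The cost-benefit condition in the figure can be sketched in a few lines of code. This is a toy illustration only; the variable names follow Kshetri’s notation, and the numbers in the example are invented.

```python
# Illustrative sketch of the cost-benefit condition (variable names follow
# the figure; the numbers in the example are made up for demonstration).

def acts_on_disinformation(Mb, Pb, Ic, O1c, Pc, O2c, p_arr, p_con):
    """Return True if expected benefits exceed expected costs."""
    expected_penalty = O2c * p_arr * p_con  # the "expected penalty effect"
    benefits = Mb + Pb
    costs = Ic + O1c + Pc + expected_penalty
    return benefits > costs

# Cheap generation plus a tiny chance of arrest and conviction tilts the balance:
print(acts_on_disinformation(Mb=1000, Pb=50, Ic=100, O1c=50, Pc=10,
                             O2c=5000, p_arr=0.01, p_con=0.1))  # True
```

The unsettling part is the last term: when the probability of arrest or conviction is near zero, the expected penalty effect vanishes, and almost any monetary benefit makes slop “rational.”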

Will regulation or standards help?


Possibly. There’s momentum for platform policy updates, watermarking research, and stronger moderation. But for now, good editorial hygiene and audience literacy are the best defenses. Media studies and watchdog groups continue tracking how AI-generated junk shapes attention, politics, and public trust.

How We Fight the Slop (without declaring war on AI)


Fixing AI slop doesn’t mean rejecting AI. It means rejecting laziness and investing in transformation. There are four players in this story: Creators, Platforms, Readers, and Technology.


1. Creators: Use AI responsibly


AI can speed up drafts, but it should never replace thinking. Use it for structure, summaries, sparks. Then cut, verify, rewrite. If you wouldn’t say it out loud to a human, don’t publish it online. If it doesn’t inform, challenge, or entertain, it’s slop.


2. Platforms: Algorithms need a taste filter


If AI can spot cat faces, why can’t it spot slop?

Search engines and social platforms already claim to “reward quality,” but now they must prove it. Google says it rewards helpful, people-first content, regardless of how it’s produced, and has folded “Helpful Content” into core ranking updates to reduce unoriginal pages. YouTube says it will demonetize spammy, repetitive, “inauthentic” AI videos, while allowing original work that uses AI as a tool. That’s progress, but enforcement is hard at internet scale.


3. Readers: Train your slop radar


The internet isn’t a feed; it’s a buffet. Don’t eat everything. Ask yourself: Is this saying anything new? Can I verify the source?

<Critical thinking is the last real algorithm we control/>

4. Technology & Provenance: Blockchain as Proof of Authenticity


Beyond content habits and algorithms, technology itself must help verify what is real and who created it. As Dr. Oliver Krause put it when I asked him:

“Potential solutions can develop at the intersection of AI and Blockchain Technology: public immutable ledgers are predestined to prove authenticity of voice, video, or pictures as well as transparency on modifications over time. This is a use case where I can picture very profitable business models beyond the financial and trading use cases that still dominate the space today.”

Think of how a photo already encloses metadata: descriptive data embedded in the image file, such as date, time, camera settings (aperture, shutter speed), GPS location, copyright details, author, captions, or keywords. Today that data is often wiped or altered the moment someone screenshots or edits the image. Now imagine all of that on blockchain-powered provenance: immutable, publicly timestamped authentic content with every modification tracked, making it harder for synthetic slop and AI fakes to pass as truth, and accessible to any user with a tap on the screen. This isn’t sci-fi; it might be one of the most promising business frontiers emerging because of AI slop.
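The core idea can be sketched without any real blockchain. Below is a minimal, purely illustrative Python model (all class and function names are hypothetical, not any real ledger API): hash the content at capture, chain each record to the previous one, and later check whether a file still matches a registered hash.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each record chains to the previous one,
    so altering any earlier entry breaks every later record hash."""

    def __init__(self):
        self.records = []

    def register(self, content: bytes, author: str) -> str:
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "content_hash": sha256(content),   # fingerprint of the media itself
            "author": author,
            "timestamp": time.time(),
            "prev_hash": prev_hash,            # the "chain" part
        }
        record["record_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.records.append(record)
        return record["record_hash"]

    def verify(self, content: bytes) -> bool:
        """Is this exact content registered (i.e., unmodified since capture)?"""
        h = sha256(content)
        return any(r["content_hash"] == h for r in self.records)

ledger = ProvenanceLedger()
ledger.register(b"original photo bytes", author="X-iO")
print(ledger.verify(b"original photo bytes"))   # True
print(ledger.verify(b"edited photo bytes"))     # False
```

Even a single changed byte produces a different hash, so an edit is immediately detectable; what a public blockchain adds on top of this sketch is that nobody can quietly rewrite the ledger itself.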


As Dr. Krause explained further: Blockchain won’t stop AI slop from being written, but it can restore authenticity and accountability to the web.


How to spot AI slop


You can still recognize AI slop text by its familiar symptoms: the over-generic intro (“In today’s fast-paced digital world…”), followed by vague statements with no data, citations, or point. It repeats itself like a motivational fortune cookie and offers confident but unverifiable claims, sometimes even wrong dates or broken facts. No voice, no story, just empty calories built to trigger dopamine, not deliver insight.
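For fun, that slop radar can even be sketched as code. This is a purely illustrative heuristic, not a real detector; the phrase list and the scoring are invented for demonstration.

```python
import re

# Invented phrase list: the generic openers and filler described above.
SLOP_PHRASES = [
    r"in today's fast-paced (digital )?(world|landscape)",
    r"unlock your (true )?potential",
    r"leverage cutting-edge",
    r"the key to success",
    r"game[- ]changer",
]

def slop_score(text: str) -> int:
    """Count how many stock slop phrases appear (case-insensitive)."""
    t = text.lower()
    return sum(1 for p in SLOP_PHRASES if re.search(p, t))

sample = ("In today's fast-paced digital landscape, productivity is the key "
          "to success. Leverage cutting-edge tools to unlock your potential!")
print(slop_score(sample))  # 4
```

A real classifier would need far more than a phrase list, of course; the point is that slop is so formulaic that even a dozen regexes catch its favorite sentences.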


Spotting AI-generated images or videos is becoming much harder. We’ve moved beyond six-finger hands and melting eyes. Today’s giveaways are more subtle: inconsistent lighting and shadow directions, bent or twisted text on signs, hyper-smooth skin, jewelry that warps or morphs from frame to frame, overly perfect eyelashes or makeup, mixed time periods (an oil painting of the Mona Lisa wearing an iWatch), or background objects that subtly shift between frames…


Experts like graphic designers or photographers can magnify files and inspect pixel edges, but casual users rarely see missing authenticity signals like metadata or pixel-level artifacts.


To fight this, Adobe co-founded the Content Authenticity Initiative (CAI) in 2019, and new technologies are emerging that verify where media comes from, not just how it looks. The Leica M11-P became the world’s first camera to ship with Content Credentials built in (C2PA), signing images at capture; anyone can then verify them via the Content Credentials site or app. Upcoming Sony models are rolling out Camera Verify signatures, issuing external sharing URLs for third parties to verify. But again, none of this is accessible at a glance while scrolling apps. Serious news outlets and institutions will take the time to verify before posting, but… what about the rest of the disinfo ranking high?


Meanwhile, Google’s SynthID introduces invisible AI watermarks in images, video, and audio, detectable even after resizing or compression. Platforms like TikTok and Meta have begun labeling “Made with AI” content and “requiring” disclosure for realistic AI media.


Recently, Merantix Momentum experts showcased their Agentic Workflows for Image Forensics, used to detect manipulated imagery: image-forensic analysis once applied manually, now amplified, accelerated, and scaled by AI (noise signatures, compression patterns, metadata, re-saves, lighting, shadows, geometry consistency, pixel clones, splicing, texture, and frequency analysis in the case of AI generation…).

Digital integrity verification is much needed today for institutions, in news, political media, and online discourse. It is also essential for younger generations who rely mainly on social media, and for the future of democracy, while it lasts.

It's easier to fool people than to convince them that they have been fooled. - Mark Twain.

And as if drowning in AI-generated slop wasn’t enough, there’s another plot twist: AI doesn’t just eat data — it consumes a lot of electricity. While we worry about fake reviews and hallucinated news anchors, AI is quietly straining the power grid.

AI’s Real Currency: Electricity

It’s not money—and it’s not semiconductors, either. AI’s natural limit is electricity, not chips. 

Eric Schmidt, former CEO and Chairman of Google, added: “The U.S. is currently expected to need another 92 gigawatts of power to support the AI revolution. For reference, one gigawatt is roughly the equivalent of one nuclear power station…” (Fortune)


“…we need to innovate not only in algorithms, but in energy systems. That means reimagining generation, transmission, and storage across renewables, nuclear, and next-generation technologies with the same ambition and urgency that drive AI research.” (post)

Chasing the Signal, Not the Slop


If AI-Slop blooms on ignorance, then curiosity is our antidote:

Kurzgesagt - In a Nutshell investigates the rise of AI-generated content and its impact on the internet; their animated explainer examines the challenges of distinguishing authentic work from AI-generated “slop.”


Today, about half of all internet traffic is bots, and the majority of those bots are used for destructive purposes.
[Figure: globe split showing global internet traffic: bots 51% (green), humans 49% (blue).]
“AI Slop is Destroying the Internet,” by Kurzgesagt - In a Nutshell

For accessible explainers with humor, see AI Slop on John Oliver's segment and several creator breakdowns on AI Slop dynamics. 

“But we’ve clearly got bigger problems than people being duped by non-existent animals. There’s an environmental impact from the energy and resources consumed in producing all of this shit. And then there’s the fact some slop makers specialize in videos that claim to depict real-world calamities, which can lead to the spread of worrying misinformation.”
[Screenshot: TV host with an Instagram post of an illustrated, chubby cat in jeans; a top comment asks, “Is this real?”]
“The only thing more upsetting… is one of the top comments, is someone asking: Is this real?” John Oliver on AI Slop
[Screenshot: text overlay asks, “Why did I watch this and why does this have almost 16 mill likes?”]
Feels familiar? For sure!
Garbage in, garbage out

In conclusion: if we feed the internet with curiosity, facts, and voice, it will amplify that too. The danger isn’t that AI will outsmart humans; in my opinion, that has already happened. The danger is that humans will stop using their own minds. And even though “half the internet doesn’t breathe,” it shouldn’t think louder than us.


So before you scroll, like, post, or share, ✋PAUSE and ask: Is this feeding my brain, or just injecting me with cheap dopamine? Is this educational or informational content, like fatty fish, leafy greens, berries, nuts, and dark chocolate? Or just sugar, colorants, additives, and digital junk calories disguised as inspiration to keep me in the dopamine loop?

Meanwhile: The X⎻iO 🕳️🐇 checklist 


When creating content: start with intent, ground with sources, prompt like a pro, be ruthless while editing, add dates and examples, keep your tone, publish less, polish more. If you can’t source it, don’t publish it.


As a consumer: Don’t feed the slop; feed the signal. Demand substance, not just anxiety or cheap dopamine, because in the age of infinite content, attention is a vote, and votes translate into transformation, money, or power.

TALK TO X

  • Instagram
  • Facebook
  • LinkedIn
  • Blogger
  • Telegram