Unfiltered AI: the risk of drowning communication in disposable content
The technology is inevitable, but quality is not optional
In recent months, YouTube has been overtaken by a phenomenon as bizarre as it is concerning: so-called "AI slop", a term for AI-generated content that spreads at breakneck speed with no editorial judgement and no real value. The Guardian, in an article published on 11 August 2025, reports extreme examples: babies trapped in space, cats starring in melodramatic soap operas, public figures digitally reconstructed for impossible storylines. Despite their absurdity, these videos accumulate millions of views.
In July alone, nine of the 100 fastest-growing channels on the platform were dedicated exclusively to this type of production. The ease of mass creation offered by new video-generation tools, such as Google's Veo 3 and xAI's Grok Imagine, is amplifying the problem.
For communication professionals, this trend is more than a cultural curiosity: it is a direct threat to relevance, credibility and the ability to build solid narratives. The proliferation of these videos raises the level of digital noise, making genuine messages harder to distinguish in a sea of disposable content. Public trust erodes when the line between the human and the synthetic blurs without warning, and intentional storytelling gives way to an endless stream of "content for content's sake". Worse still, brands or organisations inadvertently associated with this material face serious reputational risk.
This concern is not merely a matter of subjective perception; European data confirms a clear public sensitivity to the issue. According to the Eurobarometer, 84% of EU citizens believe that artificial intelligence must be carefully managed to protect privacy and ensure transparency. An academic study of 4,006 citizens from eight European countries shows that although attitudes towards AI are mostly positive, trust depends on digital literacy, ethics and transparency. A report from the European Broadcasting Union finds that audiences continue to place more trust in traditional media, precisely because they perceive stronger editorial responsibility and a human presence. Meanwhile, the European Centre for Algorithmic Transparency warns of the need for active oversight and clear labelling of algorithm-generated content.
There are also concrete examples of how to respond to this wave of artificial content without rejecting technology altogether. Norwegian newspaper VG explicitly labels any content involving AI and explains how it was produced, building trust through transparency. The BBC uses AI only for data analysis, keeping narrative framing in the hands of journalists. Germany's DW invests in audience literacy, producing educational pieces on deepfakes and automated content. Spain's El País uses AI to support infographic design but ensures human validation before publication. These examples show that it is possible to use technology responsibly, with clear curation and editorial oversight.
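What VG-style labelling can look like in practice is easy to sketch. The snippet below is a minimal, illustrative example, not any outlet's actual system, of how a newsroom CMS might record an AI-involvement disclosure and render the reader-facing label. The field names and the disclosureLabel helper are hypothetical; the sourceType values are drawn from IPTC's published "Digital Source Type" vocabulary, an existing standard for describing machine-generated media.

```typescript
// Illustrative sketch only: hypothetical CMS fields for an AI disclosure.
// The sourceType terms come from the IPTC "Digital Source Type" vocabulary.
type DigitalSourceType =
  | "trainedAlgorithmicMedia"               // fully generated by a model
  | "compositeWithTrainedAlgorithmicMedia"  // partly generated, partly human-made
  | "algorithmicallyEnhanced"               // human-made, machine-enhanced
  | "digitalCapture";                       // conventional human-made media

interface AIDisclosure {
  sourceType: DigitalSourceType;
  toolsUsed: string[];     // e.g. ["chart-layout assistant"]
  humanReviewed: boolean;  // was there editorial sign-off?
  note?: string;           // free-text explanation shown to readers
}

// Build the reader-facing label a page template could render.
function disclosureLabel(d: AIDisclosure): string {
  const base: Record<DigitalSourceType, string> = {
    trainedAlgorithmicMedia: "This content was generated with AI",
    compositeWithTrainedAlgorithmicMedia:
      "Parts of this content were generated with AI",
    algorithmicallyEnhanced: "This content was enhanced with AI tools",
    digitalCapture: "No AI was used in producing this content",
  };
  const review = d.humanReviewed ? " and reviewed by our editors" : "";
  const note = d.note ? ` ${d.note}` : "";
  return `${base[d.sourceType]}${review}.${note}`;
}

// Example: an infographic drafted with AI support and validated by a human,
// in the spirit of the El País workflow described above.
const infographic: AIDisclosure = {
  sourceType: "algorithmicallyEnhanced",
  toolsUsed: ["chart-layout assistant"],
  humanReviewed: true,
  note: "An AI tool suggested the chart layout; all figures were checked by the graphics desk.",
};

console.log(disclosureLabel(infographic));
```

The specific schema matters less than the principle it encodes: the disclosure travels with the content itself, so readers see the same explanation wherever the piece appears.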
What these practices and data tell us is simple, yet urgent: the public values authenticity, transparency and human presence in communication. "AI slop" threatens these pillars by turning the internet into a space saturated with disposable stimuli. The role of communication professionals is not to compete on volume or speed, but to reinforce the integrity, purpose and narrative coherence that sustain audiences' trust. Technology is inevitable, but quality is not optional. And trust, far from being measured in clicks, is built through consistency, clarity and soul.
