A compelling new study suggests that much of today’s internet is not authored by humans at all—but by machines. According to research conducted by Graphite, an SEO and content analytics firm, more than half of newly published articles on the web are now generated by artificial intelligence.
This shift marks a watershed moment: the web is gradually transforming from a human-driven ecosystem into one dominated by algorithmic voices. While that might sound dystopian, the trend is already reshaping how we access information—and forcing educators, media houses, and content creators everywhere to reconsider their roles.

How the Study Was Done — And What It Means
Graphite’s researchers trawled through the Common Crawl archive, a large repository of web pages, and selected around 65,000 English-language URLs published between 2020 and 2025. Each article was analysed by AI-detection tools and classified as “AI-generated” if at least half of it matched machine-authored patterns.
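For readers who like to see the logic spelled out, the classification rule described above reduces to a simple threshold. The sketch below is only an illustration of that rule, not Graphite's actual pipeline: `Article`, `detect_ai_fraction`, and `monthly_ai_share` are hypothetical names, and the detector itself is left as a stand-in to be supplied.

```python
from dataclasses import dataclass

AI_THRESHOLD = 0.5  # an article counts as "AI-generated" if >= 50% of its text is flagged


@dataclass
class Article:
    url: str
    published: str  # publication month, e.g. "2024-11"
    text: str


def detect_ai_fraction(text: str) -> float:
    """Stand-in for an AI-detection tool: returns the share of the text
    (0.0 to 1.0) that the detector flags as machine-written."""
    raise NotImplementedError("plug in the detection tool of your choice")


def is_ai_generated(article: Article) -> bool:
    """Apply the study's rule: flag the article if at least half of it
    matches machine-authored patterns."""
    return detect_ai_fraction(article.text) >= AI_THRESHOLD


def monthly_ai_share(articles: list[Article]) -> dict[str, float]:
    """Fraction of sampled articles per month that cross the threshold."""
    totals: dict[str, int] = {}
    flagged: dict[str, int] = {}
    for a in articles:
        totals[a.published] = totals.get(a.published, 0) + 1
        if is_ai_generated(a):
            flagged[a.published] = flagged.get(a.published, 0) + 1
    return {month: flagged.get(month, 0) / count for month, count in totals.items()}
```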
The results were striking: as of November last year, AI-written articles accounted for more than half of all newly published pieces. Since then, the proportion has hovered around that level, suggesting the surge may be stabilising.
However, the study also sounds a note of caution: not all these AI articles get read. Because many do not satisfy optimisation signals for Google, or pass rigorous editorial filters, they seldom surface in search results or in responses generated by AI assistants. In other words, quantity is climbing—but visibility remains skewed toward human-written content.
These findings add clarity to earlier predictions and fears. In 2022, Europol estimated that as much as 90 per cent of online content could be synthetically generated by 2026, but Graphite's data suggests a slower, more measured pace.
Implications for Education, Media, and Truth in the Digital Age
1. Erosion of Authorship and Originality
For students and educators, the rise of machine-generated writing changes the very nature of assignments, assessment, and academic integrity. When a high proportion of online essays, blogs, and reports are the product of AI, differentiating original thought from algorithmic mashups becomes ever harder. Teachers may need new methods and tools to detect AI use, and to teach students how to distinguish their own voices from those of models.
Media organisations, too, face erosion of editorial identity. If articles are increasingly templated and auto-produced, brands risk losing the human touch that builds trust and engages readers. The last mile—the narration, insight, voice—is harder for machines to replicate convincingly.

2. Quality vs. Volume Will Still Matter
Even in a machine-rich internet, not all content carries equal weight. The study notes that a majority of automatically generated pieces never rise in search rankings, possibly because they lack depth, context, originality, or strong SEO signals.
This means there is still premium value in high-quality, well-researched, human-centred content. Organisations willing to prioritise editing, fact-checking, storytelling, and platform engagement may continue to outperform mass-produced filler.
3. The “Dead Internet” Debate Gains Ground
A growing conversation around the so-called Dead Internet Theory holds that most of what we consume online now originates from bots, algorithms, or AI, rather than from real human beings. While much of the theory is speculative or conspiratorial, parts of it resonate amid rising AI content volumes.
Some recent studies emphasise that because AI models are increasingly trained on web text (much of it generated by AI itself), we may risk an echo chamber—models learning from models, gradually amplifying systemic bias or flattening creativity.
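To see why that worries researchers, consider a toy simulation (not drawn from any of the cited studies) in which each generation of models is trained on text sampled from the previous generation's output; if training favours the most typical material, the spread of the corpus shrinks with every cycle.

```python
import random
import statistics

# Toy illustration only: "styles" in a human-written corpus are modelled as
# numbers with a wide spread. Each generation trains on the previous
# generation's output, favouring its most typical material.
random.seed(42)
corpus = [random.gauss(0, 10) for _ in range(10_000)]  # generation 0: human web text

for generation in range(1, 6):
    centre = statistics.fmean(corpus)
    # Keep the half of the corpus closest to the average (the "typical" text),
    # then regenerate the corpus as small variations on those survivors.
    corpus.sort(key=lambda x: abs(x - centre))
    survivors = corpus[: len(corpus) // 2]
    corpus = [x + random.gauss(0, 1) for x in survivors for _ in range(2)]
    print(f"generation {generation}: stylistic spread = {statistics.pstdev(corpus):.2f}")
```

Run as written, the printed spread falls generation after generation, which is the "flattening" described above; real model-collapse research is far more nuanced, but the feedback-loop mechanism is the same in spirit.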
4. Education Must Adapt or Be Left Behind
Curricula built for a “research → write → submit” model may become obsolete. Educators will need to teach students how to:
- Critically evaluate machine-produced text for accuracy and bias
- Use AI as a tool—not as a substitute for original thinking
- Cite responsibly when models influence writing
- Uphold academic honesty in an age when “ghostwriting” is easier than ever
Media houses, too, will have to invest in editorial oversight, fact-checking, and verification tools—because as machine content proliferates, misinformation and shallow reporting may spread more easily.

Looking Ahead: The Co-Writing Era
We may be entering an era where the internet is less a human canvas and more a co-authored space—machines and humans writing side by side. But the critical question is: who sets the agenda?
Search engines, platforms, and AI detectors will play gatekeeping roles. If Google and AI assistants continue to prioritise human-written or well-edited content, they will act as filters against the flood of algorithmic output. The Graphite study already suggests that many AI articles, though voluminous, fail to break into visibility.
On the other hand, sloppy or malicious AI authorship may worsen problems of misinformation, echo chambers, and content homogeneity. As machines pick up on patterns, they may replicate biases, gaps, or misleading angles.
For education and media, the job ahead is not to resist this change but to steer it. Human curators, editors, teachers, and readers become the glue that holds authenticity together in the age of algorithmic writing.