AI chatbots are kicking journalism while it's down
Where do you go when you need to find information online? For decades, my answer to that question would have been Google, but in the past few years I’ve increasingly turned to ChatGPT and other AI services, which - generally speaking - uncover niche information more quickly and reliably than a traditional search.
But for the media outlets that produce timely information, there is one crucial difference between AI chatbots and traditional search: the source link is downplayed, and because the model surfaces the specific information that matches the query, the user very rarely has a reason to click through to an external website.
As AI’s popularity surges, there are very real knock-on effects for journalism, which was already reeling from its recent transition from print to an online advertising model. There’s a chicken-and-egg conundrum at play here - if AI puts the content creators out of business, who will create new content to train AI?

Out of the frying pan
When I joined the industry in the 2010s, journalism already had its problems. The widespread adoption of the internet was disrupting its traditional print revenue model. For established outlets, this meant fewer, less experienced journalists being paid less to do more work - after all, they now needed to take photos and videos for the website, and perhaps record a podcast.
BuzzFeed started to pull huge numbers with listicles and quizzes[1], and the old guard took note. Suddenly, respected national newspaper sites were full of slideshows and clickbait, which at the time was annoying but harmless - but it eventually morphed into the political outrage machine we know today once publishers realised that emotional headlines largely outperform mysterious ones.
From that point onwards, mainstream online journalism focused less on quality and accuracy, and more on bending to the whims of the internet’s new homepages: search engines and social media. The main objectives were to get links in front of users, and to convince them to click. Anything was fair game in the battle to generate impressions for their online advertisers.
Into the fire
But now the journalism model is being upended once again - this time by AI. Whether via a large language model (LLM) like ChatGPT, or via a widget on a search engine results page, AI fetches and distils information reliably enough that in many cases a click to the source website becomes unnecessary.
The effects are being felt. The Wall Street Journal reports that search traffic to some popular news sites has dropped by as much as 50 percent in the last three years. The chief executive of The Atlantic notes that Google is reinventing itself as an “answer engine” and expects traffic from its search results to trend towards zero, necessitating a change in the publication’s business model.

For its part, Google says that traffic routed via its AI summary links is of a higher quality, with users remaining on the destination site for longer. But that will be scant consolation for news publishers, whose online revenue is currently largely tied to impressions generated for their advertisers’ on-page placements - a metric based on the quantity of visits, not their quality.
News outlets are rushing to repair the damage by bolstering relationships with readers and encouraging direct visits, but sites like Business Insider have already attributed cuts in headcount to this trend. If it continues, we could see further decline in an already faltering media industry, leaving a dearth of reliable news to inform audiences and, indeed, to train AI on future events.
Then who writes the news?
There are options under which journalistic output could be maintained, but they all require either a significant upheaval of current trends, or decisions from those regulating AI companies’ activities that are perhaps unlikely, given the huge implications for the tech sector’s bottom line.
Maybe independent blogging will make a return? You might expect me to be upbeat on this, but I can’t see it. Discoverability has declined considerably since blogging’s heyday, and while a small group of high-profile names seem to do well on Substack, most would-be bloggers are unlikely to continue publishing into the abyss in return for an even smaller trickle of traffic.
Perhaps it’s the responsibility of the AI companies? News reporting isn’t their specialism and it would be a cost centre, but it’s not out of the question for them to reach into the real world for new data. We’ve already seen job openings for niche hobby experts to train generative AI models, after all.
There’s also the matter of the copyright question looming over the AI sector. If data is used to train models, and is utilised in LLMs’ responses, should AI developers pay the creators of said data? The law is taking a while to catch up, but an unfavourable ruling could either prevent AI firms from using copyrighted journalism to train models, or force them to pay media outlets for the privilege and make up for some of that lost advertising revenue.
Filling the void
This is a more pressing issue than most people probably realise. While it’s harder to come by these days, quality journalism is a valuable public service that keeps us informed and ensures public accountability. It would inevitably be missed if news sites folded en masse, but just as concerning is what would emerge in the vacuum created by such a collapse.
If visitors evaporated, receiving their information directly via AI chatbots, who could afford to publish news on the internet? Certainly not anybody who relies on visitors and advertising revenue. That opens the door to privately funded publishers, including those with less noble motives.
At the less nefarious end of the scale are corporate publishers, who put out news content to gather an audience and promote the parent company’s products or services. The pitfalls here are probably obvious - if the site is a glorified marketing tool, its writing is more likely to be biased, and stories may be omitted altogether if they clash with the organisation’s goals.
More concerning is the potential for malicious foreign actors to step into the space left by a shrinking news media. By creating their own news websites under shell organisations, hostile nations like Russia can not only promote their agendas to unsuspecting readers, but also potentially influence AI models’ responses if they can work their content into the models’ training data.
A fragile ecosystem
AI chatbots and LLMs are undeniably powerful as information tools, but they risk draining the resources that they rely on to be effective. Journalism - as flawed as it may be in the modern media landscape - remains one of our strongest barriers against misinformation and bias, and ironically this makes it a key source of trustworthy content on which to train AI models.
If we want AI to stay useful, we need to ensure that there is a sustainable flow of current information from reliable sources. This may mean reshaping how value flows in the digital economy, whether through licensing deals, regulatory intervention, or new funding models for journalism.
Something has to change. Without such efforts, we risk a future where we have tools capable of answering any question we can throw at them, but no confidence that those answers are up-to-date or free of manipulation.
Notes and references
- [1] To their credit, BuzzFeed invested some of the proceeds in hiring top-tier journalists to produce more serious output for BuzzFeed News, although it was ultimately dissolved in 2023.