Wikipedia’s overall usage has not declined since the launch of ChatGPT, but researchers warn that large language models and automated data scrapers could still disrupt the platform’s future.
A study by King’s College London examined Wikipedia traffic trends across 12 languages—six used primarily in regions where ChatGPT is available and six where it is not—to assess how the rise of generative AI has affected the world’s largest online encyclopedia. Published in ACM Collective Intelligence, the study found no evidence of decreased Wikipedia usage since ChatGPT’s introduction in 2022.
However, traffic growth in languages where ChatGPT is available has slowed compared to those where it is not, indicating a modest but measurable impact. This contrasts with earlier warnings, when a veteran Wikipedia editor suggested that chatbots might eventually “kill” the platform by replacing it as the go-to information source and flooding the web with AI-generated inaccuracies.
While those fears have not materialized, Wikipedia is facing new pressures. AI companies rely heavily on the platform’s high-quality content to train their models, sending automated scrapers that drive up server traffic without contributing resources back. Between June 2024 and June 2025, referral traffic to major information sites—including Wikipedia—fell by 15%, suggesting search results that incorporate AI-generated summaries may be diverting users away from original sources.
“Our work did not confirm the most alarmist scenario, but we’re not out of the woods yet,” said Professor Elena Simperl of King’s College London. “Scrapers are increasing server load, and AI summaries are using Wikipedia content without crediting it, which siphons traffic away while depending on its work.”
Wikipedia, whose English edition alone hosts more than 6.6 million articles and which publishes in 292 languages, is a critical source of free information for search engines and communities worldwide—particularly outside Europe and East Asia. Rising operational costs driven by AI scrapers could force the platform to reconsider how it allocates resources.
Researchers suggest that a new framework is needed to balance the use of Wikipedia data by AI companies with the platform’s sustainability. “We need a new social contract between AI companies and providers of high-quality data like Wikipedia,” said Neal Reeves, the study’s first author. “They should retain more power over their material while still allowing it to be used responsibly for AI training.”
The team is working with the Wikipedia community to build a global monitoring tool that will track AI’s impact on the platform. The aim is to give communities better analytical tools to respond as AI usage continues to expand.