By Christophe Henner (schiste).
Former Chair of the Board of Trustees, Wikimedia Foundation
20-year Wikimedian
On 15 January 2026, Wikipedia turns 25. A quarter century of free knowledge. The largest collaborative project humanity has ever undertaken. Sixty million articles in over 300 languages.[1] Built by volunteers. Free forever.
I've been part of this movement for more than half of that journey (twenty years). I've served as Chair of Wikimedia France and Chair of the Wikimedia Foundation Board of Trustees. I've weathered crises, celebrated victories, made mistakes, broken some things, built other things, and believed every day that what we built matters.
We should be celebrating. Instead, I'm writing this because the numbers tell a story that demands urgent attention. None of it is brand new, especially if you've read or listened to my rants, but now it's dire.
Since 2016, humanity has added 2.7 billion people to the internet.[2] Nearly three billion new potential readers, learners, contributors. In that same period, our page views declined. Not stagnated. Declined. The world has never been more online, and yet, fewer and fewer people are using our projects.
To put this in concrete terms, if Wikimedia had simply kept pace with internet growth, we would be serving 355 billion page views annually today. Instead, we're at 177 billion. We're missing half the audience we should have.
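That back-of-the-envelope figure follows directly from the numbers in the data appendix. A quick sketch of the arithmetic, using the rounded figures cited in this piece (which is why the result lands a billion or so off the quoted 355B):

```python
# Sanity check of the "missing audience" claim, from the round
# figures cited in this piece and its data appendix.
users_2016 = 3.27e9   # internet users, 2016 (ITU)
users_2025 = 6.0e9    # internet users, 2025
views_2016 = 194e9    # annual Wikimedia page views, 2016
views_2025 = 177e9    # annual Wikimedia page views, 2025

growth = users_2025 / users_2016        # how much the internet grew
expected = views_2016 * growth          # views if we had kept pace
print(f"internet growth: {growth:.0%}")           # 183%
print(f"expected views:  {expected/1e9:.0f}B")    # 356B per year
print(f"actual views:    {views_2025/1e9:.0f}B")  # 177B per year
print(f"share captured:  {views_2025/expected:.0%}")  # 50%
```

Half the audience, by simple proportion.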
And these numbers are probably optimistic. In twenty years of working with web analytics, I've learned one thing: the metrics always lie, and never in your favor. AI crawlers have exploded, up 300% year-over-year according to Arc XP's CDN data,[5] now approaching 40% of web traffic according to Imperva's 2024 Bad Bot Report.[6] How much of our "readership" is actually bots harvesting content for AI training? Wikimedia's analytics team has worked to identify and filter bot traffic, and I've excluded known bots from the data in this analysis, but we know for a fact that detection always misses a significant portion. We don't know precisely how much. But I'd wager our real human audience is lower than the charts show.
As this piece was being finalized in January 2026, third-party analytics confirmed these trends. Similarweb data shows Wikipedia lost over 1.1 billion visits per month between 2022 and 2025, a 23% decline.[7] The convenient explanation is "AI summaries." I'm skeptical. What we're witnessing is something more profound: a generational shift in how people relate to knowledge itself. Younger users don't search. They scroll. They don't read articles. They consume fragments. The encyclopedia form factor, our twenty-year bet, may be losing relevance faster than any single technology can explain. AI is an accelerant, not the fire.
But readership is only part of the crisis. The pipeline that feeds our entire ecosystem (new contributors) is collapsing even faster.
Read those numbers together: we're acquiring 36% fewer new contributors while total edits have increased. This means we're extracting more work from a shrinking base of committed volunteers. The system is concentrating, not growing. We are becoming a smaller club working harder to maintain something fewer people see.
And let's be honest about who that club is. The contributor base we're losing was never representative to begin with. English Wikipedia, still the largest by far, is written predominantly by men from North America and Western Europe.[11] Hindi Wikipedia has 160,000 articles for 600 million speakers. Bengali has 150,000 for 230 million speakers. Swahili, spoken by 100 million people across East Africa, has 80,000.[1][12] The "golden age" we mourn was never golden for the Global South. It was an English-language project built by English-language editors from English-language sources. Our decline isn't just a quantity problem. It's the bill coming due for a diversity debt we've been accumulating for two decades.
The 2.7 billion people who came online since 2016? They came from India, Indonesia, Pakistan, Nigeria, Bangladesh, Tanzania, Iraq, Algeria, Democratic Republic of the Congo, Myanmar, Ethiopia, Ghana. They came looking for knowledge in their languages, about their contexts, written by people who understand their lives. And we weren't there. We're still not there. The contributor pipeline isn't just shrinking. It was never built to reach them in the first place.
Some will say: we're simply better at fighting vandalism now, so we need fewer editors. It's true we've improved our anti-vandalism tools over the years. But we've been fighting vandalism consistently for two decades. This isn't a sudden efficiency gain. And even if anti-vandalism explains some of the concentration, it cannot explain all the data pointing in the same direction: declining page views, declining new registrations, declining editor recruitment, all while the internet doubles in size. One efficiency improvement doesn't explain a systemic pattern across every metric.
Let me be clear about what these numbers do and don't show. Content quality is up. Article count is up. Featured articles are up. The encyclopedia has never been better. That's not spin. That's the work of an extraordinary community that built something remarkable.
The question isn't whether the work is good. It's whether the ecosystem that produces the work is sustainable. And the answer, increasingly, is no.
We've now hit the limits of that optimization. For years, efficiency gains could compensate for a shrinking contributor base. That's no longer true. When edits per new user doubles, you're not seeing a healthy community getting more efficient. You're seeing concentration risk. Every experienced editor who burns out or walks away now costs far more to replace, because there's no pipeline behind them. Our efficiency gains can no longer absorb the loss when an experienced editor stops editing. The quality metrics aren't evidence that we're fine. They're evidence that we built something worth saving, and that the people maintaining it are increasingly irreplaceable.
Some will ask: why do page views matter so much? We're a nonprofit. We don't sell ads. Who cares if fewer people visit?
Three answers:

- Survival: nearly all of our revenue comes from donation banners shown to people who visit our sites. Fewer visits means fewer donors.
- Mission: we exist to put free knowledge in front of people. When they stop coming, we stop reaching them on our own terms.
- Motivation: editors volunteer because they see their work reach the world. Impact they can't see erodes the reason to contribute.
So when I say page views are declining, I'm not pointing at a vanity metric. I'm pointing at survival, mission, and motivation, all under pressure simultaneously.
Some will counter: fewer readers means lower infrastructure costs. That's true, in the moment. But if readership declines, recruitment declines. To compensate, we need to invest more in active recruitment, better editing tools, and editor retention, all of which cost money. The short-term savings from lower traffic are swamped by the long-term costs of a collapsing contributor pipeline. We need to build additional revenue streams precisely so we can keep improving editor efficiency, keep recruiting people, and fund the work required to do that. The cost doesn't disappear. It shifts.
The uncomfortable addition: our content is probably reaching more people than ever. It's just reaching them through intermediaries we don't control: search snippets, AI assistants, apps, voice devices. The knowledge spreads. The mission arguably succeeds. But we don't see it, we can't fund ourselves from it, and our editors don't feel it.
This creates a dangerous gap. The world benefits from our work more than ever. We benefit from it less than ever. That's not sustainable.
Some will say: focus on page views. Optimize the website. Fight for direct traffic. That's the mission we know. Others will say: page views are yesterday's metric. Embrace the new distribution. Meet people where they are, even if "where they are" is inside an AI assistant.
Both camps are half right. We need both. Not one or the other. Both.
We need to defend page views, because they're survival today. Better mobile experience. Better search optimization. Better reader features. Whatever it takes to keep people coming directly to us.
AND we need to build new models, because page views alone won't sustain us in five years. Revenue from entities that use our content at scale. New metrics that capture use and reuse beyond our site. New ways to show editors their impact even when it happens off-platform.
The two-year window isn't about abandoning what works. It's about building what's next while what works still works. If we wait until page views are critical, we won't have the resources or time to build alternatives.
Page views remain essential. But we need to add:

- Reach: how often our content surfaces off-platform, in search snippets, AI answers, and voice assistants.
- Reuse: consumption of our data through APIs, dumps, and Enterprise feeds by the systems that repackage it.
- Attribution: ways to trace off-platform use back to the editors who produced the content.
The goal isn't to replace page views with these metrics. It's to see the full picture. A world where page views decline but reach expands is different from a world where both decline. We need to know which world we're in, and right now, we're flying blind.
Here's a frame that might help community members see where they fit: we need both human production and machine production.
Human production is what we do now. Editors write and maintain content. Community verifies and debates. It's slow, high-trust, transparent. It cannot be automated. It is irreplaceable.
Machine production is what we could do. Structured data through Wikidata. APIs that serve verification endpoints. Confidence ratings on claims. Services that complement AI systems rather than compete with them. It's fast, scalable, programmatic.
These aren't competing approaches. They're complementary. Human production creates the verified knowledge base. Machine production makes it usable at AI scale. Content producers (the editors who write and verify) and content distributors (the systems that package and serve) both matter. Both need investment. Both are part of the mission.
If you're an editor: your work powers not just Wikipedia, but an entire ecosystem of AI systems that need verified information. That's more impact, not less. The distribution changed. The importance of what you do only grew.
To understand where we are, we need to understand where we've been, and be honest about what we built and for whom. The relationship between Wikimedia and the broader internet has gone through three distinct phases. I call them the Pioneers, the Cool Kids, and the Commodity.[13]
The pandemic briefly disguised this trend. In April 2020, page views spiked 25% as the world stayed home. New registrations jumped 28%.[14] For a moment, it looked like we might be turning a corner. We weren't. The spike didn't translate into sustained growth. By 2022, we were back on the declining trajectory, and the decline has accelerated since.
The harsh truth: while the internet nearly doubled in size, Wikimedia's share of global attention was cut in half. And the people we lost, or never had, are precisely the people the internet added: young, mobile-first, from the Global South. We went from being essential infrastructure of the web to being one option among many, and increasingly, an option that doesn't speak their language, literally or figuratively.
These numbers would be concerning in any era. In 2026, they're existential.
We're living through the full deployment of digital society. Not the internet's arrival (that happened decades ago) but its complete integration into how humanity thinks, learns, and makes decisions. Three forces are reshaping the landscape we occupy:
At several points in debates about our future, AI has been mentioned as a "tool," something we can choose to adopt or not, integrate or resist. I believe this is a fundamental misreading of the situation. AI is not a tool; it is a paradigm shift.
I've seen this before. In 2004, when I joined Wikipedia, we faced similar debates about education. What do we do about students who copy-paste from Wikipedia? We saw the same reactions: some institutions tried to ban Wikipedia, others installed filters, others punished students who cited it. All these defensive approaches failed. Why? Because you cannot prohibit access to a tool that has become ubiquitous. Because students always find workarounds. And above all, because prohibition prevents critical learning about the tool itself.
Wikipedia eventually became a legitimate educational resource, not despite its limitations, but precisely because those limitations were taught. Teachers learned to show students how to use Wikipedia as a starting point, how to verify cited sources, how to cross-reference. That transformation took nearly fifteen years.
With AI, we don't have fifteen years.
The technology is advancing at unprecedented speed. Large language models trained on our content are now answering questions directly. When someone asks ChatGPT or Gemini a factual question, they get an answer synthesized partly from our 25 years of work, but they never visit our site, never see our citation standards, never encounter our editing community. The value we created flows outward without attribution, without reciprocity, without any mechanism for us to benefit or even to verify how our knowledge is being used.
This isn't theft. It's evolution. And we have to evolve with it or become a historical artifact that AI once trained on. A footnote in the training data of models that have moved on without us.
Some will say: we've faced skepticism before and won. When Wikipedia started, experts said amateurs couldn't build an encyclopedia. We proved them wrong. Maybe AI skeptics are right to resist.
But there's a crucial difference. Wikipedia succeeded by being native to the internet, not by ignoring it. We didn't beat Britannica by being better at print. We won by understanding that distribution had fundamentally changed. The institutions that tried to ban Wikipedia, that installed filters, that punished students for citing it, wasted a decade they could have spent adapting.
We can do it again. I believe we can. But ChatGPT caught up in less than three years. The pace is different. We competed with Britannica over fifteen years. We have maybe two years to figure out our relationship with AI before the window closes.
And here's what makes this urgent: OpenAI already trained on our content. Google already did. The question isn't whether AI will use Wikipedia. It already has. The question is whether we'll have any say in how, whether we'll benefit from it, whether we'll shape the terms. Right now, the answer to all three is no.
The data is stark. Cloudflare reports that Anthropic's crawl-to-refer ratio is nearly 50,000:1. For every visitor they send back to a website, their crawlers have already harvested tens of thousands of pages.[15] Stanford research found click-through rates from AI chatbots are just 0.33%, compared to 8.6% for Google Search.[16] They take everything. They return almost nothing. That's the deal we've accepted by default.
Misinformation doesn't just compete with accurate information. It actively undermines the infrastructure of truth. Every day, bad actors work to pollute the information ecosystem. Wikipedia has been, for 25 years, a bulwark against this tide. Our rigorous sourcing requirements, our neutral point of view policy, our transparent editing history. These are battle-tested tools for establishing what's true.
But a bulwark no one visits is just a monument. We need to be in the fight, not standing on the sidelines.
Mobile has fundamentally changed how people consume information. Our data shows the shift: mobile devices went from 62% of our traffic in 2016 to 74% in 2025.[17] Mobile users have shorter sessions, expect faster answers, and are more likely to get those answers from featured snippets, knowledge panels, and AI assistants: all of which extract our content without requiring a visit.
We've spent two decades optimizing for a desktop web that no longer exists. The 2.7 billion people who came online since 2016? Most of them have never used a desktop computer. They experience the internet through phones. And on phones, Wikipedia is increasingly invisible. Our content surfaces through other apps, other interfaces, other brands.
The threat isn't that Wikipedia will be destroyed. It's worse than that. The threat is that Wikipedia will become unknown: a temple filled with aging Wikimedians, self-satisfied by work nobody looks at anymore.
For 25 years, we've told ourselves a story: Wikipedia's value is its content. Sixty million articles. The sum of all human knowledge. Free forever.
This story is true, but incomplete. And the incompleteness is now holding us back.
Wikipedia's real innovation was never the encyclopedia. It was the process that creates and maintains the encyclopedia. The talk pages. The citation standards. The consensus mechanisms. The edit history. The ability to watch any claim evolve over time, to see who changed what and why, to trace every fact to its source.
This isn't just content production. It's a scalable "truth"-finding mechanism. We've been treating our greatest innovation as a means to an end rather than an end in itself.
AI can generate text. It cannot verify claims. It cannot trace provenance. It cannot show its reasoning. It cannot update itself when facts change. Everything we do that AI cannot is the moat. But only if we recognize it and invest in it.
This capability, collaborative truth-finding at scale, may be worth more than the content itself in an AI world. But we've been giving it away for free while treating our website as our core product.
Our mental model is: people visit Wikipedia → people donate → people edit → cycle continues.
Reality is: AI trains on Wikipedia → users ask AI → AI answers → no one visits → donation revenue falls → ???
As the website becomes "just" a production platform (a place where editors work) we need to embrace that reality rather than pretending we're still competing for readers. The readers have found other ways to access our content. We should follow them.
Almost all Wikimedia revenue comes from individual donations, driven by banner campaigns during high-traffic periods. This worked when we were growing. It's increasingly fragile as we're shrinking.
Every major AI company has trained on our content. Every search engine surfaces it. Every voice assistant uses it to answer questions. The value we create flows outward, and nothing comes back except banner fundraising from individual users who are, increasingly, finding our content elsewhere.
We need to be able to generate revenue from entities that profit from our work. Not to become a for-profit enterprise, but to sustain a mission that costs real money to maintain.
Let me be precise about what this means, because I know some will hear "toll booth" and recoil.
Content remains free. The CC BY-SA license isn't going anywhere. Anyone can still access, reuse, and build on our content. That's the mission.
Services are different from content. We already do this through Wikimedia Enterprise: companies that need high-reliability, low-latency, well-formatted access to our data pay for serviced versions. The content is free; the service layer isn't. This isn't betraying the mission. It's sustaining it.
What I'm proposing is expanding this model. Verification APIs. Confidence ratings. Real-time fact-checking endpoints. Services that AI companies need and will pay for, because they need trust infrastructure they can't build themselves.
The moat isn't our content. Everyone already has our content. The moat is our process: the community-verified, transparent, traceable provenance that no AI can replicate.
We're not proposing to replace donation revenue. We're proposing to supplement it. Right now, 100% of our sustainability depends on people visiting our site and seeing donation banners. That's fragile. If entities using our content at scale contributed to sustainability, we'd be more resilient, not replacing individual donors, but diversifying beyond them.
The hostility to AI tools within parts of our community is understandable. But it's also strategic malpractice. We've seen this movie before, with Wikipedia itself. Institutions that tried to ban or resist Wikipedia lost years they could have spent learning to work with it. By the time they adapted, the world had moved on.
AI isn't going away. The question isn't whether to engage. It's whether we'll shape how our content is used or be shaped by others' decisions.
In a world flooded with AI-generated text, what's scarce isn't information. It's verified information. What's valuable isn't content. It's the process that makes content trustworthy. We've spent 25 years building the world's most sophisticated system for collaborative truth-finding at scale. We can tell you not just what's claimed, but why it's reliable, with receipts. We can show you the conversation that established consensus. We can trace the provenance of every fact.
What if we built products that gave confidence ratings on factual claims? What if we helped improve AI outputs by injecting verified, non-generative data into generated answers? What if being "Wikipedia-verified" became a standard the world relied on, the trust layer that sits between AI hallucinations and human decisions?
This is the moat. This is the opportunity. But only if we move fast enough to claim it before someone else figures out how to replicate what we do, or before the world decides it doesn't need verification at all.
What could we offer, concretely? Pre-processed training data, cleaner and cheaper than what AI companies scrape and process themselves. Confidence ratings based on our 25 years of edit history, which facts are stable versus contested, which claims have been challenged and survived scrutiny. A live verification layer that embeds Wikipedia as ground truth inside generated answers. A hybrid multimodal multilingual vectorized dataset spanning Wikipedia, Commons, Wikisource, and Wikidata. And the "Wikipedia-verified" trust mark that AI products could display to signal quality.
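To make "confidence ratings based on edit history" concrete, here is one illustrative way such a rating could be derived: a claim that has survived many revisions with few reverts scores higher than one that is fresh or frequently contested. To be clear, this is not an existing Wikimedia API; the schema, function, and weighting below are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class ClaimHistory:
    """Edit-history summary for one factual claim (hypothetical schema)."""
    age_days: int    # how long the claim has existed in the article
    revisions: int   # revisions touching the claim's text
    reverts: int     # how many of those revisions were reverts

def stability_score(h: ClaimHistory) -> float:
    """Crude 0-1 confidence: old, rarely-contested claims score high.

    Illustrative only; a real service would also weigh editor
    reputation, talk-page consensus, citation quality, and more.
    """
    if h.revisions == 0:
        return 0.5                           # no history: unknown
    contested = h.reverts / h.revisions      # share of edits that were reverts
    age_factor = min(h.age_days / 365, 1.0)  # saturates after one year
    return round(age_factor * (1.0 - contested), 2)

# A long-standing, stable claim vs. a fresh, contested one:
print(stability_score(ClaimHistory(age_days=3650, revisions=40, reverts=2)))  # 0.95
print(stability_score(ClaimHistory(age_days=30, revisions=10, reverts=6)))    # 0.03
```

The point of the sketch is the asymmetry: the raw data for this already exists in every article's revision history. The service layer, not the content, is the product.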
Wikimedia Enterprise already exists to build exactly this kind of offering.[18] The infrastructure is there. The question is whether we have the collective will to resource it, expand it, and treat it as strategic priority rather than side project.
The data is clear: we're losing new editors. The website that built our community is no longer attracting new contributors at sufficient rates. We need new relays.
This might mean funding local events that bring new people into the movement. It might mean rethinking what counts as contribution. It might mean, and I know this is controversial, considering whether some kinds of work should be compensated.
The current money flows primarily to maintaining website infrastructure. If the website is now primarily a production platform rather than a consumer destination, maybe the priority should be recruiting the producers.
And here's what this means for existing editors: investing in production means investing in you. Better tools. Faster workflows. Measurable quality metrics that show the impact of your work. If we're serious about content as our core product, then the people who make the content become the priority, not as an afterthought, but as the central investment thesis. The goal isn't just to have better content faster; it's to make the work of editing more satisfying, more visible, more valued.
Are we an encyclopedia? A knowledge service? A trust infrastructure? The "sum of all human knowledge" vision is beautiful, but the method of delivery may need updating even if the mission doesn't.
In 2018, I argued we should think of ourselves as "Knowledge as a Service". The most trusted brand in the world when it comes to data and information, regardless of where or how people access it. That argument was premature then. It's urgent now.
This is the hardest section to write. Because it implicates all of us, including me.
For 25 years, we've talked about being "the sum of all human knowledge." We've celebrated our 300+ language editions. We've funded programs in the Global South. We've written strategy documents about "knowledge equity" and "serving diverse communities."[19]
And yet. English Wikipedia has 6.8 million articles. Hindi, with over 600 million speakers when including second-language users, has 160,000. The ratio is 42:1.[1][12] Not because Hindi speakers don't want to contribute, but because we built systems, tools, and cultures that center the experience of English-speaking editors from wealthy countries. The knowledge gaps aren't bugs. They're the predictable output of a system designed by and for a narrow slice of humanity.
Our decline is the diversity debt coming due.
We optimized for the editors we had rather than the editors we needed. We celebrated efficiency gains that masked a shrinking, homogenizing base. We built the most sophisticated vandalism-fighting tools in the world, and those same tools systematically reject good-faith newcomers, especially those who don't already know the unwritten rules. Research shows that newcomers from underrepresented groups are reverted faster and given less benefit of the doubt.[20] We've known this for over a decade. We've studied it, published papers about it, created working groups. The trends continued.
The 2030 Strategy named knowledge equity as a pillar.[19] Implementation stalled. The Movement Charter process tried to redistribute power. It fractured.[21] Every time we approach real structural change, the kind that would actually shift resources and authority toward underrepresented communities, we find reasons to slow down, study more, consult further. The process becomes the product. And the gaps persist.
Here's the uncomfortable truth: the Global North built Wikipedia, and the Global North still controls it. The Foundation is in San Francisco. The largest chapters are in Germany, France, the UK.[22] The technical infrastructure assumes fast connections and desktop computers. The sourcing standards privilege published, English-language, Western academic sources, which means entire knowledge systems are structurally excluded because they don't produce the "reliable sources" our policies require.[23]
I'm not saying this to assign blame. I'm saying it because our decline cannot be separated from our failure to grow beyond our origins. The 2.7 billion people who came online since 2016 aren't choosing TikTok over Wikipedia just because TikTok is flashier. They're choosing platforms that speak to them, that reflect their experiences, that don't require mastering arcane markup syntax and navigating hostile gatekeepers to participate.
If we want to survive, knowledge equity cannot be a side initiative. It must be front and center of the strategy. Not because it's morally right (though it is) but because it's existentially necessary. The future of the internet is not in Berlin or San Francisco. It's in Lagos, Jakarta, São Paulo, Dhaka. If we're not there, we're nowhere.
And being there means more than translating English articles. It means content created by those communities, about topics they care about, using sources they trust, through tools designed for how they actually use the internet. It means redistributing Foundation resources dramatically toward the Global South. It means accepting that English Wikipedia's dominance might need to diminish for the movement to survive.
That's the disruption we haven't been willing to face. Maybe it's time.
I've watched and been part of this movement for twenty years, and I've seen this pattern before. Some old-timers may remember how much I like being annoying.
We identify a problem. We form a committee. We draft a process. We debate the process. We modify the process. We debate the modifications. Years pass. The world moves on. We start over.
We are in a loop, and it feels like we have grown used to it.
Perhaps we have even grown to love this loop?
But I, for one, am exhausted by it.
No one here is doing something wrong. It is the system we built that is wrong. We designed governance for a different era. One where we were pioneers inventing something new, where deliberation was a feature not a bug, where the world would wait for us to figure things out.
I should be honest here: I helped build this system. I was Board Chair from 2016 to 2018. I saw these trends emerging. In 2016, I launched the discussion that became the Wikimedia 2030 Strategy process precisely because I believed we needed to change course before crisis hit.
The diagnosis was right. The recommendations were largely right. The execution failed. Three years of deliberation, thousands of participants, a beautiful strategic direction, and then the pandemic hit, priorities shifted, and the implementation stalled. The strategy documents sit on Meta-Wiki, mostly unread, while the trends they warned about have accelerated.
I bear responsibility for that. Every Board Chair faces the same constraint: authority without control. We can set direction, but we can't force implementation. The governance system diffuses power so effectively that even good strategy dies in execution. That's not an excuse. It's a diagnosis. And it's why this time must be different.
Part of the problem is structural ambiguity. The Wikimedia Foundation sits at the center of the movement, holding the money, the technology, the trademarks, but often behaves as if it's just one stakeholder among many. In 2017, it launched the Strategy process but didn't lead it to completion. It neither stepped aside to let communities decide nor took full responsibility for driving implementation. This isn't anyone's fault. It's a design flaw from an earlier era. The Foundation's position made sense when we were small and scrappy. It makes less sense now.
The governance structures that carried us for 25 years may not be fit for the next 25. That's not failure. That's evolution. Everything should be on the table, including how we organize ourselves.
The world is no longer waiting.
By Wikipedia's 26th birthday, we need to have made fundamental decisions about revenue models, AI integration, knowledge equity, and contributor recruitment.
That's the window. After that, we're managing decline.
Why two years? I can't derive the number rigorously. What I know is that every second counts when a competing solution can catch up with you in three years. At current decline rates, another 10-15% drop in page views threatens donation revenue, and our contributor pipeline is collapsing fast enough that two more years of decline means the replacement generation simply won't exist in sufficient numbers. And one thing the internet's short history has shown us is that decline accelerates over time.
Is two years precise? No. It's an educated guess, a gut feeling, a forcing function. But the direction is clear, and "later" isn't a real option. We've already been late. The urgency isn't manufactured. It's overdue.
This time, I'm not calling for another movement-wide negotiation. Those have run their course.
I'm calling on the Wikimedia Foundation to finally take the leadership we need.
To stop waiting for consensus that will never come. To gather a small group of trusted advisors: not the usual suspects, not another room of Global North veterans, but people who represent where the internet is actually going. Do the hard thinking behind closed doors, then open it wide for debate, and repeat. Fast cycles: closed deep work, open challenge, back to closed work. Not a three-year drafting exercise. A six-month sprint.
This needs to be intentionally disruptive. Radical in scope. The kind of process that makes people uncomfortable precisely because it might actually change things, including who holds power, where resources flow, and whose knowledge counts. The Foundation has the resources, the legitimacy, and, if it chooses, the courage. What it's lacked is the mandate to lead without endless permission-seeking. I'm saying: take it. Lead. We'll argue about the details, but someone has to move first.
Let's do it.
Twenty-five years ago, a group of idealists believed humanity could build a free encyclopedia together. They were right. What they built changed the world.
The question now is whether what we've built can continue to matter.
I've watched parents ask ChatGPT questions at the dinner table instead of looking up Wikipedia. I've watched students use AI tutors that draw on our content but never send them our way. I've watched the infrastructure of knowledge shift underneath us while we debated process improvements.
We have something precious: a proven system for establishing truth at scale, built by millions of people over a quarter century. We have something rare: a global community that believes knowledge and information should be free. We have something valuable: a brand that still, for now, means "trustworthy."
What we're running out of is time.
To every Board member, every staffer, every Wikimedian reading this: the numbers don't lie. The internet added 2.7 billion users since 2016. Our readership declined. That's not a plateau. That's being left behind. And the forces reshaping knowledge distribution aren't going to wait for us to finish deliberating.
This is not an attack on what we've built. It's a call to defend it by changing it. The Britannica didn't fail because its content was bad. It failed because it couldn't adapt to how knowledge distribution was evolving. We have an opportunity they didn't: we can see the shift happening. We can still act.
What does success look like? Not preserving what we have.
Success is the courage to reopen every discussion, to critically reconsider everything we've been for 25 years that isn't enshrined in the mission itself.
The mission is sacred. Everything else—our structures, our revenue models, our relationship with technology, our governance—is negotiable. It has to be.
Happy birthday, Wikipedia. You've earned the celebration.
Now let's earn the next 25 years.
All data comes from public sources: Wikimedia Foundation statistics (stats.wikimedia.org), ITU Facts and Figures 2025, and Our World in Data. The methodology and complete datasets are available on request.
| Metric | 2016 | 2021 | 2025 | Change |
|---|---|---|---|---|
| Internet Users (World) | 3.27B | 5.02B | 6.0B | +83% |
| Page Views (Annual) | 194B | 192B | 177B | -9% |
| New Registrations (Monthly Avg) | 317K | 286K | 202K | -36% |
| Edits (Monthly Avg) | 15.6M | 21.6M | 21.4M | +37% |
| Edits per New User | 49.0 | 75.4 | 105.7 | +116% |
| Mobile Share (EN Wiki) | 62% | 68% | 74% | +12pp |
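The "Edits per New User" row above is derived from the two rows before it. Reproducing it from the rounded monthly averages (the last decimal differs slightly from the table, which presumably used unrounded data):

```python
# Reproduce the "Edits per New User" row from the rounded monthly averages.
new_regs = {2016: 317_000, 2021: 286_000, 2025: 202_000}  # new registrations
edits    = {2016: 15.6e6,  2021: 21.6e6,  2025: 21.4e6}   # total edits

for year in (2016, 2021, 2025):
    print(year, round(edits[year] / new_regs[year], 1))
# 49.2, 75.5, 105.9 — within rounding of the table's 49.0 / 75.4 / 105.7;
# the ratio more than doubles either way.
```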
| Year | Internet Users (index, 2016=100) | Page Views (index, 2016=100) | Gap |
|---|---|---|---|
| 2016 | 100 | 100 | – |
| 2017 | 106 | 98 | -8 |
| 2018 | 116 | 98 | -18 |
| 2019 | 128 | 100 | -28 |
| 2020 | 144 | 103 | -41 |
| 2021 | 154 | 99 | -55 |
| 2022 | 162 | 94 | -68 |
| 2023 | 168 | 98 | -70 |
| 2024 | 177 | 97 | -80 |
| 2025 | 183 | 91 | -92 |
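The indexed table normalizes both series to 100 in the 2016 base year; the gap is simply the difference between the two indices. For the 2025 row:

```python
# How the indexed comparison is built: both series start at 100 in 2016.
users_2016, views_2016 = 3.27, 194.0   # billions of users, billions of views
users_2025, views_2025 = 6.0, 177.0

users_idx = round(100 * users_2025 / users_2016)  # 183
views_idx = round(100 * views_2025 / views_2016)  # 91
gap = views_idx - views_idx + (views_idx - users_idx)  # -92
print(users_idx, views_idx, gap)
```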
Causation vs. correlation: This analysis identifies trends and divergences but does not prove causation. Multiple factors contribute to these patterns, including platform competition, mobile shifts, search engine changes, and AI integration.
Primary Data Sources:
AI & Bot Traffic:
Editor Demographics:
Academic Research:
Strategy & Governance:
Financials: