The Signpost
Single-page Edition
WP:POST/1
15 January 2026

News and notes
Wikipedia's 25th anniversary is here!
Special report
Wikipedia at 25: A Wake-Up Call
Serendipity
The WMF wants to buy you books!
WikiProject report
Time for a health check: the Vital Signs 2026 campaign
In the media
Fake Acting President Trump and a Wikipedia infobox
Community view
The inbox behind Wikipedia
Recent research
Art museums on Wikidata; comparing three comparisons of Grokipedia and Wikipedia
Traffic report
Tonight I'm gonna rock you
Comix
Oh come on man.
 

File:Observeowl wikipetan 25.png (ObserveOwl, CC BY-SA 4.0)

Wikipedia's 25th anniversary is here!

By Bluerasberry, Bri, Oltrepier and JPxG
Happy birthday — by ObserveOwl

Wikipedia's 25th anniversary: join the live birthday event!

We're happy to remind you that January 15, 2026, marks the 25th anniversary of Wikipedia's launch in 2001.

To commemorate, a proposal at the Village pump (proposals) found consensus to display a variant of the puzzle globe logo for the anniversary. Celebratory events around the world can be found at meta:Wikipedia 25/Events. A notable entry on the event list is the first Wiki meetup in Iraq.

As part of the anniversary, Wikipedia's 25th birthday party will be hosted virtually on January 15, starting at 16:00 UTC, with the pre-party countdown beginning at 15:45 UTC. The virtual celebration will be broadcast on the Wikimedia Foundation website – through Owncast – where users will be able to watch the party in Arabic, French, Spanish, Portuguese or Chinese. The same event will also be streamed live on Wikipedia's official YouTube channel, although only in English. Come join the party! – B and O

WikiConference North America 2026 seeks applicants for travel scholarships

WikiConference North America will be held in Edmonton, Alberta, in September 2026. The organizers invite Wikipedia editors from Canada, the United States, Mexico, and the Caribbean to apply for travel scholarships until 15 February. While much of the conference is in English and may be of interest to English Wikipedia editors, there are always French- and Spanish-speaking groups present, and all other language communities are also welcome. – BR

Brief notes

A German hope chest



Reader comments

File:Chart1 divergence (cropped).png (Claude, CC BY 4.0)

Wikipedia at 25: A Wake-Up Call

By Christophe Henner - schiste
This piece was first published on Meta-wiki on January 9, 2026, with the preamble "This is a personal essay. It reflects the views of the author."


Wikipedia at 25: A Wake-Up Call
The internet is booming. We are not.

By Christophe Henner - schiste
Former Chair of the Board of Trustees, Wikimedia Foundation
20-year Wikimedian

Part I: the crisis

92 points
The gap between internet growth (+83%) and our page views (-9%) since 2016

On 15 January, 2026, Wikipedia turns 25. A quarter century of free knowledge. The largest collaborative project humanity has ever undertaken. Sixty million articles in over 300 languages.[1] Built by volunteers. Free forever.

I've been part of this movement for more than half of that journey (twenty years). I've served as Chair of Wikimedia France and Chair of the Wikimedia Foundation Board of Trustees. I've weathered crises, celebrated victories, made mistakes, broken some things, built other things, and believed every day that what we built matters.

We should be celebrating. Instead, I'm writing this because the numbers tell a story that demands urgent attention. It's nothing brand new, especially if you read/listen to my ranting, but now it's dire.

+83%
Internet users growth
2016 → 2025
(3.3B → 6.0B)[2]
-9%
Wikimedia page views
2016 → 2025
(194B → 177B)[3]
↑ A 92 percentage point divergence[4]

Since 2016, humanity has added 2.7 billion people to the internet.[2] Nearly three billion new potential readers, learners, contributors. In that same period, our page views declined. Not stagnated. Declined. The world has never been more online, and yet, fewer and fewer people are using our projects.

To put this in concrete terms, if Wikimedia had simply kept pace with internet growth, we would be serving 355 billion page views annually today. Instead, we're at 177 billion. We're missing half the audience we should have.
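For readers who want to check the arithmetic, here is a minimal sketch (in Python, using the rounded figures from Appendix A) of how the +83%, -9%, 92-percentage-point divergence and the roughly 355 billion counterfactual page views are derived; exact values shift slightly with rounding.

# Back-of-the-envelope check of the divergence figures, using Appendix A numbers.
internet_2016, internet_2025 = 3.27e9, 6.0e9   # internet users worldwide
views_2016, views_2025 = 194e9, 177e9          # annual Wikimedia page views

internet_growth = internet_2025 / internet_2016 - 1        # ~ +0.83
views_growth = views_2025 / views_2016 - 1                 # ~ -0.09
divergence_pp = (internet_growth - views_growth) * 100     # ~ 92 percentage points

# Counterfactual: page views growing at the same rate as internet users since 2016.
counterfactual_views = views_2016 * (internet_2025 / internet_2016)   # ~355-356 billion

print(f"internet {internet_growth:+.0%}, page views {views_growth:+.0%}, gap {divergence_pp:.0f} pp")
print(f"counterfactual 2025 page views: {counterfactual_views / 1e9:.0f}B vs actual {views_2025 / 1e9:.0f}B")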

And these numbers are probably optimistic. In twenty years of working with web analytics, I've learned one thing: the metrics always lie, and never in your favor. AI crawlers have exploded, up 300% year-over-year according to Arc XP's CDN data,[5] now approaching 40% of web traffic according to Imperva's 2024 Bad Bot Report.[6] How much of our "readership" is actually bots harvesting content for AI training? Wikimedia's analytics team has worked to identify and filter bot traffic, and I've excluded known bots from the data in this analysis, but we know for a fact that detection always misses a significant portion. We don't know precisely how much. But I'd wager our real human audience is lower than the charts show.

As this piece was being finalized in January 2026, third-party analytics confirmed these trends. Similarweb data shows Wikipedia lost over 1.1 billion visits per month between 2022-2025, a 23% decline.[7] The convenient explanation is "AI summaries." I'm skeptical. What we're witnessing is something more profound: a generational shift in how people relate to knowledge itself. Younger users don't search. They scroll. They don't read articles. They consume fragments. The encyclopedia form factor, our twenty-year bet, may be losing relevance faster than any single technology can explain. AI is an accelerant, not the fire.

But readership is only part of the crisis. The pipeline that feeds our entire ecosystem (new contributors) is collapsing even faster.

-36%
Drop in new registrations[8]
(2016: 317K/mo → 2025: 202K/mo)
2.1×
Edits per new user[9]
(Growing concentration risk)
+37%
Edit volume increase[10]
(Fewer editors work harder)

Read those numbers together: we're acquiring 36% fewer new contributors while total edits have increased. This means we're extracting more work from a shrinking base of committed volunteers. The system is concentrating, not growing. We are becoming a smaller club working harder to maintain something fewer people see.
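Footnote 9 defines "edits per new user" as total monthly edits divided by new monthly registrations; a minimal sketch of that ratio and of the roughly 2.1× concentration figure, again using the Appendix A numbers:

# Concentration ratio from footnote 9: monthly edits / monthly new registrations.
edits_per_month = {2016: 15.6e6, 2025: 21.4e6}        # Appendix A, "Edits (Monthly Avg)"
new_users_per_month = {2016: 317e3, 2025: 202e3}      # Appendix A, "New Registrations (Monthly Avg)"

ratio_2016 = edits_per_month[2016] / new_users_per_month[2016]   # ~49 edits per new user
ratio_2025 = edits_per_month[2025] / new_users_per_month[2025]   # ~106 edits per new user
concentration = ratio_2025 / ratio_2016                          # ~2.15-2.16x, the ~2.1x quoted above

print(f"2016: {ratio_2016:.0f}  2025: {ratio_2025:.0f}  concentration: {concentration:.2f}x")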

And let's be honest about who that club is. The contributor base we're losing was never representative to begin with. English Wikipedia, still the largest by far, is written predominantly by men from North America and Western Europe.[11] Hindi Wikipedia has 160,000 articles for 600 million speakers. Bengali has 150,000 for 230 million speakers. Swahili, spoken by 100 million people across East Africa, has 80,000.[1][12] The "golden age" we mourn was never golden for the Global South. It was an English-language project built by English-language editors from English-language sources. Our decline isn't just a quantity problem. It's the bill coming due for a diversity debt we've been accumulating for two decades.

The 2.7 billion people who came online since 2016? They came from India, Indonesia, Pakistan, Nigeria, Bangladesh, Tanzania, Iraq, Algeria, Democratic Republic of the Congo, Myanmar, Ethiopia, Ghana. They came looking for knowledge in their languages, about their contexts, written by people who understand their lives. And we weren't there. We're still not there. The contributor pipeline isn't just shrinking. It was never built to reach them in the first place.

Some will say: we're simply better at fighting vandalism now, so we need fewer editors. It's true we've improved our anti-vandalism tools over the years. But we've been fighting vandalism consistently for two decades. This isn't a sudden efficiency gain. And even if anti-vandalism explains some of the concentration, it cannot explain all the data pointing in the same direction: declining page views, declining new registrations, declining editor recruitment, all while the internet doubles in size. One efficiency improvement doesn't explain a systemic pattern across every metric.

Let me be clear about what these numbers do and don't show. Content quality is up. Article count is up. Featured articles are up. The encyclopedia has never been better. That's not spin. That's the work of an extraordinary community that built something remarkable.

The question isn't whether the work is good. It's whether the ecosystem that produces the work is sustainable. And the answer, increasingly, is no.

We've now hit the limits of that optimization. For years, efficiency gains could compensate for a shrinking contributor base. That's no longer true. When edits per new user doubles, you're not seeing a healthy community getting more efficient. You're seeing concentration risk. Every experienced editor who burns out or walks away now costs exponentially more to replace, because there's no pipeline behind them. Our efficiency gains can no longer compensate for when an experienced editor stops editing. The quality metrics aren't evidence that we're fine. They're evidence that we built something worth saving, and that the people maintaining it are increasingly irreplaceable.

Why page views matter, and what they miss

Some will ask: why do page views matter so much? We're a nonprofit. We don't sell ads. Who cares if fewer people visit?

Three answers:

  1. Page views are how we fund ourselves. The donation banners that sustain this movement require eyeballs. Fewer visitors means fewer donation opportunities means less money. This isn't abstract. It's survival.
  2. Page views are how we recruit. Our most successful contributor pipeline has always been: someone reads an article → notices an error or gap → clicks "edit" → becomes a contributor. Fewer readers means fewer potential editors. The contributor crisis and the readership crisis are linked.
  3. Page views are how editors know their work matters. The feedback loop that has sustained volunteer motivation for 25 years is simple: I write, people read, I can see the impact. Break that loop and you break the engine for some contributors. Social glue would then be the main retention lever we'd have.

So when I say page views are declining, I'm not pointing at a vanity metric. I'm pointing at survival, mission, and motivation, all under pressure simultaneously.

Some will counter: fewer readers means lower infrastructure costs. That's true in the moment it happens. If readership declines, recruitment declines. To compensate, we need to invest more in active recruitment, better editing tools, and editor retention, all of which cost money. The short-term savings from lower traffic are swamped by the long-term costs of a collapsing contributor pipeline. We need to build additional revenue streams precisely so we can keep improving editor efficiency, keep recruiting people, and fund the work required to do that. The cost doesn't disappear. It shifts.

The uncomfortable addition: our content is probably reaching more people than ever. It's just reaching them through intermediaries we don't control: search snippets, AI assistants, apps, voice devices. The knowledge spreads. The mission arguably succeeds. But we don't see it, we can't fund ourselves from it, and our editors don't feel it.

This creates a dangerous gap. The world benefits from our work more than ever. We benefit from it less than ever. That's not sustainable.

The Strategic imperative: Both/And

Some will say: focus on page views. Optimize the website. Fight for direct traffic. That's the mission we know. Others will say: page views are yesterday's metric. Embrace the new distribution. Meet people where they are, even if "where they are" is inside an AI assistant.

Both camps are half right. We need both. Not one or the other. Both.

We need to defend page views, because they're survival today. Better mobile experience. Better search optimization. Better reader features. Whatever it takes to keep people coming directly to us.

AND we need to build new models, because page views alone won't sustain us in five years. Revenue from entities that use our content at scale. New metrics that capture use and reuse beyond our site. New ways to show editors their impact even when it happens off-platform.

The two-year window isn't about abandoning what works. It's about building what's next while what works still works. If we wait until page views are critical, we won't have the resources or time to build alternatives.

Expanding what we measure

Page views remain essential. But we need to add:

The goal isn't to replace page views with these metrics. It's to see the full picture. A world where page views decline but reach expands is different from a world where both decline. We need to know which world we're in, and right now, we're flying blind.

Two forms of production

Here's a frame that might help community members see where they fit: we need both human production and machine production.

Human production is what we do now. Editors write and maintain content. Community verifies and debates. It's slow, high-trust, transparent. It cannot be automated. It is irreplaceable.

Machine production is what we could do. Structured data through Wikidata. APIs that serve verification endpoints. Confidence ratings on claims. Services that complement AI systems rather than compete with them. It's fast, scalable, programmatic.

These aren't competing approaches. They're complementary. Human production creates the verified knowledge base. Machine production makes it usable at AI scale. Content producers (the editors who write and verify) and content distributors (the systems that package and serve) both matter. Both need investment. Both are part of the mission.

If you're an editor: your work powers not just Wikipedia, but an entire ecosystem of AI systems that need verified information. That's more impact, not less. The distribution changed. The importance of what you do only grew.

Three eras of Wikimedia growth

To understand where we are, we need to understand where we've been, and be honest about what we built and for whom. The relationship between Wikimedia and the broader internet has gone through three distinct phases. I call them the Pioneers, the Cool Kids, and the Commodity:[13]

2001–2007
The Pioneers: Outpacing the Market
Internet +18%/yr · Edits +238%/yr · Registrations +451%/yr
Internet users grew ~18% annually. We scaled orders of magnitude faster than the internet itself. But let's be clear about who "we" was: overwhelmingly English-speaking, male, from wealthy countries with fast internet and time to spare. We built something extraordinary, and we built it for people who looked like us.
2008–2015
The Cool Kids: Keeping Pace
Internet +8%/yr · Edits +12%/yr · Registrations +10%/yr
Wikipedia became mainstream, a household name. But mainstream where? The global internet was shifting. Mobile-first users in the Global South were coming online by the hundreds of millions, and we kept optimizing for desktop editors in the Global North. We called it success. It was the beginning of the gap.
2016–Now
The Commodity: Falling Behind
Internet +7%/yr · Edits +4%/yr · Registrations -5%/yr
Page views: declining. New registrations: collapsing. The billions who came online found an encyclopedia that didn't speak their languages, didn't cover their topics, and wasn't designed for their devices. We became infrastructure for AI companies while remaining invisible to the people we claimed to serve. Our content powers the internet. But whose content? Whose internet?

The pandemic briefly disguised this trend. In April 2020, page views spiked 25% as the world stayed home. New registrations jumped 28%.[14] For a moment, it looked like we might be turning a corner. We weren't. The spike didn't translate into sustained growth. By 2022, we were back on the declining trajectory, and the decline has accelerated since.

The harsh truth: while the internet nearly doubled in size, Wikimedia's share of global attention was cut in half. And the people we lost, or never had, are precisely the people the internet added: young, mobile-first, from the Global South. We went from being essential infrastructure of the web to being one option among many, and increasingly, an option that doesn't speak their language, literally or figuratively.

Part II: why this matters now

These numbers would be concerning in any era. In 2026, they're existential.

We're living through the full deployment of digital society. Not the internet's arrival (that happened decades ago) but its complete integration into how humanity thinks, learns, and makes decisions. Three forces are reshaping the landscape we occupy:

The AI transformation

At several points in debates about our future, AI has been mentioned as a "tool," something we can choose to adopt or not, integrate or resist. I believe this is a fundamental misreading of the situation. AI is not a tool; it is a paradigm shift.

I've seen this before. In 2004, when I joined Wikipedia, we faced similar debates about education. What do we do about students who copy-paste from Wikipedia? We saw the same reactions: some institutions tried to ban Wikipedia, others installed filters, others punished students who cited it. All these defensive approaches failed. Why? Because you cannot prohibit access to a tool that has become ubiquitous. Because students always find workarounds. And above all, because prohibition prevents critical learning about the tool itself.

Wikipedia eventually became a legitimate educational resource, not despite its limitations, but precisely because those limitations were taught. Teachers learned to show students how to use Wikipedia as a starting point, how to verify cited sources, how to cross-reference. That transformation took nearly fifteen years.

With AI, we don't have fifteen years.

The technology is advancing at unprecedented speed. Large language models trained on our content are now answering questions directly. When someone asks ChatGPT or Gemini a factual question, they get an answer synthesized partly from our 25 years of work, but they never visit our site, never see our citation standards, never encounter our editing community. The value we created flows outward without attribution, without reciprocity, without any mechanism for us to benefit or even to verify how our knowledge is being used.

This isn't theft. It's evolution. And we have to evolve with it or become a historical artifact that AI once trained on. A footnote in the training data of models that have moved on without us.

Some will say: we've faced skepticism before and won. When Wikipedia started, experts said amateurs couldn't build an encyclopedia. We proved them wrong. Maybe AI skeptics are right to resist.

But there's a crucial difference. Wikipedia succeeded by being native to the internet, not by ignoring it. We didn't beat Britannica by being better at print. We won by "understanding" that distribution had fundamentally changed. The communities that tried to ban Wikipedia, that installed filters, that punished students for citing it. They wasted a decade they could have spent adapting.

We can do it again. I believe we can. But ChatGPT caught up in less than three years. The pace is different. We competed with Britannica over fifteen years. We have maybe two years to figure out our relationship with AI before the window closes.

And here's what makes this urgent: OpenAI already trained on our content. Google already did. The question isn't whether AI will use Wikipedia. It already has. The question is whether we'll have any say in how, whether we'll benefit from it, whether we'll shape the terms. Right now, the answer to all three is no.

The data is stark. Cloudflare reports that Anthropic's crawl-to-refer ratio is nearly 50,000:1. For every visitor they send back to a website, their crawlers have already harvested tens of thousands of pages.[15] Stanford research found click-through rates from AI chatbots are just 0.33%, compared to 8.6% for Google Search.[16] They take everything. They return almost nothing. That's the deal we've accepted by default.

The Trust crisis

Misinformation doesn't just compete with accurate information. It actively undermines the infrastructure of truth. Every day, bad actors work to pollute the information ecosystem. Wikipedia has been, for 25 years, a bulwark against this tide. Our rigorous sourcing requirements, our neutral point of view policy, our transparent editing history. These are battle-tested tools for establishing what's true.

But a bulwark no one visits is just a monument. We need to be in the fight, not standing on the sidelines.

The attention economy

Mobile has fundamentally changed how people consume information. Our data shows the shift: mobile devices went from 62% of our traffic in 2016 to 74% in 2025.[17] Mobile users have shorter sessions, expect faster answers, and are more likely to get those answers from featured snippets, knowledge panels, and AI assistants: all of which extract our content without requiring a visit.

We've spent two decades optimizing for a desktop web that no longer exists. The 2.7 billion people who came online since 2016? Most of them have never used a desktop computer. They experience the internet through phones. And on phones, Wikipedia is increasingly invisible. Our content surfaces through other apps, other interfaces, other brands.

The threat isn't that Wikipedia will be destroyed. It's worse than that. The threat is that Wikipedia will become unknown: a temple filled with aging Wikimedians, self-satisfied by work nobody looks at anymore.

Part III: what we got wrong

For 25 years, we've told ourselves a story: Wikipedia's value is its content. Sixty million articles. The sum of all human knowledge. Free forever.

This story is true, but incomplete. And the incompleteness is now holding us back.

The process is the product

Wikipedia's real innovation was never the encyclopedia. It was the process that creates and maintains the encyclopedia. The talk pages. The citation standards. The consensus mechanisms. The edit history. The ability to watch any claim evolve over time, to see who changed what and why, to trace every fact to its source.

This isn't just content production. It's a scalable "truth"-finding mechanism. We've been treating our greatest innovation as a means to an end rather than an end in itself.

AI can generate text. It cannot verify claims. It cannot trace provenance. It cannot show its reasoning. It cannot update itself when facts change. Everything we do that AI cannot is the moat. But only if we recognize it and invest in it.

This capability, collaborative truth-finding at scale, may be worth more than the content itself in an AI world. But we've been giving it away for free while treating our website as our core product.

The website is now a production platform

Our mental model is: people visit Wikipedia → people donate → people edit → cycle continues.

Reality is: AI trains on Wikipedia → users ask AI → AI answers → no one visits → donation revenue falls → ???

As the website becomes "just" a production platform (a place where editors work) we need to embrace that reality rather than pretending we're still competing for readers. The readers have found other ways to access our content. We should follow them.

Our revenue model assumes 2005

Almost all Wikimedia revenue comes from individual donations, driven by banner campaigns during high-traffic periods. This worked when we were growing. It's increasingly fragile as we're shrinking.

Every major AI company has trained on our content. Every search engine surfaces it. Every voice assistant uses it to answer questions. The value we create flows outward, and nothing comes back except banner fundraising from individual users who are, increasingly, finding our content elsewhere.

We need to be able to generate revenue from entities that profit from our work. Not to become a for-profit enterprise, but to sustain a mission that costs real money to maintain.

Let me be precise about what this means, because I know some will hear "toll booth" and recoil.

Content remains free. The CC BY-SA license isn't going anywhere. Anyone can still access, reuse, and build on our content. That's the mission.

Services are different from content. We already do this through Wikimedia Enterprise: companies that need high-reliability, low-latency, well-formatted access to our data pay for serviced versions. The content is free; the service layer isn't. This isn't betraying the mission. It's sustaining it.

What I'm proposing is expanding this model. Verification APIs. Confidence ratings. Real-time fact-checking endpoints. Services that AI companies need and will pay for, because they need trust infrastructure they can't build themselves.

The moat isn't our content. Everyone already has our content. The moat is our process: the community-verified, transparent, traceable provenance that no AI can replicate.

We're not proposing to replace donation revenue. We're proposing to supplement it. Right now, 100% of our sustainability depends on people visiting our site and seeing donation banners. That's fragile. If entities using our content at scale contributed to sustainability, we'd be more resilient, not replacing individual donors, but diversifying beyond them.

Our relationship with AI is adversarial

The hostility to AI tools within parts of our community is understandable. But it's also strategic malpractice. We've seen this movie before, with Wikipedia itself. Institutions that tried to ban or resist Wikipedia lost years they could have spent learning to work with it. By the time they adapted, the world had moved on.

AI isn't going away. The question isn't whether to engage. It's whether we'll shape how our content is used or be shaped by others' decisions.

The opportunity we're missing

In a world flooded with AI-generated text, what's scarce isn't information. It's verified information. What's valuable isn't content. It's the process that makes content trustworthy. We've spent 25 years building the world's most sophisticated system for collaborative truth-finding at scale. We can tell you not just what's claimed, but why it's reliable, with receipts. We can show you the conversation that established consensus. We can trace the provenance of every fact.

What if we built products that gave confidence ratings on factual claims? What if we helped improve AI outputs by injecting verified, non-generative data into generated answers? What if being "Wikipedia-verified" became a standard the world relied on, the trust layer that sits between AI hallucinations and human decisions?

This is the moat. This is the opportunity. But only if we move fast enough to claim it before someone else figures out how to replicate what we do, or before the world decides it doesn't need verification at all.

What could we offer, concretely? Pre-processed training data, cleaner and cheaper than what AI companies scrape and process themselves. Confidence ratings based on our 25 years of edit history, which facts are stable versus contested, which claims have been challenged and survived scrutiny. A live verification layer that embeds Wikipedia as ground truth inside generated answers. A hybrid multimodal multilingual vectorized dataset spanning Wikipedia, Commons, Wikisource, and Wikidata. And the "Wikipedia-verified" trust mark that AI products could display to signal quality.

Wikimedia Enterprise already exists to build exactly this kind of offering.[18] The infrastructure is there. The question is whether we have the collective will to resource it, expand it, and treat it as a strategic priority rather than a side project.

Our investment in people

The data is clear: we're losing new editors. The website that built our community is no longer attracting new contributors at sufficient rates. We need new relays.

This might mean funding local events that bring new people into the movement. It might mean rethinking what counts as contribution. It might mean, and I know this is controversial, considering whether some kinds of work should be compensated.

The current money flows primarily to maintaining website infrastructure. If the website is now primarily a production platform rather than a consumer destination, maybe the priority should be recruiting the producers.

And here's what this means for existing editors: investing in production means investing in you. Better tools. Faster workflows. Measurable quality metrics that show the impact of your work. If we're serious about content as our core product, then the people who make the content become the priority, not as an afterthought, but as the central investment thesis. The goal isn't just to have better content faster; it's to make the work of editing more satisfying, more visible, more valued.

Our mission itself

Are we an encyclopedia? A knowledge service? A trust infrastructure? The "sum of all human knowledge" vision is beautiful, but the method of delivery may need updating even if the mission doesn't.

In 2018, I argued we should think of ourselves as "Knowledge as a Service". The most trusted brand in the world when it comes to data and information, regardless of where or how people access it. That argument was premature then. It's urgent now.

Our failure on Knowledge Equity

This is the hardest section to write. Because it implicates all of us, including me.

For 25 years, we've talked about being "the sum of all human knowledge." We've celebrated our 300+ language editions. We've funded programs in the Global South. We've written strategy documents about "knowledge equity" and "serving diverse communities."[19]

And yet. English Wikipedia has 6.8 million articles. Hindi, with over 600 million speakers when including second-language users, has 160,000. The ratio is 42:1.[1][12] Not because Hindi speakers don't want to contribute, but because we built systems, tools, and cultures that center the experience of English-speaking editors from wealthy countries. The knowledge gaps aren't bugs. They're the predictable output of a system designed by and for a narrow slice of humanity.

Our decline is the diversity debt coming due.

We optimized for the editors we had rather than the editors we needed. We celebrated efficiency gains that masked a shrinking, homogenizing base. We built the most sophisticated vandalism-fighting tools in the world, and those same tools systematically reject good-faith newcomers, especially those who don't already know the unwritten rules. Research shows that newcomers from underrepresented groups are reverted faster and given less benefit of the doubt.[20] We've known this for over a decade. We've studied it, published papers about it, created working groups. The trends continued.

The 2030 Strategy named knowledge equity as a pillar.[19] Implementation stalled. The Movement Charter process tried to redistribute power. It fractured.[21] Every time we approach real structural change, the kind that would actually shift resources and authority toward underrepresented communities, we find reasons to slow down, study more, consult further. The process becomes the product. And the gaps persist.

Here's the uncomfortable truth: the Global North built Wikipedia, and the Global North still controls it. The Foundation is in San Francisco. The largest chapters are in Germany, France, the UK.[22] The technical infrastructure assumes fast connections and desktop computers. The sourcing standards privilege published, English-language, Western academic sources, which means entire knowledge systems are structurally excluded because they don't produce the "reliable sources" our policies require.[23]

I'm not saying this to assign blame. I'm saying it because our decline cannot be separated from our failure to grow beyond our origins. The 2.7 billion people who came online since 2016 aren't choosing TikTok over Wikipedia just because TikTok is flashier. They're choosing platforms that speak to them, that reflect their experiences, that don't require mastering arcane markup syntax and navigating hostile gatekeepers to participate.

If we want to survive, knowledge equity cannot be a side initiative. It must be front and center of the strategy. Not because it's morally right (though it is) but because it's existentially necessary. The future of the internet is not in Berlin or San Francisco. It's in Lagos, Jakarta, São Paulo, Dhaka. If we're not there, we're nowhere.

And being there means more than translating English articles. It means content created by those communities, about topics they care about, using sources they trust, through tools designed for how they actually use the internet. It means redistributing Foundation resources dramatically toward the Global South. It means accepting that English Wikipedia's dominance might need to diminish for the movement to survive.

That's the disruption we haven't been willing to face. Maybe it's time.

Part IV: a path forward

I've watched and been part of this movement for twenty years. And I've seen this pattern before. And some old timers may remember how much I like being annoying.

We identify a problem. We form a committee. We draft a process. We debate the process. We modify the process. We debate the modifications. Years pass. The world moves on. We start over.

We are in a loop, and it feels like we have grown used to it.

Perhaps we have grown to even love this loop?

But I, for one, am exhausted by it.

No one here is doing something wrong. It is the system we built that is wrong. We designed governance for a different era. One where we were pioneers inventing something new, where deliberation was a feature not a bug, where the world would wait for us to figure things out.

I should be honest here: I helped build this system. I was Board Chair from 2016 to 2018. I saw these trends emerging. In 2016, I launched the Wikimedia 2030 Strategy process discussion precisely because I believed we needed to change course before crisis hit.

The diagnosis was right. The recommendations were largely right. The execution failed. Three years of deliberation, thousands of participants, a beautiful strategic direction, and then the pandemic hit, priorities shifted, and the implementation stalled. The strategy documents sit on Meta-Wiki, mostly unread, while the trends they warned about have accelerated.

I bear responsibility for that. Every Board Chair faces the same constraint: authority without control. We can set direction, but we can't force implementation. The governance system diffuses power so effectively that even good strategy dies in execution. That's not an excuse. It's a diagnosis. And it's why this time must be different.

Part of the problem is structural ambiguity. The Wikimedia Foundation sits at the center of the movement, holding the money, the technology, the trademarks, but often behaves as if it's just one stakeholder among many. In 2017, it launched the Strategy process but didn't lead it to completion. It neither stepped aside to let communities decide nor took full responsibility for driving implementation. This isn't anyone's fault. It's a design flaw from an earlier era. The Foundation's position made sense when we were small and scrappy. It makes less sense now.

The governance structures that carried us for 25 years may not be fit for the next 25. That's not failure. That's evolution. Everything should be on the table, including how we organize ourselves.

The world is no longer waiting.

The Two-Year Window

By Wikipedia's 26th birthday, we need to have made fundamental decisions about revenue models, AI integration, knowledge equity, and contributor recruitment.

By Wikipedia's 27th birthday, we need to have executed them.

That's the window. After that, we're managing decline.

Why two years? There is no way to rationalize it. All I know is that every second counts when competing solutions catch up with you in three years. At current decline rates, another 10–15% drop in page views threatens the donation revenue, and our contributor pipeline is collapsing fast enough that two more years of decline means the replacement generation simply won't exist in sufficient numbers. And one thing the internet's short history has shown us is that the pace of decline accelerates with time.

Is two years precise? No. It's an educated guess, a gut feeling, a forcing function. But the direction is clear, and "later" isn't a real option. We've already been late. The urgency isn't manufactured. It's overdue.

This time, I'm not calling for another movement-wide negotiation. Those have run their course.

I'm calling on the Wikimedia Foundation to finally take the leadership we need.

To stop waiting for consensus that will never come. To gather a small group of trusted advisors, and not the usual suspects, not another room of Global North veterans, but people who represent where the internet is actually going. Do the hard thinking behind closed doors, then open it wide for debate, and repeat. Fast cycles. Closed deep work, open challenge, back to closed work. Not a three-year drafting exercise. A six-month sprint.

This needs to be intentionally disruptive. Radical in scope. The kind of process that makes people uncomfortable precisely because it might actually change things, including who holds power, where resources flow, and whose knowledge counts. The Foundation has the resources, the legitimacy, and, if it chooses, the courage. What it's lacked is the mandate to lead without endless permission-seeking. I'm saying: take it. Lead. We'll argue about the details, but someone has to move first.

Let's do it.

Part V: the birthday question

Twenty-five years ago, a group of idealists believed humanity could build a free encyclopedia together. They were right. What they built changed the world.

The question now is whether what we've built can continue to matter.

I've watched parents ask ChatGPT questions at the dinner table instead of looking up Wikipedia. I've watched students use AI tutors that draw on our content but never send them our way. I've watched the infrastructure of knowledge shift underneath us while we debated process improvements.

We have something precious: a proven system for establishing truth at scale, built by millions of people over a quarter century. We have something rare: a global community that believes knowledge and information should be free. We have something valuable: a brand that still, for now, means "trustworthy."

What we're running out of is time.

To every Board member, every staffer, every Wikimedian reading this: the numbers don't lie. The internet added 2.7 billion users since 2016. Our readership declined. That's not a plateau. That's being left behind. And the forces reshaping knowledge distribution aren't going to wait for us to finish deliberating.

This is not an attack on what we've built. It's a call to defend it by changing it. The Britannica didn't fail because its content was bad. It failed because it couldn't adapt to how knowledge distribution was evolving. We have an opportunity they didn't: we can see the shift happening. We can still act.

What does success look like? Not preserving what we have.

Success is the courage to reopen every discussion, to critically reconsider everything we've been for 25 years that isn't enshrined in the mission itself.

The mission is sacred. Everything else—our structures, our revenue models, our relationship with technology, our governance—is negotiable. It has to be.

Happy birthday, Wikipedia. You've earned the celebration.

Now let's earn the next 25 years.

– Christophe

Appendix A: the Data

All data comes from public sources: Wikimedia Foundation statistics (stats.wikimedia.org), ITU Facts and Figures 2025, and Our World in Data. The methodology and complete datasets are available on request.

Key Metrics summary

Key Metrics 2016–2025
Metric 2016 2021 2025 Change
Internet Users (World) 3.27B 5.02B 6.0B +83%
Page Views (Annual) 194B 192B 177B -9%
New Registrations (Monthly Avg) 317K 286K 202K -36%
Edits (Monthly Avg) 15.6M 21.6M 21.4M +37%
Edits per New User 49.0 75.4 105.7 +116%
Mobile Share (EN Wiki) 62% 68% 74% +12pp

The market share collapse

Indexed Growth (2016 = 100)
Year Internet Users Page Views Gap
2016 100 100 0
2017 106 98 -8
2018 116 98 -18
2019 128 100 -28
2020 144 103 -41
2021 154 99 -55
2022 162 94 -68
2023 168 98 -70
2024 177 97 -80
2025 183 91 -92

Methodological notes

  • Page views are filtered to human users (agent=user); bot traffic excluded
  • Edits are "user" editors only, content pages only; excludes anonymous and bots
  • Unique devices are for English Wikipedia only, not all projects
  • 2025 Wikimedia data is partial year (through available months)

Causation vs. correlation: This analysis identifies trends and divergences but does not prove causation. Multiple factors contribute to these patterns, including platform competition, mobile shifts, search engine changes, and AI integration.
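The agent=user filtering described above can be reproduced against the public Wikimedia Analytics REST API (the pageviews/aggregate endpoint). A minimal sketch follows, assuming the Python requests library; the project and date range are illustrative, not necessarily the exact queries behind Appendix A.

# Monthly page views filtered to human traffic (agent=user), via the public
# Wikimedia Analytics REST API. Illustrative only; not the exact pipeline used above.
import requests

BASE = "https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate"

def monthly_views(project="all-projects", agent="user",
                  start="2016010100", end="2025120100"):
    """Return {timestamp: views} for the given agent type ('user' excludes
    traffic that Wikimedia has identified as spider or automated)."""
    url = f"{BASE}/{project}/all-access/{agent}/monthly/{start}/{end}"
    headers = {"User-Agent": "signpost-divergence-check/0.1 (example contact)"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return {item["timestamp"]: item["views"] for item in resp.json()["items"]}

if __name__ == "__main__":
    human = monthly_views(agent="user")
    views_2016 = sum(v for ts, v in human.items() if ts.startswith("2016"))
    print(f"months retrieved: {len(human)}; human page views in 2016: {views_2016 / 1e9:.1f}B")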

Notes and references

  1. ^ a b c Wikipedia has 358 language editions with 342 currently active. English Wikipedia: ~6.9M articles; Hindi: ~163K; Bengali: ~152K; Swahili: ~80K. Sources: Meta-Wiki List of Wikipedias; Statista (December 2024).
  2. ^ a b 2016-2021 from Our World in Data; 2022-2025 from ITU Facts and Figures. Growth: (6.00 - 3.27) / 3.27 = +83%.
  3. ^ All page view data from Wikimedia Statistics. Known bot traffic filtered. 2016: 194.1B, 2025: 177.0B. Calculation: -8.8%, rounded to -9%.
  4. ^ Internet growth (+83%) minus page view growth (-9%) = 92 percentage points. If Wikimedia had grown at the same rate as internet users, we would have 355B page views today instead of 177B.
  5. ^ Arc XP CDN data showing 300% year-over-year increase in AI-driven bot traffic.
  6. ^ Imperva 2024 Bad Bot Report: "LLM feeder" crawlers increased to nearly 40% of overall traffic in 2023.
  7. ^ Similarweb data via DataReportal (June 2025): Wikipedia.org declined from 165M daily visits (March 2022) to 128M (March 2025).
  8. ^ Wikimedia Statistics "New registered users" report. 2016: 317K/month. 2025: 202K/month. Calculation: -36%.
  9. ^ Edits per new user = total monthly edits ÷ new monthly registrations. 2016: 49.0. 2025: 105.7. Ratio: 2.16×.
  10. ^ Wikimedia Statistics "Edits" report. 2016: 15.6M/month. 2025: 21.4M/month. Calculation: +37%.
  11. ^ Community Insights 2018: 90% male, 8.8% female, 48.8% Western Europe. Community Insights 2023: 80% male, 13% women, 4% gender diverse. Sources: 2018 Report, 2023 Report.
  12. ^ a b Ethnologue 2025 via Visual Capitalist: Hindi 609M (345M L1 + 264M L2), Bengali 284M, Swahili 80M+. Note: Hindi L1 (~345M) < English L1 (~390M), but total Hindi speakers exceed 600M.
  13. ^ CAGR calculated for each era using Wikimedia Statistics and ITU/OWID data. Early data (2001-2007) is less complete than recent data.
  14. ^ Early April 2020: 673M page views in 24 hours (highest in 5 years). Nature study (Nov 2021): Edits increased dramatically, "most active period in previous three years." Source: Wikipedia and the COVID-19 pandemic.
  15. ^ Cloudflare Blog: Anthropic's ratio is ~50,000:1; OpenAI at 887:1; Perplexity at 118:1.
  16. ^ Stanford Graduate School of Business research cited in Arc XP analysis: AI chatbots 0.33% CTR vs Google Search 8.6%.
  17. ^ Wikimedia Statistics "Page views by access method": 2016: 62% mobile. 2025: 74% mobile. Consistent across major language Wikipedias.
  18. ^ Wikimedia Enterprise FY 2023-2024: $3.4M revenue (up from $3.2M), 1.8% of Foundation total. Launched March 2021. Source: Diff blog.
  19. ^ a b Wikimedia 2030 Strategic Direction: "Knowledge Equity" as one of two pillars alongside "Knowledge as a Service." Also: WMF Knowledge Equity page.
  20. ^ Halfaker, A., Geiger, R.S., Morgan, J.T., & Riedl, J. (2013). "The Rise and Decline of an Open Collaboration System." American Behavioral Scientist, 57(5), 664-688. Key finding: semi-automated tools reject good-faith newcomers, predicting declining retention. Meta-Wiki summary.
  21. ^ Movement Strategy 2018-20: Charter ratified but implementation contentious. Also: Movement Strategy overview.
  22. ^ Wikimedia Deutschland is largest chapter by budget/staff, followed by France, UK. Foundation HQ in San Francisco. Source: WMF governance structure, chapter annual reports.
  23. ^ State of Internet's Languages Report: English Wikipedia dominates coverage in 98 countries. Global South "significantly less represented than population densities."





Reader comments

File:3-365 Just another day at the library. - Flickr - ginnerobot.jpg (Ginny, CC BY-SA 2.0)

The WMF wants to buy you books!

By HouseBlaster and RAdimer-WMF

Sources are the most important part of writing quality content. But how do we get them? While it's hard to overstate the resources available through The Wikipedia Library, it doesn't have everything. Some editors have in-person access to great local or university libraries, and are able to share resources with volunteers at the resource exchange. But the resource exchange can be limited, especially with newer resources; due to copyright restrictions, full books cannot be shared, and often that's what is needed.

The resource support pilot (WP:RESUP) aims to fill this gap by purchasing resources directly for content editors. The pilot's been open since June, and so far has fulfilled about 20 requests.

What resources qualify?

We can support resources that will be used for creating content on Wikipedia, and which we are able to purchase and get to you. While this is mostly books, we can cover other types of resources too. For specifics, see the relevant FAQ section.

Before making a request, please attempt to get access from your local library and The Wikipedia Library. If your request is in scope of the resource exchange, an unsuccessful request there is required before we can purchase the resource.

Which editors can make requests?

Any extended confirmed user – like you! – can make a request. Again, please double-check that the source is not available through The Wikipedia Library or at your local library before filing a request.

Some Wikimedia movement affiliates have similar programs. If your local affiliate has such a program, we ask that you use their program instead.

How do I make a request?

Use the form on Wikipedia:Resource support pilot to create a page for your resource request and transclude it on Wikipedia:Resource support pilot/Requests. Please let RAdimer-WMF know if you have any questions.

Alternatives



Reader comments

File:110714-N-RM525-063 (5950844785).jpg (US DoD, PD)

Time for a health check: the Vital Signs 2026 campaign

By Femke

It's time for a health check. And no, I don't mean the kind of health check your healthcare professional might offer you. Rather, let's check our most important health articles and ensure they are fit for purpose in 2026.

This is exactly the main goal of the Vital Signs 2026 campaign. Within WikiProject Medicine, we've identified our 101 most important articles: the campaign aims to make sure that all of these meet the B-class criteria by the end of the year. At the moment, there are "only" 15 C-class articles in this list. But medical content often tends to get out of date quickly as science progresses, so it's likely that most articles need at least a bit of TLC, including those listed as good articles (27%) and featured articles (10%).

How to help, and why you don't need to be an expert

Editing medical content is not as difficult as you'd think. Biomedical content has its own sourcing guidelines. In a nutshell, most sources need to be secondary sources published in the last five years. These can be (international) clinical guidelines, review papers, WHO reports, book chapters, or information pages from trusted organisations such as the NHS. On the talk page of each medical article, there is a link to PubMed to find review papers that meet these requirements. Because the source requirements are stricter, there are usually fewer sources to read before you can jump in.

Most of these sources are written in plain(-ish) English, so you do not need a medical degree to understand them. A subset of review papers is written in highly technical language, but you can set these sources aside at first. When you edit medical articles, you initially may want to skip the causes and mechanism (pathophysiology) section of articles, as they are more difficult to grasp, especially for beginners. On the other hand, the epidemiology section can be a good one to start with. Diagnosis and treatment are usually covered well in clinical guidelines, so they provide another good place to start editing medical content.

In terms of campaign tasks, there are big ones and small ones. A few to get started:

  1. Help assess articles for the campaign's Progress table
  2. Update how many people have a condition and related mortality by using the latest Global Burden of Disease study.
  3. Check Commons for better images in text-heavy articles
  4. Add alternative descriptions to images for WP:ACCESSIBILITY
  5. Check the lead for understandability, and leave a talk page message if you cannot resolve it yourself

If you have more time, why not adopt one of the articles? Read it top-to-bottom, update key facts and statistics and remove the overly technical details not relevant to our likely readers.

The importance of editing important articles

In the age of AI, Wikipedia is losing pageviews, partially because Google is pushing its inaccurate AI into search. And maybe that's not (entirely) a bad thing, given the state of some of our medical articles at the moment. Before this campaign started, we scared readers by providing them with cancer survival data more than ten years out of date, and the management section in asthma was even more dated. Google rightly punishes websites for being out of date, but these big articles are the ones most likely to attract readers in high numbers: only if we have more readers can we also have more folks falling into our "rabbit hole" and joining the community. We can make a virtuous cycle out of a vicious one.

Pageviews on (top-importance) medical articles are declining. This is partially due to datedness, but also due to a 2018–2020 change in how Google ranks medical sites for authority (Google seems to downrank Wikipedia's medical content more [1][2]) and, more recently, due to competition from AIs.

And there's more to do. We have articles like Borderline personality disorder, where AI misuse is suspected and requires cleaning; Breast cancer, which is using 2013 sourcing to question the usefulness of screening campaigns for the disease; and Obesity, which does not even mention GLP-1 agonists – like Ozempic – in the lead yet, and has a statistics section (in addition to a more standard epidemiology section) dedicated only to the US.

Editing medical content is impactful. Despite the post-pandemic drop in pageviews, our top-importance medical articles were read by 164,000 people every day last year, amounting to 60 million views in total. And most importantly, people often read these articles when they are going through illness themselves, or when their loved ones are. After every medical GA or FA I’ve written, people have reached out to tell me how the updated content helped them, something you don't get in many other areas of Wikipedia. Will you join us in checking Vital Signs?




Reader comments

File:Official Presidential Portrait of President Donald J. Trump (2025).jpg (Daniel Torok, White House, public domain)

Fake Acting President Trump and a Wikipedia infobox

By Bri and Smallbones
A fake Wikipedia infobox as published by U.S. President Donald Trump on Truth Social, January 11, 2026.

A true fake

An article in The Independent, a British newspaper, sports the headline "Trump shares fake Wikipedia page calling himself 'Acting President' of Venezuela" (archive). To explain: President Donald Trump, who is, at least by all accounts, President of the United States, posted on his Truth Social media account that he is Acting President of Venezuela. This false announcement is apparently a snub to Delcy Rodriguez, the real fake Acting President of Venezuela.

Rodriguez was the Vice-President of Venezuela under Nicolas Maduro, who was removed from office by U.S. troops in a January 3 strike against Venezuela. Rodriguez was then sworn in as Acting President.

TIME, USA Today, Euronews, the Times of India, Latin Times, China's Global Times and scores of other news outlets covered the story. Almost all noted the similarity of the post to something they've seen on Wikipedia, often calling it "a page". Some viewed the post as "sardonic" or "satire". Some just seemed awe-struck and let their jaws hang.

This is not the first time Trump has caused a brouhaha by posting on Truth Social. In April, during the official mourning period for Pope Francis, Trump reposted a picture of himself in papal vestments. In October he posted an AI video of himself dressed as a king, flying a jet fighter and dropping fecal matter on US protesters. – S

"I really don’t like bullies"

Katherine Maher, former CEO of the Wikimedia Foundation and current CEO of NPR, hasn't "brought a tote bag to a knife fight" according to The New York Times. Indeed, the Times calls her "aggressive, refusing to compromise with Congress" while handling multiple crises. She can be "unyielding".

Soon after her 2024 appointment, the right-wing press called for her ouster after publicizing some of her old tweets. Maher responded that they were her personal tweets from long before she joined NPR and told the Times that "I really don’t like bullies". See prior Signpost coverage.

Being CEO of NPR is a difficult job: she is the seventh CEO in the last fifteen years. Furthermore, the last year has been exceptionally difficult, with $500 million in funding cut by the federal government and congressional hearings titled "Anti-American Airwaves: Accountability for the Heads of NPR and PBS." Under Maher, NPR even sued the Corporation for Public Broadcasting.

She will soon be taking maternity leave. Congratulations, Katherine, and keep up the good work. – S

"I have been officially banned by the WMF"

TKTK
When people don't like things, sometimes they write stuff.

D.F. Lovett, on his Substack Edit History, says he has been contacted by Ovsk Mendov, who was "officially banned by the WMF". Mendov tells Lovett that he "recently leaked a massive quantity of sensitive information from WMF wikis and [is] about to release it." If true, this could be a really juicy story, but more likely it will lead to him releasing hundreds of pages of the most boring stuff you've ever read. Messages from ticked-off blocked sockpuppets are like that. This story does have a couple of interesting sections, however. The unlinked letter from the Foundation's Trust and Safety team blocking Mendov does look authentic and could only have been released, according to WMF rules, by the blocked editor. The other interesting section discusses two websites, Wikipediocracy and "Wikipedia Sucks", which are critical of Wikipedia. Lovett suggests that Wikipediocracy has become too tame and has too many members from ArbCom editing the site to really remain a Wikipedia criticism site. But (again according to him), Wikipedia Sucks has kept the faith and is still dishing out the real stuff, not that I recommend it. – S

To ERR is human

TKTK
Bronze Soldier of Tallinn (shown in populated area before 2007 relocation to a cemetery), now a symbol of Estonian–Russian Federation tensions, depicted a Red Army soldier liberating Tallinn

Eesti Rahvusringhääling (ERR), Estonia's government-supported public broadcasting organization, states that "Estonian volunteers [are] struggling to protect [English] Wikipedia from Russian propaganda". It's a bit more complicated than that, but ERR's source, the newspaper Digigeenius (in Estonian and partially paywalled), wrote three articles on the topic over eight days in some detail.

At first glance, the Estonian position looks weak: the Estonian editors on English Wikipedia have been trying to maintain an earlier status quo that listed the birthplace of Estonians born from 1940 to 1991 as simply Estonia, but other, presumably Russian, editors were changing this to Estonian SSR, USSR. From 1940–1941 and 1944–1991, births in the area of Estonia were recorded by the government of the Estonian SSR. From 1942–1944, the area was controlled by the Nazi German army. The 1940 Soviet takeover followed quickly after the Molotov–Ribbentrop Pact, when the Soviets and Nazis secretly divided eastern Europe into Soviet and German spheres of influence. Estonia's argument is that the occupation and annexation of Estonia was illegal and was never agreed to by the Estonians.

ERR quotes Ronald Liive of Digigeenius, who calls the birthplace campaign "mass manipulation on an industrial scale". (Summaries of ERR's text added by The Signpost appear in parentheses.)

"A single user has systematically altered nearly 600 profiles of prominent Estonians. From EU High Representative Kaja Kallas and WRC champion Ott Tänak to supermodel Carmen Kass."

"Their birthplaces were forcibly changed to 'Estonian SSR, Soviet Union.' This is not a technical correction; it is a deliberate attempt to erase the legal continuity of the Republic of Estonia. In one instance, this user spent 21 hours and 40 minutes straight redacting Estonian history."

(When Estonian volunteers tried to change the articles, they were banned for "pushing a nationalistic narrative.")

"Meanwhile, articles like Kaja Kallas's have been locked in their distorted pro-Kremlin state, preventing further factual corrections."

(Liive noted similar issues with the Estonian War of Independence, which he said is being "redefined.")

"In several key articles 'defensive campaign' has been replaced with offensive campaign. Estonia's birth is being framed as 'separatism from Russia,' aligning perfectly with modern imperialist rhetoric.

Considering all this, and although Wikipedia favors simple facts over ideological interpretations, the case for the Estonians now looks much stronger. Several RfCs at the Manual of Style on how to record such birthplaces were closed as "no consensus", but several admins interpret that as meaning Estonian SSR should be used. The obvious compromise of listing the birthplaces as "Estonia under Russian occupation" was suggested, but ignored by both sides.

One additional oddity: how does one person edit for over 20 hours straight? User:Glebushko0703, who signs his posts on en.Wiki as Gigman, added many of the Estonian SSR edits and did indeed put in several 20+ hour days of manual editing on several topics (not all of them Estonian biographies), as well as simply copying and pasting "Estonian SSR, USSR" into biographies. He is also an editor on the Russian Wikipedia, with 210 edits, where he writes about days of the calendar year and sites around Moscow. On his en.Wiki user page he accuses Robert Treufeldt, a board member of the Estonian Wikimedia Chapter, of insulting him on Estonian national television. – S

Wikipedia's governance logic examined

TKTK
Are Wikipedia's readers treated like a horse with blinders? A countercurrents contributor thinks so.

"Provisional Bondi Truths: Containment, Power, and the Struggle to Name Palestine on Wikipedia", published at countercurrents.org, looks at a number of issues that came up at the English Wikipedia in 2025. Among several intriguing insights about Wikipedia are these:

"[L]ong-standing governance logic on Wikipedia [is] most visible in Israel/Palestine coverage, where politically charged topics are managed through timing, attribution, and deferral — determining not only what may be stated, but when claims become speakable and whose framing is allowed to appear as neutral knowledge.

and

Official statements are elevated to anchor the framing; structural or contextual analysis is pushed downward; contested interpretations are withheld pending "further verification." The lead language contracts to what carries the least procedural risk, even when that narrowing strips the event of the structural context that gives it meaning.

– B

In brief

Who's a Jolly Good Fellow?



Do you want to contribute to "In the media" by writing a story or even just an "in brief" item? Edit next week's edition in the Newsroom or leave a tip on the suggestions page.



Reader comments

File:Seattle Water Department employee, 1990 (26452013172).jpg
Seattle Water Department (anon.)
CC BY 2.0
455
2026-01-15

The inbox behind Wikipedia

Contribute   —  
Share this
By Jonatan Svensson Glad
TKTK
The unofficial logo of VRT

Even among experienced Wikipedia editors, including many who have been active for a decade or more, there is often little understanding of what the Volunteer Response Team (VRT) actually does. Beyond a general awareness that VRT handles copyright verification and permissions, most long-time editors are unaware of the wide range of emails VRT handles daily and of the complex role it plays as the public-facing interface of Wikipedia.

What is the most visible public-facing function of almost any organisation? Customer service. It is therefore reasonable to ask how this works for one of the most visited websites in the world. Wikipedia does not have a call centre, a chatbot, or a ticketing system in the conventional sense. Instead, it has a few email inboxes.

These inboxes are handled by VRT, formerly known as OTRS. VRT is a group of trusted Wikimedia volunteers who respond to emails sent to various Wikimedia project addresses, including Wikipedia. For anyone looking for a direct way to contact Wikipedia, the English-language email address info-en@wikimedia.org is listed prominently on both Wikipedia:Contact us (accessible from the left-hand menu in the classic Vector skin and the hamburger menu in the post-2022 Vector update) and the Wikimedia Foundation contact page. While most editors will never interact directly with VRT, nearly everyone has at some point told another user to "email VRT".

Most editors encounter VRT indirectly in a few familiar contexts. These include copyright permission emails when authorship or ownership of a work is unclear, for example when a text or image has been published elsewhere before being uploaded to Wikimedia. Another common case is identity verification, where a user's chosen username corresponds to the name of a notable person and additional confirmation is required.

Those are only a small subset of what arrives in the VRT queues each day.

What kinds of emails does VRT receive?

VRT handles a wide range of incoming messages, many of them from people with little or no prior understanding of how Wikipedia works. Common categories include:

The volume of correspondence is substantial. For the English-language inboxes alone, VRT volunteers reply to hundreds of emails each week, if not each day.

TKTK
During 2025, a noticeable number of emails concerned content updates to our articles about the Gaza genocide and Zionism

Over the past year, VRT has seen an increase in privacy-related requests from article subjects as well as complaints about perceived political bias. Many correspondents allege an anti-Israel stance or a left-leaning perspective (their words) in certain articles. These complaints often focus on how particular events, groups, or individuals are described, the terminology used, or which sources are cited. Complainants may request changes to wording, demand removal of certain statements, or question why contrary viewpoints are presented. VRT volunteers respond by explaining Wikipedia's core content policies, the need for neutral presentation, and the public processes through which editorial disagreements are addressed. Multiple rounds of correspondence are sometimes necessary to clarify why articles are worded as they are and why certain editorial decisions reflect community consensus rather than individual viewpoints.

In practice, it is very rare that senders are satisfied with the outcome. Many simply want to vent their frustration or air their grievances, using email as the only way they know to express their dissatisfaction with how Wikipedia presents certain topics.

One reason so many correspondents turn to VRT is that Wikipedia's talk page system is not always prominent or easy to navigate. Talk pages are often not closely monitored or responded to by enough editors, and their structure can be confusing, especially for people unfamiliar with the site. Article subjects who wish to raise concerns may find themselves left with little option but to email VRT or attempt to locate a relevant noticeboard through trial and error. Both of these on-wiki forums are public, which can discourage participation, and the overall forum structure can feel complex, unfamiliar and intimidating to non-editors.

VRT volunteers respond to all such messages, sometimes engaging in multiple rounds of correspondence. Responses often involve detailed explanations of Wikipedia policies, clarifications on why an article is worded as it is, and guidance on public processes such as edit requests, Requests for Comment or dispute resolution.

Much of this correspondence does not fit neatly into any on-wiki process, and senders may not even be editors. This makes VRT's role one of education, explanation, and setting expectations, rather than direct editorial intervention.

The limits of VRT

A recurring challenge for VRT volunteers is that email is often the wrong venue for resolving content disputes. Volunteers frequently explain that editorial disagreements should be raised on article talk pages, relevant noticeboards, or through established dispute resolution processes. In practice, many correspondents never follow those suggestions. Instead, they continue the discussion by email, expecting personalised explanations of why an article is written as it is, why certain sources are acceptable and others are not, or why Wikipedia cannot simply make a requested change.

All correspondence handled by VRT is covered by a confidentiality agreement and it is therefore not permitted for volunteers to disclose the content of emails on-wiki or elsewhere. This can create challenges when article subjects request changes or deletions, as VRT cannot discuss the specifics of individual cases publicly. As a result, much of VRT's work is invisible even to seasoned editors.

VRT volunteers also receive forwarded correspondence from the Wikimedia Foundation. A few times per month, teams such as Legal, Trust and Safety, or Communications pass along emails from article subjects who are demanding changes, deletions, or corrections. These messages often come with heightened expectations of authority and urgency. The Foundation is aware of VRT's limits and does not expect volunteers to take official action. Instead, they ask VRT to review the issue and determine whether the volunteer community wishes to engage with the matter.

It is important to be clear about what VRT is not. VRT has no editorial power. These volunteers do not decide article content, override community consensus, or act on behalf of the Wikimedia Foundation. Every reply includes a disclaimer stating that the response is not official Foundation correspondence.

Nevertheless, for many people outside the Wikimedia movement, a reply from a Wikipedia email address feels official. VRT volunteers are, in effect, the human interface between Wikipedia and the general public. They are often the first, and sometimes the only, point of contact a concerned reader, article subject, or aggrieved contributor will ever have with the Wikimedia community.

In that sense, VRT functions much like customer service, not by fixing everything directly, but by explaining, redirecting, and setting expectations. It is quiet, mostly invisible work, but it plays a significant role in how Wikipedia is perceived by the world beyond its edit buttons and talk pages.

Getting involved with VRT

I personally recommend that any experienced Wikipedia editor in good standing take a moment to read the Volunteer Response Team recruiting page on Meta-Wiki. In my experience, serving on VRT provides a unique perspective on how Wikipedia interacts with the public and with the Wikimedia Foundation. Volunteers are not selected by the community at large; instead, they are chosen by VRTS administrators, who are themselves appointed by co-optation or by the Wikimedia Foundation.




Reader comments

File:Washington October 2016-12.jpg
Alvesgaspar
CC BY-SA 4.0
80
30
450
2026-01-15

Art museums on Wikidata; comparing three comparisons of Grokipedia and Wikipedia

Contribute   —  
Share this
By Kasia Makowska (WMPL) and Tilman Bayer


A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.

Benchmarking Data Practices of Art Museums in Wikidata

Reviewed by Kasia Makowska (WMPL)
TKTK
From the paper: "Metadata footprint of licences across institutional collections. This chart illustrates the varying proportions of artworks with documented licence or copyright status within each museum’s Wikidata records."

This discussion paper[1] is part of a "Special Collection" of the Journal of Open Humanities Data (JOHD), titled "Wikidata Across the Humanities: Datasets, Methodologies, Reuse", which focuses on Wikidata as both a tool and an object of academic research.

The paper looks at the adoption of key open data best practices, focusing on art museums in Wikidata. The work is outlined in three steps: i) selection of a sample of data repositories of such museums in Wikidata; ii) definition of open data compliance criteria; and iii) reporting the results.

For the selection of repositories, art museums (using the item "art museum", Q207694, as the reference point) with at least 5,000 records in Wikidata were chosen, and the sample was further limited to the ten museums with the most records in Wikidata.
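For readers who want to see what that selection step looks like in practice, it can be approximated with a query against the Wikidata Query Service. The sketch below is our own illustration rather than the paper's code, and it assumes that "records" are counted as items whose collection (P195) is the museum, with "art museum" membership checked via instance of (P31) and its subclasses (P279):

from SPARQLWrapper import SPARQLWrapper, JSON

# Assumptions (not taken from the paper): a "record" is an item whose
# collection (P195) is the museum. The query may be slow on the public endpoint.
QUERY = """
SELECT ?museum (COUNT(?work) AS ?records) WHERE {
  ?museum wdt:P31/wdt:P279* wd:Q207694 .   # art museum (Q207694) or subclass
  ?work   wdt:P195 ?museum .               # work held in the museum's collection
}
GROUP BY ?museum
HAVING (COUNT(?work) >= 5000)              # threshold used in the paper
ORDER BY DESC(?records)
LIMIT 10                                   # the ten best-represented museums
"""

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="SignpostResearchExample/0.1 (demo)")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["museum"]["value"], row["records"]["value"])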

When it comes to defining the compliance criteria, the authors say:

(...) the work seeks to answer the following questions: 1) What criteria can be used to assess the compliance of Art museums' open data practices with Wikidata? 2) Which Art museums are most represented on Wikidata, and what is the level of maturity in their data practices and ecosystem integration? The purpose of this work is to define a set of best practices for open data publishing in Wikidata and to benchmark the current level of compliance among major Art museums. The results will provide a clear roadmap for institutions to improve their open data strategies.

Then, they define a set of data quality criteria, as described below:

The results are then reported and discussed: ten preselected institutions have been assessed based on the above criteria. A full table of results with detailed scores can be found in the paper, with a brief spoiler alert for the less patient readers:

In light of all these assessments, it can be stated that the National Gallery of Art demonstrates the highest level of open data compliance maturity and can be considered a best practice example.

When discussing the results, the authors clearly and transparently outline the limitations of their work, in scope and coverage, and point out additional topics to consider as extension of this work. Interestingly, they mention two criteria (the provision of machine-readable metadata and clear licensing information) which do not form part of the assessment in the paper. This is because analysis shows these to be "not binary properties of an institution, but rather emergent characteristics of digital collections", which is followed by a proposal to reframe them as quantifiable "metadata footprints". The paper also provides an interesting analysis using the copyright status property on Wikidata, with a chart clearly illustrating artwork with documented license or copyright status within each museum's Wikidata records (see above).

In summary, this work provides a really useful benchmark of practices for museums willing to start using Wikidata to enrich and reuse their digital collections. Speaking from an affiliate perspective, such work is a valuable guide for speaking with GLAM institutions, presenting them with good practice examples and suggesting space for improvement.

A final note from the authors highlights another important use for such research:

More importantly, because it clearly highlights the geographical bias in Wikidata, it can also be seen as a call to action: all the top museums in Wikidata (by number of records) are located in the Global North[supp 1]. This is not a coincidence, but rather a reflection of the material and institutional resources required for the sustained digital cultural work that facilitates integration with platforms like Wikidata. This disparity, however, risks creating and reinforcing digital silos that reproduce the unequal global distribution of knowledge. By mapping this limitation, our article aims to raise awareness of this inequity and contribute to scholarly and practical efforts to diversify the digital cultural sphere.

Comparing comparisons of Grokipedia vs. Wikipedia by three different research teams

Reviewed by Tilman Bayer

On October 27, Elon Musk's company xAI launched Grokipedia, an AI-generated encyclopedia designed to rival Wikipedia by addressing its alleged biases, errors, and ideological slants. As summarized in recent Signpost issues (see here, here and here), it immediately attracted a lot of commentary by journalists and pundits, many of them highlighting examples of Grokipedia's own apparent right-wing biases.

At the same time, various academic researchers embarked on more systematic analyses of Grokipedia, resulting in several preprint publications already. These include at least three comparisons with Wikipedia, making this an interesting experiment showing how different research teams may tackle the same kind of questions:

"We define the "Epistemic Profile" of an encyclopedia article not merely as a bibliography, but as the structural composition of its testimonial network. As a practical implementation, we approximate this theoretical goal by mapping which institutions (e.g., Academic, Governmental, Corporate) an encyclopedia grants the authority to speak, as reflected in cited sources."

A fourth analysis, by Włodzimierz Lewoniewski of Poznań University of Economics and Business (author of various other academic publications about Wikipedia), was provided in the form of a blog post and video[5] that compare Grokipedia with Wikipedia editions in 16 languages by listing the number of articles each encyclopedia has across a number of different topics.

Data

As promised in its title ("A comprehensive analysis of Grokipedia"), the Cornell team's paper is based on the largest Grokipedia dataset:

"We scraped Grokipedia from 28 to 30 October 2025 [...] using parallel processing on Google Cloud, routing requests through a proxy. In total, we were to successfully scrape 883,858 articles from the service, representing 99.8% of all published Grokipedia articles."

The Dublin team scraped a partial sample:

"We analyzed the 20,000 most-edited English-language Wikipedia articles as of October 2025, identified via cumulative edit counts. Prior research shows that heavily edited entries correlate strongly with controversy, topical salience, and social polarization [...] we excluded all list-style pages as well as titles that were date- or year-like rather than topical. [...] For each remaining title, we retrieved the corresponding entries from Wikipedia and Grokipedia [...] HTML pages were downloaded between 5–11 November 2025 [...] with polite delays and a standard user-agent header [...] we retained only article pairs in which both platforms produced at least 500 words of clean prose. Of the original 20,000 target titles, 17,790 matched pairs met these criteria and formed the final analytical sample."

The Davis team contented itself with the smallest dataset:

To establish a baseline of high-interest and contentious topics, we compiled a list of the 100 most-revised articles on English Wikipedia. Corresponding articles were harvested from both Wikipedia and Grokipedia to create a comparative dataset. [... a] filtration process yielded a final parallel corpus of 72 matched article pairs [...].

Unlike the other two teams, the Cornell researchers also recorded whether each Grokipedia article was marked as "adapted from Wikipedia" under its CC license:

496,058 of the Grokipedia entries that we were able to scrape displayed a Creative Commons Attribution-ShareAlike license (56% of the total) while 385,139 do not.

(They note that "CC-licensed articles on Grokipedia contain a public log of edits that Grok made to the source Wikipedia article, and non-CC-licensed articles do not. We were unable to scrape this information on our first attempt".)

The Cornell team is the only one to have released its data, in the form of a 1.72 GB dataset on Hugging Face. (The Davis researchers state that theirs is available upon request.) All three teams drew from the initial "0.1" version of Grokipedia, which around November 20 was replaced by version "0.2", whose content appears to differ substantially (it now also accepts proposed edits from users). The Cornell dataset might therefore already be seen as an important historical artefact (although it only provides the former Grokipedia articles in a somewhat mangled "chunked" form, see below; other scrapes have been made available by others, and the Archive Team has begun preserving much or all of the site on the Wayback Machine).

Lewoniewski observes that as of around November 1:

Almost all articles in Grokipedia have corresponding articles in English Wikipedia. 24,288 article titles from Grokipedia were matched to corresponding Wikipedia articles through the redirect analysis. However, 3,536 Grokipedia articles [... have] no direct match to any title in English Wikipedia.
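Redirect analysis of this sort can be done against the MediaWiki API, which resolves redirects server-side. A minimal sketch (our own, not Lewoniewski's pipeline) might look like this:

import requests

def resolve_on_enwiki(titles):
    """Map each input title to the English Wikipedia article it lands on, or None if missing."""
    q = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": "|".join(titles),
                "redirects": 1, "format": "json"},
        timeout=30,
    ).json()["query"]
    redirect_map = {r["from"]: r["to"] for r in q.get("redirects", [])}
    missing = {p["title"] for p in q["pages"].values() if "missing" in p}
    return {t: None if redirect_map.get(t, t) in missing else redirect_map.get(t, t)
            for t in titles}

# The API accepts up to 50 titles per request; batch accordingly for a full corpus.
print(resolve_on_enwiki(["NYC", "Thispagedoesnotexist123"]))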

Article length and citation density

The Dublin team found that:

"Overall, Grokipedia entries are systematically longer than their Wikipedia counterparts. While Wikipedia contains a larger number of very short articles, Grokipedia articles exhibit a pronounced peak around 7,000 words, indicating that most Grokipedia entries are substantially more verbose. [...] Grokipedia articles average 7,662 words versus 6,280 on Wikipedia, and they exhibit far fewer explicit references, links, and headings per thousand words."

The Davis team similarly observed that:

"While Grokipedia produces articles that are, on average, longer than their Wikipedia counterparts, they exhibit lower citation per article and, therefore, also notably lower citation density."

The Cornell paper found that:

Grokipedia articles are significantly longer than their corresponding Wikipedia counterparts. Approximately 96% of Grokipedia [articles] contain as many or more text chunks than their Wikipedia counterparts. Similarly, if we parse out article structure and measure article length in terms of its outline structure, we can see that the median non-CC-licensed Grokipedia article is approximately 4.6 times longer than its Wikipedia counterpart, and some Grokipedia articles are dozens of times longer [...].

Source analysis: Reliability, political leanings, and "institutional nature"

The Dublin team evaluated the political leanings of the cited sources using the "News Media Bias and Factuality" dataset[supp 2]. As summarized in a December 8 Twitter thread by one of the authors:

"When we looked more closely into the political shift, analysing the references used in both Wikipedia and Grokipedia, we found: Grokipedia is shifted to the right, compared to Wikipedia. On average, the shift is not huge, and Grokipedia is still left-leaning, like Wikipedia. BUT Religion and History are dramatically pushed to the Right."

(The paper mentions that citations were rated "only when a numeric bias was available for a domain or its brand variant (e.g., bbc.com/bbc.co.uk)" in the "News Media Bias and Factuality" dataset. Unfortunately, it doesn't disclose how many citations this excluded. As found by the Davis authors (see below), both encyclopedias include a large share of non-news citations.)

The Cornell researchers first evaluated the reliability of both encyclopedias' citations according to the ratings in Wikipedia's own "perennial sources list":

[...] “generally reliable” sources make up a far larger proportion of Wikipedia citations (12.7%) than “generally unreliable” (2.9%) or “blacklisted” (0.04%) sources [...]. “Generally reliable” sources are cited in roughly 2 of 5 (41.1%) of Wikipedia articles, as opposed to roughly 1 in 5 (21.8%) articles citing “generally unreliable” sources and 1 in 167 (0.6%) articles citing “blacklisted” sources. Grokipedia’s citation practices appear to be less in line with Wikipedia editorial norms. “Generally reliable” sources make up 7.7% of citations on Grokipedia (a relative decrease of 39%), “generally unreliable” sources are 5.4% of citations (a relative increase of 86%), and “blacklisted” sources make up 0.1% of citations (a relative increase of 275%). At the article level, the increase is even more drastically visible: 5.5% of Grokipedia articles contain at least one citation to “blacklisted” sources—a ninefold increase in prevalence compared to Wikipedia.
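The mechanics of such a comparison are simple to sketch: extract the domain of each cited URL, look it up in a ratings table, and tally the shares. The snippet below is our illustration; the tiny ratings dictionary and the example URLs are placeholders, not the perennial sources list itself:

from collections import Counter
from urllib.parse import urlparse

RATINGS = {                        # placeholder excerpt, not the actual list
    "nytimes.com": "generally reliable",
    "dailymail.co.uk": "generally unreliable",
    "infowars.com": "blacklisted",
}

def domain(url):
    """Return the registrable host of a citation URL, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

citations = ["https://www.nytimes.com/2024/01/01/x.html",   # invented examples
             "https://infowars.com/some-post",
             "https://example.org/unrated"]

counts = Counter(RATINGS.get(domain(u), "unrated") for u in citations)
total = sum(counts.values())
for bucket, n in counts.most_common():
    print(f"{bucket}: {n / total:.1%}")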

Of course, it is unsurprising that Wikipedia adheres to its own sourcing standards better than other encyclopedias do. Partly for this reason, the Cornell authors repeated the analysis with a dataset of quality ratings of news website domains from an academic paper (Lin et al.), with results that are "roughly in line with those that relied on English Wikipedia's Perennial Source list":

English Wikipedia is more likely to cite domains on the 0.6 and above higher end of Lin et al.’s quality score (27.4% of all citations) than Grokipedia (21.3% of all citations). Low quality domains—which we define as having quality scores between 0.0 and 0.2—make up three times the share of total citations on Grokipedia than Wikipedia. Even though this share is relatively small (0.03% of the total), it means Grokipedia includes 12,522 citations to domains deemed of very low credibility. Websites in this category include the white nationalist forum Stormfront, the antivaccine conspiracy website Natural News, and the fringe website InfoWars. None of these domains are cited at all on Wikipedia; they have 180 citations on Musk’s service.

The Cornell authors cautioned that:

A limitation with both source quality scores is that they don’t rate the majority of citations used on either service. What we can say at this stage is that Grokipedia is both more capacious in its citations—almost doubling Wikipedia’s total—and more ecumenical in its approach, including many more sources across all quality buckets.

In contrast, the Davis paper's two research questions about sourcing differences between Grokipedia and Wikipedia eschewed a direct analysis of the quality or political orientation of the citations:

RQ2: Is there a qualitative difference in the institutional nature of referenced sources?

RQ3: Is there a difference in how diverse article topics are epistemologically sourced?

Rather than relying on external datasets like the Dublin and Cornell authors (and thus having to limit their conclusions to only those citations covered by these datasets), the Davis authors were able to classify every citation in their dataset. This was achieved by "develop[ing] and appl[ying] a systematic content coding scheme based on [their own] 'Citation Content Coding Manual' [...] to assign each unique citation to exactly one of eight mutually exclusive categories", such as "Academic & Scholarly", "Government & Official", or "User-Generated (UGC)". This scheme was then automatically applied (by Gemini Flash 2.5, aided by an extensive coding manual and vetted against a manual classification of a small sample) to classify the roughly 50,000 citations in the entire dataset.

Regarding RQ2, the results revealed

a fundamental divergence in the substrate of authority used by each platform. Wikipedia is anchored by a dual foundation of "News & Journalism" and "Academic & Scholarly" sources. Together, these two categories account for approximately 64.7% of the global corpus [...] In Grokipedia, the reliance on "News & Journalism" remains robust, merely being reduced by 20 percent. However, the "Academic" pillar drops significantly, experiencing a 3-fold reduction [...]. Grokipedia substitutes scholarly sources with an increase in citations to Corporate & Commercial, Reference & Tertiary, Government & Official, Opinion & Advocacy, –all increasing by almost 50 percent of their Wikipedia share– and especially NGOs/Think Tanks (whose share increases by 3x), and User-Generated Content (UGC) sources (whose share increases by 4x) [...]

To investigate RQ3, the Davis authors manually classified the 72 articles in their corpus by topic area, finding that:

[Wikipedia] alters its sourcing hierarchy based on the nature of the topic. For "Politics & Conflict" and "General Knowledge & Society," Wikipedia relies heavily on Academic & Scholarly sources. Conversely, for "Sports & Athletics" and "Media & Entertainment," the academic band shrinks, and the platform pivots appropriately to News & Journalism, which dominates the citations. In contrast, Grokipedia [...] exhibits a fundamental restructuring of authority in high-stakes domains. While it mirrors Wikipedia’s news-heavy approach for entertainment topics, the "Academic & Scholarly" band is critically depleted, especially in "Politics & Conflict," where Grokipedia substitutes this with a massive influx of Government & Official sources and NGO/Think Tank reports.

Similarity of content between Grokipedia and Wikipedia

The Cornell and Dublin teams also ventured beyond citations to directly compare the text of both encyclopedias. Both first split each article into smaller text segments and then applied quantitative text similarity measures to these.

Specifically, the Cornell researchers:

for both the Grokipedia and Wikipedia corpora [...] extracted the plaintext content of each article in 250-token chunks, with a 100-token overlap between chunks.

This method seems a bit crude, as the resulting chunks (arbitrary example) cut across sentences and paragraphs, i.e. contain lots of mangled sentences. In contrast, the Dublin team used an established NLP tool to split the text while keeping these intact:

Each cleaned article was tokenized into sentences and words using nltk’s Punkt tokenizer.

The Cornell team then:

embedded each of these [chunks] using Google’s EmbeddingGemma [14], a state-of-the-art 300M parameter embedding model. Once we had embeddings, we calculated the within-article pairwise cosine similarity for each chunk [i.e. between pairs of chunks from the Grokipedia and Wikipedia article about the same topic]. This allows us to meaningfully discuss metrics like content similarity (filtered by various factors), average article similarity (aggregated across chunks), and more.
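In outline, that pipeline looks something like the sketch below. This is our approximation rather than the Cornell code: the chunking here is done on whitespace tokens instead of a model tokenizer, a small public sentence-transformers model stands in for EmbeddingGemma, and per-article similarity is summarized as the average best-match score, which is only one way of aggregating the pairwise values:

from sentence_transformers import SentenceTransformer, util

def chunk(text, size=250, overlap=100):
    """Split text into overlapping windows of `size` whitespace tokens."""
    tokens = text.split()
    step = size - overlap
    return [" ".join(tokens[i:i + size])
            for i in range(0, max(len(tokens) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")        # stand-in embedding model

wiki_text = open("wikipedia_article.txt").read()       # placeholder inputs
grok_text = open("grokipedia_article.txt").read()

wiki_emb = model.encode(chunk(wiki_text), convert_to_tensor=True)
grok_emb = model.encode(chunk(grok_text), convert_to_tensor=True)

# Cosine similarity between every Grokipedia chunk and every Wikipedia chunk;
# keep each Grokipedia chunk's best match, then average across the article.
sims = util.cos_sim(grok_emb, wiki_emb)
article_similarity = sims.max(dim=1).values.mean().item()
print(f"mean best-match chunk similarity: {article_similarity:.2f}")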

In contrast, the Dublin team employed a whole "suite of eight similarity measures grouped into four conceptual domains": lexical similarity (e.g. "cosine similarity of TF–IDF vectors"), n-gram overlap, semantic similarity (including based on LLM embeddings, similar to the Cornell team, albeit using older and smaller models), and stylistic similarity (aggregating differences in various simpler metrics such as sentence lengths and readability scores).
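The lexical end of that suite is easy to illustrate: cosine similarity of TF–IDF vectors for a single article pair takes only a few lines (a sketch with placeholder input files, not the paper's exact preprocessing):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

wiki_text = open("wikipedia_article.txt").read()       # placeholder inputs
grok_text = open("grokipedia_article.txt").read()

# Vectorize both versions in one vocabulary, then compare the two rows.
tfidf = TfidfVectorizer(stop_words="english").fit_transform([wiki_text, grok_text])
print(f"TF-IDF cosine similarity: {cosine_similarity(tfidf[0], tfidf[1])[0, 0]:.2f}")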

As one would expect, the Cornell team found that Grokipedia's "adapted from Wikipedia" articles were more similar to their Wikipedia counterparts than those without that notice:

non-CC-licensed entries on Grokipedia [... have] a mean chunk similarity to their Wikipedia equivalents of 0.77. The similarity for entries with the license is more heavily distributed towards the far end of the spectrum, with a much higher mean chunk similarity of 0.90.

Interestingly, their chunk similarity analysis also seems to function as a plagiarism detector of sorts:

Grokipedia articles with very high average chunk similarity to their corresponding Wikipedia article include verbatim transcriptions. These articles appear in both the CC-licensed and non-CC-licensed subsets of the data; that is, identical articles (or chunks) do not necessarily carry an attribution to Wikipedia or a CC license. For instance, Table 1 shows two excerpts from Grokipedia entries that have exact matches on equivalent Wikipedia articles. The entry for the Mejia Thermal Power Station is not CC-licensed, whereas the one for Sono Sachiko, a 19th century member of the Japanese imperial family, attributes Wikipedia.

Note that, in the non-CC-licensed Mejia Thermal Power Station page, the first sentences on both Wikipedia (relevant revision) and Grokipedia (Wayback snapshot) include the same typo: “Commissioned on [sic] 1996”.

The Cornell authors leave it open how frequent such unattributed matching sentences are overall.

The Dublin researchers ultimately combined their eight different article similarity metrics into a single one (using principal components analysis), finding that its

distribution [...] is distinctly bimodal, suggesting the presence of two substantive groups of article pairs: one in which Grokipedia and Wikipedia differ substantially, and another in which the two versions are highly similar.

Presumably these two groups correspond to the CC-licensed and non-CC-licensed Grokipedia articles, but the paper did not consider this property (in contrast to the Cornell researchers).
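Collapsing several per-article similarity measures into one overall score with principal components analysis can be sketched as follows; the toy numbers below stand in for the eight measures the Dublin team actually computed:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# One row per article pair, one column per similarity measure (toy data;
# the real matrix would have 17,790 rows and eight columns).
metrics = np.array([
    [0.91, 0.88, 0.93, 0.85],
    [0.35, 0.30, 0.42, 0.38],
    [0.87, 0.90, 0.89, 0.83],
])

scaled = StandardScaler().fit_transform(metrics)          # put measures on a common scale
combined = PCA(n_components=1).fit_transform(scaled).ravel()
print(combined)   # a single "overall similarity" score per article pair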

Like the Davis researchers, the Dublin team also classified articles by topic, although (given their much larger sample) they used an automated method relying on GPT-5. This enabled them to conclude that

the largest cross-platform differences in similarity appear in articles related to politics, geography, history, business, and religion. In parallel, [...] the strongest rightward shifts [from Wikipedia to Grokipedia] in source bias occur in articles on religion, history, languages, and business, indicating that ideological divergence is especially pronounced in these domains.

2026 Wikimedia Research Fund announced

The Wikimedia Foundation's Research department announced the launch of the 2026 Wikimedia Research Fund. It funds

Research Proposals (Type 1), Extended Research Proposals (Type 2), and Event and Community-Building Proposals (Type 3). [...] The maximum request is 50,000 USD (Type 1 and 3) and 150,000 USD (Type 2).

Letters of intent for research proposals (Type 1 and 2) are due by January 16, 2026, and full proposals for all three types by April 3, 2026.

See also our related earlier coverage:

Briefly

Other recent publications

Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.

"Investigating the evolution of Wikipedia articles through underlying triggering networks"

This paper in the Journal of Information Science (excerpts) considers networks that have "factoids" as nodes, and associations between them as edges, and finds e.g. that "the inclusion of one factoid [on Wikipedia] leads to the inclusion of many other factoids". From the abstract:[6]

In collaborative environments, the contribution made by each user is perceived to set the stage for the manifestation of more contribution by other users, termed as the phenomenon of triggering. [...] In this work, we analyse the revision history of Wikipedia articles to examine the traces of triggering present in them. We also build and analyse triggering networks for these articles that capture the association among different pieces of the articles. The analysis of the structural properties of these networks provides useful insights on how the existing knowledge leads to the introduction of more knowledge in these articles [...]

From the "Discussion" section:

Our analysis on triggering networks of Wikipedia articles not only validates and extends the old classical theories on the phenomenon of existing knowledge triggering the introduction of more knowledge but also provides useful insights pertaining to the evolution of Wikipedia articles. Examining the network structure reveals many properties of the triggering phenomenon. For example, a well-defined community structure clearly endorses that the inclusion of one factoid leads to the inclusion of many other factoids. Moreover, many of the factoids belonging to a subtopic are introduced together. Furthermore, the core-periphery structure and the degree distribution suggest that all the factoids do not have a similar triggering power. Some factoids lead to the introduction of many more factoids and hence are paramount in the article development process than the factoids. The introduction of these factoids in the articles may be considered as milestones in the article evolution process. Overall, the study explains one of the reasons behind collaborative knowledge building being more efficient than individual knowledge building.

See also our coverage of a related earlier publication by the same authors at OpenSym 2018: "'Triggering' article contributions by adding factoids"

"Throw Your Hat in the Ring (of Wikipedia): Exploring Urban-Rural Disparities in Local Politicians' Information Supply"

From the abstract:[7]

This study [...] employs a dataset of politicians who ran for local elections in Japan over approximately 20 years and discovers that the creation and revisions of local politicians' pages are associated with socio-economic factors such as the employment ratio by industry and age distribution. We find that the majority of the suppliers of politicians' information are unregistered and primarily interested in politicians' pages compared to registered users. Additional analysis reveals that users who supply information about politicians before and after an election are more active on Wikipedia than the average user. The findings presented imply that the information supply on Wikipedia, which relies on voluntary contributions, may reflect regional socio-economic disparities.

"Wikipedia Citations: Reproducible Citation Extraction from Multilingual Wikipedia"

From the abstract:[8]

A total of 29.3 million citations were extracted from the English Wikipedia in May 2020. Following this one-off research project, we designed a reproducible pipeline that can process any Wikipedia dump in the cloud-based settings. To demonstrate its usability, we extracted 40.6 million citations in February 2023 and 44.7 million citations in February 2024. Furthermore, we equipped the pipeline with an adapted Wikipedia citation template translation module to process multilingual Wikipedia articles in 15 languages so that they are parsed and mapped into a generic structured citation template. This paper presents our open-source software pipeline for retrieving, classifying, and disambiguating citations on demand from a given Wikipedia dump.
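The core extraction step, pulling citation templates out of wikitext, can be illustrated with mwparserfromhell; the sketch below is a simplified outline of that kind of parsing, not the authors' pipeline (which also classifies and disambiguates the citations):

import mwparserfromhell

# A toy fragment of wikitext with two citation templates inside <ref> tags.
wikitext = """
Example sentence.<ref>{{cite journal |title=Some paper |doi=10.1000/xyz }}</ref>
Another.<ref>{{cite web |url=https://example.org |title=Some page}}</ref>
"""

code = mwparserfromhell.parse(wikitext)
for tpl in code.filter_templates():
    if tpl.name.strip().lower().startswith("cite"):
        fields = {str(p.name).strip(): str(p.value).strip() for p in tpl.params}
        print(tpl.name.strip(), fields)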

"Wiki Loves iNaturalist: How Wikimedians Integrate iNaturalist Content on Wikipedia, Wikidata, and Wikimedia Commons"

"The steady growth demonstrated of iNaturalist content on Wikimedia projects: A) the number of files in the category 'INaturalist' (sic) and subcategories on Wikimedia Commons, including image and audio files; B) the number of files in the category that illustrate a page in at least one Wikimedia project (e.g., Spanish Wikipedia or Wikidata); and C) the number of times the images in the categories were viewed across Wikimedia projects. Peaks correspond to the months in which the depicted images were displayed in the "Did you know..." session on the main page of English Wikipedia. Metrics via the Commons Impact Metrics Dashboard."

From this conference abstract:[9]

With over 50 million observations per year, iNaturalist is one of the world's most successful citizen science projects, uniting millions of people worldwide in observing, sharing, and identifying nature [...]. iNaturalist and Wikipedia have much in common: they are both collaborative, large-scale, open infrastructures made by volunteer communities with long-reaching impact on human knowledge. [...] To enable the seamless upload of iNaturalist images to Wikimedia Commons (which in turn enables their reuse on Wikipedia and other Wikimedia projects), this volunteer community has developed a diverse set of open source tools [...]

References

  1. ^ Dişli, Meltem; Candela, Gustavo; Gutiérrez, Silvia; Fontenelle, Giovanna (12 December 2025). "Open Data Practices of Art Museums in Wikidata: A Compliance Assessment". Journal of Open Humanities Data. 11: 71. doi:10.5334/johd.438.
  2. ^ Yasseri, Taha; Mohammadi, Saeedeh (2025-11-30), How Similar Are Grokipedia and Wikipedia? A Multi-Dimensional Textual and Structural Comparison, arXiv, doi:10.48550/arXiv.2510.26899
  3. ^ Triedman, Harold; Mantzarlis, Alexios (2025-11-12), What did Elon change? A comprehensive analysis of Grokipedia, arXiv, doi:10.48550/arXiv.2511.09685 / Code
  4. ^ Mehdizadeh, Aliakbar; Hilbert, Martin (2025-12-03), Epistemic Substitution: How Grokipedia's AI-Generated Encyclopedia Restructures Authority, arXiv, doi:10.48550/arXiv.2512.03337
  5. ^ Lewoniewski, Włodzimierz (2025-11-11). "Grokipedia vs Wikipedia: Quantitative Analysis (video)" (Blog). Lewoniewski. / Dataset
  6. ^ Chhabra, Anamika; Setia, Simran (2025-09-25). "Investigating the evolution of Wikipedia articles through underlying triggering networks". Journal of Information Science: 01655515251362587. doi:10.1177/01655515251362587. ISSN 0165-5515. (closed access)
  7. ^ Matsui, Akira; Miyazaki, Kunihiro; Murayama, Taichi (2024-05-28). "Throw Your Hat in the Ring (Of Wikipedia): Exploring Urban-Rural Disparities in Local Politicians' Information Supply". Proceedings of the International AAAI Conference on Web and Social Media. 18: 1027–1040. doi:10.1609/icwsm.v18i1.31370. ISSN 2334-0770.
  8. ^ Kokash, Natallia; Colavizza, Giovanni (2025-12-09). "Wikipedia Citations: Reproducible Citation Extraction from Multilingual Wikipedia". Quantitative Science Studies: 1–14. doi:10.1162/QSS.a.401. ISSN 2641-3337.
  9. ^ Lubiana, Tiago; Littauer, Richard; Leachman, Siobhan; Ainali, Jan; Karingamadathil, Manoj; Waagmeester, Andra; Meudt, Heidi M.; Taraborelli, Dario (2025-12-05). "Wiki Loves iNaturalist: How Wikimedians Integrate iNaturalist Content on Wikipedia, Wikidata, and Wikimedia Commons". Biodiversity Information Science and Standards. Vol. 9 (collection 6798855: Advancing biodiversity goals from local to global scales using iNaturalist): e181155. Pensoft Publishers. doi:10.3897/biss.9.181155.
Supplementary references and notes:
  1. ^ Pereda, Javier; Willcox, Pip; Candela, Gustavo; Sanchez, Alexander; Murrieta-Flores, Patricia A. (12 March 2025). "Online cultural heritage as a social machine: a socio-technical approach to digital infrastructure and ecosystems". International Journal of Digital Humanities. 7 (1): 39–69. doi:10.1007/s42803-025-00097-6. PMC 12202677. PMID 40584139.
  2. ^ Sánchez-Cortés, Dairazalia; Burdisso, Sergio; Villatoro-Tello, Esaú; Motlicek, Petr (2024). Goeuriot, Lorraine; Mulhem, Philippe; Quénot, Georges; Schwab, Didier; Di Nunzio, Giorgio Maria; Soulier, Laure; Galuščáková, Petra; García Seco de Herrera, Alba; Faggioli, Guglielmo (eds.). "Mapping the Media Landscape: Predicting Factual Reporting and Political Bias Through Web Interactions". Experimental IR Meets Multilinguality, Multimodality, and Interaction. Cham: Springer Nature Switzerland: 127–138. doi:10.1007/978-3-031-71736-9_7. ISBN 978-3-031-71736-9.




Reader comments

File:You Am I - Stonehenge.jpg
Bruce Baker
CC BY 2.0
125
450
2026-01-15

Tonight I'm gonna rock you tonight

Contribute   —  
Share this
By Igordebraga, Vestrian24Bio, Ollieisanerd, and Shuipzv3
This traffic report is adapted from the Top 25 Report, prepared with commentary by Igordebraga, Vestrian24Bio, Ollieisanerd, and Shuipzv3.

While the 2025 Annual Report sees some delays, here are the last few weeks of the year.

If the sky that we look upon should tumble and fall (December 14 to 20)

Rank Article Class Views Image Notes/about
1 Rob Reiner 11,853,790 Like his father Carl Reiner (#9), Rob Reiner started his career acting (and still returned to it on occasion, including The Wolf of Wall Street), only to make a name for himself directing, starting off strong with This Is Spinal Tap and going on an incredible run (The Sure Thing, Stand By Me, The Princess Bride, When Harry Met Sally..., Misery and A Few Good Men) that only ended with the reviled North ten years later, which began a slump relieved only by The American President and The Bucket List. And a few months after Reiner released the sequel Spinal Tap II: The End Continues, the world was shocked to learn he had been found dead alongside his wife Michele, both having been repeatedly stabbed. Worse, the possible perpetrator was their son Nick, whose struggles with mental health and drug addiction became the subject of Reiner's movie Being Charlie, which Nick co-wrote.
2 Dhurandhar 3,275,687 Aditya Dhar made India's biggest hit of the year, which despite receiving mixed-to-positive reviews from critics grossed nearly 1,000 crore (US$120 million) against a budget of 300 crore (US$35 million), emerging as the fourth highest-grossing Bollywood film and the ninth highest-grossing Indian film of all time as of this report. The success certainly raises expectations for the direct sequel, to be released in March 2026.
3 2025 Bondi Beach shooting 1,783,606 Nearly 30 years after a massacre led Australia to enact very strict gun laws, another mass shooting happened, one that also turned out to be the country's deadliest terrorist incident. Two men with Islamic State affiliation opened fire at Sydney's most famous beach, killing 15 people, including a child, and injuring many others, before police killed one of the shooters and wounded the other, who was taken to hospital. Homemade bombs were also found in their car. Australian prime minister Anthony Albanese said it was a deliberate attack on Jewish people during the first night of Hanukkah.
4 Tracy Reiner 1,733,872 Before marrying Michele and having three children with her (the middle one possibly killed them, and the youngest found their bodies), #1 already served as a parent to the daughter of his first wife Penny Marshall (#7), going as far as adopting her. Tracy worked as an actress for decades, including in movies by both Reiner and Marshall (the latter responsible for perhaps her most notable role, in A League of Their Own), but since 2015 has only worked with medical software.
5 Avatar: Fire and Ash 1,661,197 Three years after making a splash with the sequel The Way of Water, the giant blue cat people of Avatar return trying to set the box office aflame in part 3. Along with the humans who just want to ruin the ecosystem of Pandora, Jake Sully and his family are confronted by a hostile fire-themed Na'vi tribe, the Ash People, whose leader Varang ends up hooking up with Jake's archenemy Quaritch. Even positive assessments found Fire and Ash overlong and retreading familiar ground, even if it delivers amazing visuals and the same thrilling action James Cameron built his career upon. But it seems certain to repeat the financial success of its predecessors: even before the weekend started it had earned $137 million worldwide - and reaching the billion mark is necessary to offset a gigantic budget estimated at $400 million...
6 Greg Biffle 1,375,111 Another tragic death during the week was that of this NASCAR racer. His Cessna had taken off from a small North Carolina airport and a few minutes later turned back around, only to crash while attempting to land. All seven occupants died - Biffle, his wife and his two children, plus the pilot and his son.
7 Penny Marshall 1,349,343 #1's first wife was, like him, an actor (best known as Laverne DeFazio on both Happy Days and Laverne & Shirley) who became a successful director of movies such as Big, Awakenings and A League of Their Own. Marshall, who had daughter #4 from her first marriage and was the sister of another actor-director, Garry Marshall, died in 2018.
8 Wake Up Dead Man 1,308,323 For the third time, Rian Johnson created a murder mystery with an impressive cast, centered around Daniel Craig with an exaggerated Southern accent as detective Benoit Blanc. It also makes three titles taken from rock songs (Knives Out is Radiohead, Glass Onion the Beatles, and U2 has a "Wake Up Dead Man", even if Johnson said he actually took the title from a line of a folk song). Released on Netflix to great critical praise, Wake Up Dead Man has Blanc going to an upstate New York church where an outspoken priest was murdered, helped in his investigation by a young priest with some traumas of his own.
9 Carl Reiner 1,250,064 #1's father was also an actor turned director, well known for comedic work: he formed a duo with Mel Brooks, created The Dick Van Dyke Show, launched the film career of Steve Martin with The Jerk, and more recently played an elderly member of Danny Ocean's gang. Carl, who died in 2020, was married to actress Estelle Reiner and, along with Rob, had a son, Lucas, a painter, and a daughter, Annie, a writer.
10 Anthony Joshua 1,207,996 Joshua, a two-time heavyweight boxing champion, fought Jake Paul in a professional match streamed on Netflix on December 19. Despite the heavyweight class traditionally having no upper limit, Joshua was restricted to 245 pounds (111 kg), the first time he had to make weight in his professional career. Still, he was heavily favored to win due to his bigger size and extensive experience and indeed won by knockout in the sixth round.

I have only one burning desire, let me stand next to your fire (December 21 to 27)

Rank Article Class Views Image Notes/about
1 Dhurandhar 2,857,861 This Bollywood year-end release has made big box-office waves, much like last year's year-end release Pushpa 2, which came from Tollywood and went on to be the 3rd highest-grossing Indian film of all time. As of this week the film has grossed 1,051.6 crore (US$120 million), becoming only the 9th Indian film to cross the 1,000 crore (US$120 million) mark, and currently ranks as the 7th highest-grossing Indian film of all time. This commercial success has set high hopes for its direct sequel, due to be released in March 2026.
2 Avatar: Fire and Ash 2,579,269 Sixteen years ago, James Cameron introduced the world to Pandora and some Na'vi tribes through the 2009 film, in which humans from Earth wanted to disrupt the ecosystem by digging for minerals, and Jake Sully sided with the Na'vi and permanently ended up in his blue avatar. Three years ago he expanded the world of Pandora further by introducing a new water-themed Na'vi tribe while humans continued their invasion in Avatar: The Way of Water. Now, he's back for a third time with the introduction of a fire-themed Na'vi tribe. The film has so far grossed $566 million worldwide against a budget of $400 million, and if it manages to pass the billion-dollar mark (I'm sure it will), we can expect to see at least two more films in the franchise, in 2029 and 2031 respectively.
3 James Ransone 2,035,098 An actor best known for TV drama (The Wire, Generation Kill) and horror films (It Chapter Two, Sinister, The Black Phone), along with being one of the thieves in Inside Man, who hanged himself at the age of 46.
4 Jeffrey Epstein 1,320,562 After all the requests to "release the Epstein files", the government obliged and made public most of what emerged in the prosecution of the late financier involved in sex trafficking and other crimes. Many of the documents were extensively redacted (albeit a few so lazily that users managed to "unblack" the text with Photoshop), but they still offered a look into the members and techniques of Epstein's trafficking ring, and even mentions of the U.S. president who was a known friend of Epstein.
5 Stranger Things season 5 1,296,712 The conclusion of Netflix's top show, Stranger Things, is imminent, with an unusual release schedule to boot. Volume 1 of the final season (the first four episodes) was released on November 26, ahead of Thanksgiving; volume 2 (episodes five, six and seven) was released last Thursday for Christmas; and the finale will be released next Thursday for New Year. The whole fandom is looking forward with high anticipation to how it will all end...
6 Tylor Chase 1,217,631 This former child actor, best known for Ned's Declassified School Survival Guide, was discovered to be homeless, and during the week was sent to a detox facility, though he was released after 36 hours.
7 Chris Rea 1,209,663 The acclaimed English singer-songwriter and guitarist died aged 74 on 22 December. Rea's hits include "Fool (If You Think It's Over)", "Josephine" and the seasonal classic "Driving Home for Christmas". His death so close to the 25th puts him in a small group of musicians well known for Christmas songs who died on or near the holiday, including George Michael and Dean Martin.
8 Deaths in 2025 1,095,514 You wonder why your life is screaming
Wonder why we're Humans Being!
9 Marty Supreme 1,056,911 This sports comedy-drama was produced and directed by Josh Safdie, who co-wrote the script with Ronald Bronstein. It stars Timothée Chalamet as the lead character, whose story is loosely inspired by the life and career of American table tennis player Marty Reisman. The film premiered at the 2025 New York Film Festival and was released in the US by A24 last Thursday. It has received critical acclaim, earning 3 Golden Globe nods, 8 Critics' Choice nods, 6 ASTRA nods and much more.
10 Vince Zampella 1,030,406 An American video game designer who died at age 55 after his Ferrari veered off the road and hit a concrete barrier. One can see how much of a résumé he had just from the three companies he founded: Infinity Ward of Call of Duty fame, Respawn Entertainment of games such as Titanfall, Apex Legends, and Star Wars Jedi: Fallen Order, and Ripple Effect Studios, one of the developers of this year's Battlefield 6.

And gold is the reason for the wars we wage (December 28 to January 3)

Rank Article Class Views Image Notes/about
1 Brigitte Bardot 2,665,704 One of the biggest beauty symbols of the 20th century (even inspiring her own term, sex kitten), this French actress appeared in 47 films between 1952 and 1973, including And God Created Woman, The Truth and Viva Maria!, while also recording a few albums. After moving away from the screen she advocated for animal rights through the Brigitte Bardot Foundation, and got into hot water for controversial statements regarding Muslims, homosexuals, and even her own son. In any case, Bardot died at 91 right as the last week of 2025 started, leading to many tributes.
2 Nicolás Maduro 2,567,518 After Hugo Chávez died in 2013, this guy was promoted to president of Venezuela (#9). His government has been just as controversial and authoritarian as that of Chávez, characterized by electoral fraud (the 2018 election even had the Organization of American States trying to overturn the results, leading the leader of the opposition to attempt to fill in, and a crisis ensued), human rights abuses, corruption, censorship and severe economic hardship. During the last year of Donald Trump's first presidency, the US government offered $15 million for any information that would lead to Maduro's arrest. The first year of Trump's second term raised that figure to $50 million and brought multiple sanctions on Venezuela, before 2026 started with a military operation in Caracas (#7) that captured Maduro and his wife, who were sent to New York to be prosecuted.
3 Dhurandhar 2,084,600 Ranveer Singh and Sara Arjun are two of the stars in this Bollywood action thriller that in just one month became the highest-grossing film of the year in India.
4 Stranger Things 1,951,459 Netflix decided to turn a few New Year's parties into watch parties for the extra-long series finale of one of its most successful shows, and also provided a limited theatrical release (selling over a million tickets and earning over $25 million!) to witness the main characters raiding the Upside Down to kill the evil Vecna and destroy the parallel dimension. While reception was mostly positive, there's always something for certain viewers to complain about (this time around, some questioned why so many people survived, as if those characters hadn't suffered enough in 5 seasons and also needed to die!). And even though the show ended, a return to Hawkins is still expected with the animated spin-off Stranger Things: Tales from '85, announced to be reminiscent of old Saturday-morning cartoons.
5 Stranger Things season 5 1,803,599
6 Avatar: Fire and Ash 1,730,351 Jake Sully, who abandoned his humanity to join a race of giant blue cat-like aliens, still tries to prevent greedy humans from destroying the ecological paradise of Pandora. The subtitle Fire and Ash refers to a new and popular feature, a tribe of fire-themed Na'vi led by Oona Chaplin as the cruel Varang, but viewers and critics alike found the movie quite similar to its predecessor The Way of Water, from the excessive length to again having a climax centered around destroying whalers. But given it still provided incredible spectacle, audiences went in droves and made all three Avatar films billion-dollar movies. Whether Fire and Ash will make enough to surpass Zootopia 2, currently at $1.6 billion, remains to be seen; but it certainly won't match either the $2 billion of its predecessors or the take of the Chinese cartoon that was the highest-grossing movie of 2025.
7 2026 United States strikes in Venezuela 1,106,540 Just as with Iraq in 2003, the United States controversially went after a country with large oil reserves (#9) and an authoritarian government (#2). Covert operations and the seizure of tankers happened in December, and on the night of January 2 Trump ordered aerial strikes targeting antennas and active military bases around the Venezuelan capital Caracas; in the hours before dawn it was announced that Maduro and his wife had been captured by US forces and were being flown to a trial in New York City.
8 Marty Supreme 1,065,997 After Zendaya took up tennis in Challengers, the other lead of Dune: Part Two, Timothée Chalamet, plays a table tennis player in Marty Supreme, directed by one of the brothers behind Uncut Gems (the other went for a much more violent sport in The Smashing Machine). Widely acclaimed and already making the rounds in the awards circuit, the movie opened on Christmas in the United States, finishing its weekend in third behind #6 and Zootopia 2, and has already recouped its budget with $71 million worldwide.
9 Venezuela 1,054,549 Vast petroleum reserves should have led to the development of this South American country. Instead, over two decades under authoritarian presidents (Hugo Chávez from 1999 to 2013, #2 ever since), combined with the oil price drops during the 2008 financial crisis, led to a socioeconomic and political crisis over the last 15 years, marked by shortages, hyperinflation and millions leaving the country. United States–Venezuela relations worsened last year, with travel bans, the country being designated a Foreign Terrorist Organization, and airstrikes on boats allegedly owned by drug traffickers, and as 2026 started a land operation followed (#7).
10 List of highest-grossing Indian films 1,051,204 2025 added five entries to this list that regular readers of the Report might recognize: #3, Kantara: Chapter 1, Chhaava, Saiyaara and Coolie.

Exclusions

Most edited articles

For the December 4 – January 4 period, per this database report.

Title Revisions About
2025 Bondi Beach shooting 2801 In Australia's deadliest terror attack and second-deadliest mass shooting, an Indian expatriate and his son, inspired by Islamic State ideology, opened fire on a Sydney beach while also throwing homemade bombs that failed to detonate, killing 15 people. Once police intervened, the father died and the son was critically injured; after spending two days in a coma, he is now imprisoned, with a trial set for April.
Deaths in 2025 2106 It should come as no surprise that this will be the most viewed article of the year in the upcoming Annual Report.
2026 United States strikes in Venezuela 1761 2026 had barely started when it got an international conflict, with American troops taking Venezuelan president Nicolás Maduro to be tried in New York.
Rob Reiner 1412 A late addition to said Annual Report is a death in the year's closing days: an actor-director best known for an incredible seven-movie stretch between 1984 and 1992 (followed by the reviled North in 1994), murdered alongside his wife by their troublemaking son.
2026 PDC World Darts Championship 1267 From December 11 to January 3, Alexandra Palace in London hosted the world's best darts players. Luke Littler successfully defended his title, defeating Gian van Veen in the final.
2025 SEA Games 1221 The biennial multi-sport event involving participants from the 11 countries of Southeast Asia was held from December 9 to 27 in Thailand.
Indonesia at the 2025 SEA Games 1136 Articles on three of the competing nations were also heavily edited: Indonesia, which finished only behind the hosts in the medal count, along with fourth-place Malaysia and sixth-place Philippines.
Malaysia at the 2025 SEA Games 976
Philippines at the 2025 SEA Games 915
Dhurandhar 1104 Another late-year Annual Report inclusion, a Bollywood release that in less than a month became the highest-grossing Indian movie of 2025.
Avatar: Fire and Ash 1007 Avatar: The Way of Water managed to enter the 2022 Report in less than a month in theaters, but part 3 couldn't attract the same level of pageviews. Not that it lacked attention otherwise, given Fire and Ash surpassed $1 billion worldwide.
2025 Brown University shooting 994 Unlike in Australia, mass shootings are sadly very common in the United States. A gunman entered a building of the Brown University School of Engineering and killed two students and wounded nine others as they attended a review session in preparation for final exams. Two days later, physics professor Nuno Loureiro was killed in his apartment. After three more days, police found the body of the suspected perpetrator, Claudio Manuel Neves Valente, a former Brown PhD student who had studied with Loureiro in their native Portugal and who had killed himself a day after killing Loureiro. Given it involved an immigrant, Trump used the incident to suspend the Diversity Immigrant Visa, though it's possible the executive branch lacks the authority to suspend the issuance of "green cards" without the approval of Congress.
Bigg Boss (Tamil TV series) season 9 987 One of the Indian editions of Big Brother continues to roll.
Augustus 727 A Featured Article Review (that started back in April!) is leading to extensive work on the Roman emperor who followed Julius Caesar, and for whom the month of August was named.
2026 Bangladeshi general election 721 One of India's neighbors will choose 300 of its 350 representatives in February, and changes will certainly happen given the party that won the last four elections was banned after a revolution last year.



Reader comments

File:Punch (1841) (14771193731).jpg
Punch
PD
2026-01-15

Oh come on man.

Contribute   —  
Share this
By JPxG

"Dude, seriously? Another hat? What does this one even do? 'Pending changes reviewer'? What the fuck are you ever going to use that for?"



Reader comments

If articles have been updated, you may need to refresh the single-page edition.




The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0