The Signpost

[Image: Hindenburg disaster, 1937. Photo by Murray Becker/AP, public domain]
Community view

Six Wikipedians' thoughts on Grokipedia, and the humanity of it all

By Smallbones, Jess Wade, Steven Pruitt, Mary Mark Ockerbloom, Oltrepier, Betty Wills, and JPxG
This article includes the opinions of six Wikipedians about Elon Musk's new encyclopedia, Grokipedia. These opinions have been lightly edited for length and grammar, and links to articles have been added.

On October 27, 2025, Elon Musk, the world's richest person, introduced his encyclopedia, named Grokipedia, which promptly crashed. The next day, it was re-introduced with about 850,000 articles, many of which were taken directly from Wikipedia. Other articles look very similar to Wikipedia articles, presumably because the AI bot that wrote the articles was trained with data from Wikipedia. The mainstream media reacted as if Grokipedia had crashed and burned – see the In the media section of this issue for a summary of the many articles. Of course, this was only version 0.1 of Grokipedia, so it may be too early to condemn it to the ash heap of history. It’s not one of the worst catastrophes in the world. Not yet, anyway.

User Rhododendrites has published an academically oriented paper in Tech Policy Press about the risks Grokipedia poses to Wikipedia, which we have re-published in the Opinion section.

For this column, however, The Signpost asked six Wikipedians about their respective views of Grokipedia.

A scientist speaks

Wade in 2017. Photo by Dave Guttridge, CC BY-SA 4.0

Jess Wade (GR) is a physicist at Imperial College London and has created over 1,200 articles about women scientists on Wikipedia throughout the years.

One of the many wonders of Wikipedia is that it is created by people, for people. Wikipedia pages are concise, carefully cited, and balanced – Grokipedia pages are repetitive, sloppy, and reflect Musk's own political biases. Wikipedia pages are the result of groups of anonymous nerds who value intellectual integrity and precision; if something is presented as a fact, it is likely to have been verified through a number of independent sources. The same cannot be said for Grokipedia, which is as accurate as any other large language model: a statistical machine optimised for what the internet says is probable, rather than what is actually true.

While much of the content on Grokipedia is lifted from Wikipedia, the Grokipedia pages are longer, sloppier, and partisan. This can be seen by comparing the biographies of Meredith Whittaker (GR), Timnit Gebru (GR) and Joy Buolamwini (GR), academics who champion the ethical and transparent development of technology, on both platforms. For each researcher, Grokipedia adds thousands of words on "Controversies and criticisms," hiding its own biases in sentences that start "proponents of …" and "critics have also," presenting one-sided opinions (from interviews and social media posts) as facts. Grokipedia is reality through Musk's lens, manipulated narratives presented in a format that people have come to trust.

Where's the opera?

Pruitt in 2022. Photo by Fuzheado, CC BY-SA 4.0

Steven Pruitt (GR), known as Ser Amantio di Nicolao on Wikipedia, has made over six million edits and loves Italian opera. He might have ignored Grokipedia if The Signpost hadn't asked for his opinion.

We've seen similar post-Wiki sites crop up before, and none of them have had any particular staying power. There's no reason to think that Grokipedia will be any different.

I looked up a handful of articles related to early nineteenth-century opera, mostly Italian, including several articles I created – Fanny Eckerlin, Le nozze in villa, Caroline Unger – as well as Isabella Colbran (which I did not). None have been transferred over; indeed, most of the articles I searched for were missing from Grokipedia.

I also looked up the article about myself. It is quite a bit more prolix than the one on the English Wikipedia, and not so well-organized. The wording is not identical to the English-language Wikipedia's: AI has been used to rewrite, or rework, large swaths of it, introducing a handful of minor errors. The article is also quite a bit longer, due to the introduction of semi-extraneous, often critical, information. It goes into some detail about criticisms of my work, though really nothing concrete: there is an entire section, with several subsections, titled "Criticisms and debates". It's nothing I haven't heard before, but it seems to be stitched together from blogs, forums, and comments.

Grokipedia is half-baked at best, and I don't see much of a future for it in its current state. I doubt it will have much staying power without a severe overhaul.

Can the machine keep up with us?

Vysotsky is a newly retired Dutch academic librarian, who wrote the Serendipity column for The Signpost for about two years. He has contributed over 12,000 photos to Wikimedia Commons and has been editing Wikipedia since 2007. Here he comments on the article about the 2025 Dutch general election (GR):

This is Grokipedia's method of operation, according to Grok: "How it works: Articles are automatically generated based on training data, with Grok integration for real-time updates and fact-checking. Users can submit suggestions, but there's no crowdsourcing like Wikipedia." Well, I fact-checked the "real-time updates". I know it’s only Grokipedia v.0.1, but updating the outcome of Dutch national elections shouldn't be too hard for artificial intelligence. It’s data, after all.

But no: the Dutch general elections were held on 29 October. The next day, 30 October, the overall outcome was clear: the Liberal Democrats (D66) won 26 seats, the nationalist anti-migration party (PVV) 26 seats, the conservative-liberal party (VVD) 22 seats, the Social Democrats and Greens (GL-PvdA) 20 seats, the Christian Democrats (CDA) 18 seats, and ten other parties won the remaining seats.

There was only one point unclear: which party had obtained the most votes, the Liberal Democrats or the Nationalists? That became clear on 1 November: the Liberal Democrats got the most votes – and could take the lead in trying to form a new government. English Wikipedia reported the results on 30 October (with 99.7% of the votes counted). As of 4 November, Grokipedia still gives a two-week-old prediction ("Projected Seats (Latest Aggregate, Oct 2025)"): PVV 40, GL-PvdA 24, CDA 24, D66 17, VVD 15. At the top of the page: "Fact-checked by Grok last month". So much for the speed of fact-checking by Grok. The leader of D66 and projected Prime Minister, Rob Jetten, isn't mentioned in their article at all; his party, D66, appears only once, in the table of predictions.

Most importantly: this reveals why we need humans, who are curious and dedicated to topics close to their heart, wanting to report on the matter as soon as official results are made available. Bots are neither curious, nor dedicated.

What is in a baby bottle?

Mary Mark Ockerbloom has edited Wikipedia for almost 20 years, works as a paid Wikipedian in Residence for educational, scientific, and cultural organizations, and organizes the Philadelphia WikiSalon for new editors and others who wish to develop their editing skills.

Imagine you're a mother-to-be wondering whether to use baby bottles to feed expressed milk or formula. Would you rather read a Wikipedia article curated by humans, or an AI-generated one from Grokipedia? Baby bottle (GR) is one of many articles I've substantially rewritten as a Wikipedian in Residence. While working on it, I asked myself, "What information would someone be trying to find when they read this article?"

Answering that question requires a deep understanding of our concerns as humans. LLMs are unlikely to do this well. They rely on the sources they are given, whether good, bad, or indifferent. They lack the underlying world knowledge that humans use to assess and prioritize information.

When I rewrote "Baby bottle", I added over 10,000 words and 154 references. I focused on things parents might want to know, like design considerations, materials, safety and use of baby bottles.

Grokipedia's first paragraph describes a baby bottle as having three typical components. Wikipedia cites a fourth, the protective cap used to keep bottles clean and prevent spills.

Grokipedia's second paragraph is one rambling sentence that begins with "prehistoric ceramic bottles", jumps to high levels of mortality in the 19th century, and concludes that safety has improved since then. This disjointed treatment of past events reflects LLMs' lack of real-world understanding of time. If you ask an LLM what is happening "today", it looks at millions of statements where the word "today" was used. Its answer may reflect what was said last week or ten years ago. It doesn't understand that "today" has meaning based on when it is asked.

Grokipedia's sentences are long, disjointed, and bombastic. Some of it reads like advertising. I'm thankful that Wikipedia editors have worked steadfastly to remove promotionally toned additions to the "Baby bottle" page. I know which page I'd rather read if I were a new mom.

Grokipedia did something better than Wikipedia

User Oltrepier mainly edits the Italian-language Wikipedia and is a Signpost reporter, too. He did find something that Grokipedia has done better than Wikipedia. In fact, the enWiki article on the Detention of Johan Floderus (GR), which Oltrepier himself created in 2023, is getting out of date.

The Grokipedia article, including the sources, is definitely more up-to-date than the one on Wikipedia, but it's also verbose and drags on and on. The language used by Grok can be clunky, with strange word choices – for example, "documented cases exceeding 66 victims". Right from the start, the article focuses less on the actual key events involving the EU diplomat Johan Floderus and more on the hostage diplomacy used by Iran in recent years, to the point where it reads more like a political speech than an encyclopedia entry. It definitely doesn't help to feature phrases such as, "This persistence highlights the judiciary's subordination to political imperatives, where legal facades mask bargaining tactics amid the regime's prioritization of ideological control over impartial justice".

AI can sometimes help

Wills (right) with Tanya Tucker in 2019. Photo by Mike Klem, CC BY-SA 4.0

Betty Wills, known as Atsme on Wikipedia since 2011, also founded Justapedia, a Wikipedia fork which resembles Wikipedia more than Grokipedia. Justapedia welcomes both conservative and liberal editors, according to Wills. With a little help from Grok 4 beta, she summarizes the difference between Wikipedia and Grokipedia as follows:

Grokipedia is not human, can't relate to the human condition, and can't initiate doubt, as it's an LLM. Garbage in, garbage out:

  • It amplifies the biases in its training data (e.g., overrepresenting Western perspectives). It needs human oversight and prompts like "Analyze your response for bias."
  • It will hallucinate plausible but false information (e.g., inventing non-existent historical events). It puts language fluency over accuracy, with no built-in fact-checking.
  • It cites Quora, The Daily Mail (UK), Britannica, and Biography.com. Like Grok, ChatGPT, and the other AI bots, it will also cite Reddit and Wikipedia.
  • It can't judge notability or importance and doesn't have the level of originality/creativity needed to create the kinds of new articles that will make it competitive with other encyclopedias.
  • Even if it neutralizes what it considers "bias", it's not trustworthy. It doesn't have Wikipedia's New Page Patrol, or even editors helping to keep fake articles out.

The Steele dossier article (GR) shows the differences in perspectives between Grokipedia and Wikipedia. Grokipedia tries to remove perceived biases to achieve a "neutral" POV, rather than covering all notable POVs.

Consuming AI-generated information is like quelling a growling stomach by downing a Whopper and fries from a drive-thru versus savoring a well-prepared, five-course meal in an upscale restaurant.

Unless it's too unimportant

I am Jake P. X. Gotts (known here by the initials JPxG); as the editor-in-chief of the Signpost, I was reading this article for a pre-publication copyedit and became curious. Well, nobody asked for my opinion, but here it is anyway:

Some years ago I wrote the Wikipedia article Powder House Island, here listed as a Featured Article, about an artificial island in the Detroit River built in the late 1880s to circumvent a court order forbidding storing explosives on an island a few hundred feet over. The whole island is barely big enough for a few trees, and its whole history constitutes a handful of events.

In the grand scheme of things, it's not a very important place, and they aren't very important events, but it's a neat little place and the story is interesting. It involved substantial amounts of research and analysis. Both my article and the island itself, incidentally: it was made to carry out the construction of a large shipping channel in the Detroit River. This channel served the busy port of a 19th- and 20th-century city bustling with every type of heavy industry. But Detroit's riverfront bustles no more; the world of industry is shaped differently now, and there is no place for it there.

I realize that current LLM systems have their limitations – I created WP:LLM – but I can find no solace with my head in the sand. I think it is straightforward to interpolate a trendline on something that sufficed as a party favor five years ago, a parlor trick four years ago, a research topic three years ago, an investment area two years ago, and a major economic influence today. At some point, it is inevitable that the process of 100% manual writing I carried out in this article (and the process I carry out here at the Signpost) will become an antiquated curiosity, as my previous occupations as a manual welder and manual forklift driver are presently becoming, and as the occupations of my forefathers as manual blacksmiths and manual harvesters long ere became.

So I went to see what the new computer god thought of Powder House Island, this sphinx of cement and silicon and backpropagation and attention heads — and —

I guess it hasn't had time to think everything through yet, which is convenient, because neither have I.

Conclusion

When the Grok chatbot, Betty Wills, and five other Wikipedians send out much the same message, it's hard to ignore. Grokipedia is extremely flawed, perhaps fatally so, because it's controlled by one biased person with extreme views and because it lacks human understanding and the human touch. Using AI to write an encyclopedia means that the "writers" do not think for themselves, cannot recognize a notable topic or a reliable source, hallucinate "facts", do not question their own writing, and cannot eliminate bias or find a neutral point of view.

What's Grokipedia missing? In a word, humanity.



Discuss this story

Six Wikipedians were asked, one editor (myself) ingloriously plopped his own section in at the end without being asked. jp×g🗯️ 21:15, 10 November 2025 (UTC)[reply]
Ahh, I see. I got confused because I missed the part With a little help from Grok 4 beta in the section AI can sometimes help. —⁠andrybak (talk) 21:59, 10 November 2025 (UTC)[reply]
Does Atsme still identify as a Wikipedian? Axem Titanium (talk) 03:18, 11 November 2025 (UTC)[reply]
She is still a Wikipedian. She hasn't been blocked since 2015, and the total time she's spent blocked was less than 10 days. There's no rule that you can't be a Wikipedian if you run a Wiki-fork, and it's always been a rule that you can do whatever you want with Wikipedia articles as long as you give credit to where they came from. The "right to fork" has long been viewed as a protection for Wiki editors – whatever the powers that be try to do to us, we've still got the "true Wikipedia" (however you define that) and can continue improving it. Whoever tries to shut us down (pick your favorite bogeyman: DT, JW, EM, WMF, ArbCom, Sb, UN – whoever), they cannot shut us down. In that sense Wikipedia is immortal. Thanks to @Atsme: for proving that. Smallbones(smalltalk) 14:23, 11 November 2025 (UTC)[reply]
I don't disagree with any of that, but it's not quite what I was asking. My question was if she still (self-)identifies as a Wikipedian. Axem Titanium (talk) 09:25, 13 November 2025 (UTC)[reply]
@Rodejong: Have you checked the HTML in your signature? I think it may be doing something strange to the page formatting. jp×g🗯️ 21:16, 10 November 2025 (UTC)[reply]
Yes, I can see now that it is -- you have neglected to close a small tag, I think. jp×g🗯️ 21:16, 10 November 2025 (UTC)[reply]
Yep, I noticed this attempt at the top of Grok's article on Steven Pruitt:
![Steven Pruitt in 2022. He is a middle-aged man with dark blond hair wearing an orange collar shirt.](./assets/Steven_Pruitt_-Depths_of_Wikipedia_DC-2022-05-27cropped Funcrunch (talk) 23:03, 10 November 2025 (UTC)[reply]
Interesting: on Commons, File:Steven Pruitt - Depths of Wikipedia DC - 2022-05-27 (cropped).jpg exists. ~2025-38162-37 (talk) 16:05, 3 December 2025 (UTC)[reply]
I also think that we're devoting far more time and headspace to thinking about Grokipedia than it deserves, frankly...but perhaps that's just me being more peppery than usual because I did not have a happy week last week. --Ser Amantio di NicolaoChe dicono a Signa?Lo dicono a Signa. 21:37, 10 November 2025 (UTC)[reply]
Not at all, I think most of us feel the same way. If I want to look something up, I'd rather look it up on Wikipedia than on an AI-rewritten dump whose narcissistic owner has a grudge against established institutions (which Wikipedia is, after 20 years, right?) and doesn't care about quality, as he has shown with X.  Kind regards, ✍️ Rodejong💬talk 23:43, 10 November 2025 (UTC)[reply]
Until the current LLMs, that is. I would say that we were making a mistake by not setting a limit to growth. The WMF has increased the number of wikis from 1 to 20 to 400, and we have not set a limit on the number of articles, especially stubs. The downside is that there is a finite pool of people prepared to spend time editing, and nearly all those wikis have a handful of editors, a majority of poorly translated one-off articles, and editor template backlogs going back many years. Wakelamp d[@-@]b (talk) 15:16, 11 November 2025 (UTC)[reply]
Your point actually somewhat reminds me of why I think Conservapedia failed. It is not an encyclopedia but a selective propaganda machine. I think I have only come across a handful of articles that were not written to make a political point. Grokipedia is better – it has actual articles, unlike Conservapedia – but it still definitely has that political point-making aspect to it. ✶Quxyz✶ (talk) 12:48, 13 November 2025 (UTC)[reply]
@Quxyz: That's sort of what I'm getting at without getting at it. There's a lot of talk about Grokipedia because of who's behind it, and frankly, I don't think that matters. What matters is motive. If we are truly looking to the future and wondering about projects meant to rival Wikipedia, I don't think we need to be worried about anything with an agenda. The only thing that will come close to having any staying power is any project that aims to be truly encyclopedic. Otherwise I think it's just noise. --Ser Amantio di NicolaoChe dicono a Signa?Lo dicono a Signa. 22:06, 15 November 2025 (UTC)[reply]

On FoP subject

Grokipedia does seem to have convincing coverage of freedom of panorama at first glance (though it appears to use online forums and social media comments as sources), but several errors can be seen in the "comparative national laws" section. Some of them I have noted in the description of the screenshot I uploaded yesterday (PhST/UTC+8). JWilz12345 (Talk|Contrib's.) 02:34, 11 November 2025 (UTC)[reply]


As an aid to editors

Grok has limitations, and the biggest is that it is a single voice. But we have problems as well that LLMs could help with.

Grok and other LLMs can also help editors. I don't suggest that any of these be built into Wikipedia, but

Re point 1: I've been looking at the old maintenance templates recently, and I don't think I would trust an LLM to work through them. They're not that complex, all things considered, but I would consider verifying correctness in addressing those templates to be a priority, especially for the older ones. Alpha3031 (tc) 05:15, 12 November 2025 (UTC)[reply]

I'll take a contrarian view, versus the Six Wikipedians' thoughts. I agree it shows great promise as an aid to editors. I looked at its seven edits to my article (click "See edits", upper right) and found them interesting, and some were even helpful. I feel like I've looked through a window into the future. Those who are curious may look at my analysis and edits to the Wikipedia version. – wbm1058 (talk) 01:21, 13 November 2025 (UTC)[reply]

Wbm1058, I like your approach in the "my analysis" link. We can learn from Grok and AI with appropriate guardrails and discipline. -- GreenC 00:11, 14 November 2025 (UTC)[reply]
I admit, I enjoyed the photos and "real names", a welcome occasional break from the customary use of pseudonyms. Also some interesting observations were made.
What I am really waiting for, I guess, is to see how things go for Grokipedia. How will it evolve? How will it be received and accepted (or not)? I understand that I might have to wait years, and the Signpost won't necessarily be where I learn about it. Bruce leverett (talk) 16:17, 17 November 2025 (UTC)[reply]




The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0