In Common Knowledge: An Ethnography of Wikipedia, Dariusz Jemielniak—User:Pundit on the English and Polish Wikipedias and a steward—discusses Wikipedia from the standpoint of an experienced editor and administrator who is also a university professor specializing in management and organizations. In Virtual Unreality: Just Because the Internet Told You, How Do You Know It's True?, journalism professor and author Charles Seife presents a more broadly themed work reminding us to question the reliability of information found throughout the Internet; he cites Wikipedia as a prime example of a website containing enough misinformation to warrant caution before relying on it.
Jemielniak's book is an academic discussion of Wikipedia; he does not aim to present either a "how-to" guide for editors and readers or a complete history of the project. He states that his "book is a result of long-term, reflexive participative ethnographic research" performed as a "native anthropologist." (p. 193) (The word "ethnographic" in this context refers not to ethnicity in the quasi-racial sense, but to the study of a subgroup of the population—here, the subgroup that actively edits Wikipedia.) By this, Jemielniak means that he has spent several years as a Wikipedian, has introspected about his experiences throughout that time through the lens of his academic background, and has now written up his findings and conclusions. I don't think he means that he became active in Wikipedia for the purpose of doing research about it, although it seems quite possible that he started thinking about combining his editing hobby and his professional interests fairly early in his wiki-career.
I cannot pretend to evaluate Common Knowledge as a work of anthropology or of organizational management science. As a general reader and a Wikipedian, I found the book interesting as a compilation of incidents in Wikipedia's history, some of which I was already familiar with and some of which were new to me, and as a reminder of some issues the project faces as it moves forward. Non-academic readers may find the book lacking in a unifying theme, beyond the observation that Wikipedia plays an important role in the world today, one that warrants academic study of its culture and communities. Jemielniak recently stated (on a Wikipediocracy thread) that "I wrote this book for academic research purposes, I absolutely have no hope of high sales (and honestly, I'll be surprised if it goes beyond 500 copies)." The book has been praised by Jimmy Wales, Clay Shirky, Jonathan Zittrain, and Zygmunt Bauman, and it deserves to sell well over 500 copies, but it won't be making the wiki-best-seller list either.
The eight chapters of Common Knowledge discuss basic rules governing Wikipedia, different roles contributors take on within the project, dispute resolution processes, and the nature of project leadership. The topics are illustrated with examples of disputes or controversies drawn primarily from English Wikipedia history (though controversies about actions by Jimbo Wales on Wikimedia Commons and Wikiversity are also mentioned). The incidents Jemielniak discusses are presented in detail and accurately, but some of them are ten years old and don't necessarily reflect the project's practices or realities today. For example, Jemielniak reviews the bitter and protracted disagreement on En-WP regarding when the historical German-language name "Danzig" should be used for the city now located in Poland and known as Gdańsk. Perhaps aided by his own geographical and historical background, he does an excellent job of presenting the history of the dispute, surveying the arguments for the different points of view, and explaining why the dispute-resolution process ultimately reached the result it did. He does not, however, discuss whether the Wikipedia of 2014 would address the same issue, if it were arising anew, in the same fashion that the much younger Wikipedia of 2003-2004 did.
Jemielniak also doesn't spend much time discussing how lessons learned from Wikipedia dispute-resolution experiences can be used to minimize future disputes or to improve future decision-making. I find this unfortunate, but I can't call it a fault of the book, both because ethnography is descriptive rather than prescriptive, and more importantly because the failure to take stock of dispute-resolution successes and failures has struck me for years as a project-wide myopia. In the 13½ years of English Wikipedia there have been, in round numbers, a billion edit-wars, yet no one knows whether most edit-wars get resolved by civil discussion reaching a consensus on the optimal wording, or by one side's giving up and wandering away (or sometimes by everyone's ultimately losing interest and wandering away). Similarly, the English Wikipedia Arbitration Committee has decided several hundred cases since 2004, and community discussions on noticeboards have resolved thousands more content and conduct disputes, yet no one ever seems to have gone back and conducted any systematic review of which approaches to dispute-resolution worked better than others. That's a different book that ought to be written, although it too risks selling fewer than 500 copies.
Speaking of ArbCom (which I'm prone to do since I've served on ours since 2008), Jemielniak mentions the Arbitration Committees of both the Polish Wikipedia and the English Wikipedia. He opens the book with an account of a Polish Wikipedia arbitration case that resulted in his being blocked from Pl-WP for one day. He says that in retrospect he accepts the ruling against him, but his account of the dispute makes that ruling sound terribly unfair—a cynical gesture of evenhandedness meted out to editors who didn't deserve to be treated evenhandedly. (But of course those of us who can't read Polish will never hear the other side of the story.)
The book's mentions of En-WP ArbCom are sound, but dated. He discusses the historical origin of the Committee as an extension of the original authority of Jimmy Wales, and cites a handful of Committee decisions, the most recent of which is an unusual case-motion from 2009. He does not spend much time on the current role of the Committee. That's actually a very defensible omission, because at least on English Wikipedia (I can't speak for other projects), while ArbCom has other responsibilities (some of which most of us don't particularly want), the importance of the Arbitration Committee as an arbitration committee has radically declined in the past few years. (I've discussed this decline here.) So Jemielniak's decision not to spend nearly as much space on arbitration as one might expect in a book about Wikipedia hierarchies, leadership, and dispute resolution turns out to be a reasonable one, though he never explains it.
Although the academic style of Common Knowledge (and the price of the book) will deter some readers, Wikipedians who want a taste of Jemielniak's thinking about the project can find it in a recent article he contributed to Slate, "The Unbearable Bureaucracy of Wikipedia". In this article, aimed at a general rather than an academic audience, Jemielniak posits that Wikipedia's "increasingly legalistic atmosphere is making it impossible to attract and keep the new editors the site needs." It's a thoughtful article that identifies a significant issue, and its more direct approach, accompanied by concrete suggestions, makes the article more accessible than Common Knowledge for non-specialist readers. All of us who want Wikipedia to thrive, which requires that the project welcome newcomers and facilitate their becoming regular editors, can hope for more such wisdom from this Pundit.
By contrast to Jemielniak's academic treatment specific to Wikipedia, Charles Seife—the author of Zero, Alpha and Omega, and Proofiness—has written a more broadly themed book about the unreliability of information found throughout the Internet. "Just because the Internet told you," the subtitle asks, "how do you know it's true?" Now at one level, the fact that the Internet contains a fair amount of misinformation is not breaking news; "Someone is wrong on the internet" became a meme and then a cliché for a reason. Lots of us think we're sophisticated enough to avoid falling into the kinds of traps that Seife warns us about—but the warnings in Seife's book are important and timely nevertheless.
Wikipedia is just one of the many online sources of bad information that Seife discusses, but for obvious reasons it's the one I'll focus on here. Seife catalogs a dozen instances in which deliberate misinformation was introduced into Wikipedia. Such misinformation is inserted into Wikipedia, perhaps every day, by a miscellaneous array of pranksters, hoaxers, vandals, defamers, and in a few instances by Wikipedia critics conducting so-called "breaching experiments" to see how long a falsehood placed in Wikipedia stays in Wikipedia. (Such experiments are not permitted; see also Wikipedia:Do not create hoaxes.) Some of Seife's examples will be well-known to "Signpost" readers, such as the Colbert-inspired tripling of elephants and the Bicholim Conflict; others were new to me, such as AC Omonia Nicosia and the Edward Owens hoax.
Experienced Wikipedians are well aware of this problem, as are our critics. English Wikipedia, in what can equally be considered admirable self-criticism or self-absorbed navel-gazing, contains discussions of hoaxes on Wikipedia; we also have a lengthy List of hoaxes on Wikipedia; and another compilation recently appeared on a critic site, Wikipediocracy.
Misinformation in the media has always been with us (Tom Burnham's books were favorites of mine growing up, and I'm mildly dismayed that Burnham's name comes up as a redlink), but it certainly is possible to spread false information more rapidly online than it was in the analog era. Of course, it is possible to spread correct information more rapidly as well. A particular problem is misinformation posted on Wikipedia—and elsewhere all over the Internet—with the purpose of doing harm to someone. (A prime example of this sort of thing is the Qworty fiasco that unfolded last year.) Any falsehoods in article content damage the credibility and usefulness of the encyclopedia we are collaboratively writing, but intentional falsehoods posted by a subject's personal or political or ideological enemies with the malicious intent to defame or damage a living person do so tenfold. I am confident that well over 99% of Wikipedia pages are free of intentional falsehoods—yet no one can deny that Wikipedia articles must still contain far too many lies, damn lies, and sadistics.
Neither Seife nor Jemielniak says much about the biographies of living persons policy and its enforcement, although many Wikipedians, myself included, have long thought fair treatment of our article subjects to be the central ethical issue affecting the project. I know that when I've been defamed online I didn't enjoy it, and Wikipedia BLP subjects feel the same way when their number-one Google hit has been edited in nasty ways by their personal or political or ideological enemies. (The good news is that when I or others spot defamation on Wikipedia we are often able to do something about it; I've often wished that I had an "edit" and a "delete" button that I could use on the rest of the Internet.)
Seife's discussion of misinformation on Wikipedia focuses on intentionally false information, but a greater number of inaccuracies are introduced by editors who make honest mistakes than by hoaxers and vandals. Sometimes mistakes are made by good editors who inadvertently type the wrong word or misread a source. Other times, we encounter a good-faith editor who wants to help build Wikipedia but, at least in a given topic-area, simply doesn't know what he or she is talking about. Wikipedia has no systematic process of quality control beyond surmounting the bar for deletion, at least until one seeks to bring an article to the mainpage or have it rated (at which point various sorts of flyspecking take place—some of which can be overdone, but that's another discussion). On English Wikipedia today, there are dedicated noticeboards to address conflict-of-interest issues, evaluate the reliability of sources, solve copyright problems (some quite abstruse), keep fringe theories in check, and put a stop to edit-warring. I've never seen anyone wonder why there's no dedicated noticeboard where one goes for help in figuring out whether questionable information in an article is accurate or not.
Despite the falsehoods he identifies, all of which have now been removed, Seife acknowledges that "by some measures one can argue that Wikipedia is roughly as accurate as its paper-and-ink competitors." (p. 29) He cites the well-known 2005 Nature article comparing the accuracy of Wikipedia's scientific content to that of a canonical, traditional reference source, the Encyclopedia Britannica. One continues to read of comparisons of Wikipedia with traditional library reference books (see Reliability of Wikipedia). The Wikipedia community should certainly aspire for our encyclopedia to land on the favorable side of such comparisons. I think that on balance it does.
But "Wikipedia vs. Britannica" is no longer the right question, or at least not the only right question. At least equally relevant today is how Wikipedia's completeness and fairness and accuracy compare, not only to traditional media sources, but to the other information available on the Internet. Wikipedia has evolved as part of, not independent of, the Internet as a whole. And it is the Internet as a whole, not just Wikipedia, that has changed the population's information-searching habits, so that today when one needs or wants to look something up, one does so on the computer or a handheld device rather than in a book or a (hard-copy) journal or newspaper. In the unlikely event that Wikipedia (and all of its mirrors and derivatives) were to disappear tomorrow (and not be replaced by a similar site), our readers from schoolchildren to senior citizens would not revert to the habits of 25 years ago and start trooping to the library or even the reference shelves in their living rooms when they wanted to check a fact. (I am not saying this is a good thing or a bad thing, though it has elements of both; it is simply a truth.)
Instead, people in the wikiless world would still perform the same Google searches that today bring up their subject's Wikipedia article as a top-ranking hit. They would find the same results, minus Wikipedia, and they would look at the other top-ranking hits on their subject instead. Would those pages, on average, provide better-written, better-sourced, more accurate, and fairer coverage of their subject than the corresponding Wikipedia pages? And to the extent the answer is yes, how do we make the best of that content accessible from Wikipedia? A future Wikipedia scholar may wish to focus on these questions (and produce another 495-copy-selling book).
Seife rather kindly refrains from discussing his own BLP in the book as an example of a questionable Wikipedia page. Predictably, that page is the first Google hit on Seife's name (his own webpage at NYU is second). Unfortunately, the article bears a prominent, disfiguring banner at the top of the page, proclaiming that the article is poorly written.
Now, no well-informed reader of Wikipedia would take this pronouncement alleging that Charles Seife is an ill-written article as a reflection against Charles Seife. (If anything, the obvious circular reasoning suggests sloppiness in the crafting of the tag.) After all, the reader would know that Charles Seife wouldn't have written the article and, as a matter of our conflict-of-interest guidelines, is discouraged from editing the article at all, much less improving its overall editorial quality. Nonetheless, it isn't exactly encouraging that in the 13 months since an anonymous IP editor added that tag, no one has improved the article enough to resolve the quality concern and remove the tag. If I were notable enough to warrant a Wikipedia BLP and this were the state of it for over a year, I think I'd have the right to be ticked off. (Cynical aside to editors interested in Wikipedia's public relations: improve the BLPs of journalists likely to cover us.)
Meanwhile, in a recent radio interview—which is well worth listening to—Seife says that Wikipedia gets four or five facts about his life wrong (not controversial claims, he says, just basic facts, though he doesn't name them), which, knowing about the COI guideline, he didn't fix himself. (Aside to Charles Seife: let me know about the non-controversial fixes needed and I'll make them myself. You won't need to go to The New Yorker à la Philip Roth.)
The bottom line on these two books: Wikipedians should read (and think carefully about) Jemielniak's Slate article, but only the hardier ones among us will gain the full benefit of his book, although all of us should thank him for writing it. More Wikipedians will enjoy Seife's book, though only a sliver of it is about Wikipedia, and perhaps everyone should listen to his radio interview, although for many of us both the book and interview will reinforce, rather than challenge, our existing views about the reliability of the information that surrounds us.