The Signpost

A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.

"Monetary materialities of peer-produced knowledge: the case of Wikipedia and Its Tensions with Paid Labour"

Reviewed by Nicolas Jullien

This article[1] discusses the links between paid and voluntary efforts in the development of Wikipedia, focusing on the question of paid editing. It stresses the fact that Wikipedia is a mixed economy that results partly from paid labor (the technostructure and the people in charge of maintaining it, and those who defend the project in court, i.e. the paid employees of the WMF).

The core of the article discusses the debate about Wiki-PR (a company which was paid by firms to "edit" their English Wikipedia pages) and the impact it had on Wikipedia policy. It sheds light on the discussion between the Foundation, which expressed a stricter interpretation of the rules, and the contributors, especially from non-English Wikipedias, who took a more "pragmatic" approach: paid editors provided help to the smaller projects in terms of creation of knowledge. The analysis, which views Wikipedia as a sort of communist organization, is less convincing, as is the fact that the authors did not compare this debate with what happens in FLOSS (free-libre open-source software) or in the non-digital world (the Foundation, or the local community groups), which are other examples of the co-existence of voluntary and paid work.

"The Swedish Wikipedia gender gap"

Reviewed by Piotr Konieczny

This master's thesis[2] focuses on the Swedish Wikipedia and its gender gap, quantifying it and exploring why Swedish women are not contributing to the project. The author collected data through a questionnaire advertised in December 2014 on the Swedish Wikipedia through a project-wide banner (promotion that an average researcher can only dream about when it comes to the English Wikipedia). The paper estimates the Swedish Wikipedia gender gap in the form of the percentage of female editors at between 13% and 19%, based on self-reported data from Wikipedia account profiles and answers to the questionnaire. More interesting is the analysis of the activity of the accounts: the self-declared male accounts are several times more active than the female accounts, with the author estimating that only about 5% of the site's content is written by women. Contrary to some prior research (most of which focused on the English Wikipedia), the Swedish Wikipedia's editors and readers do not perceive Wikipedia as a place where sexist comments are significant, though about a third agree that general conflicts between editors do take place. Nonetheless, women are less likely than men to think (1) that Wikipedia is welcoming to beginners; (2) that everyone gets treated equally, regardless of gender; (3) that editing means taking on conflicts. Women are more likely than men to acknowledge the existence of sexist comments. In the author's own words, "women have more concerns about the community being sexist and not welcoming, and do not expect conflict as part of editing to the same degree as men", though the author also notes that statistical tests suggest that "the differences in opinion between gender groups do not differ [sic] greatly".

The author concludes that there is no evidence that the Swedish Wikipedia's readers have any preconceived negative notions about the Wikipedia community (such as "it is sexist") that should inhibit potential women contributors from editing and thus contribute to the gender gap. He states: "Significant differences in perceived competence were found. Women report 'I’m not competent enough' as a strong contributing factor to them not editing more than twice as often as men." The author suggests that because women often perceive, whether correctly or not, that they have lower computer skills than men, and see Wikipedia as a website which requires above-average computer skills, this (rather than an unfriendly, sexist community) may be the most significant factor affecting their lack of contributions. (Cf. related coverage: "Mind the skills gap: the role of Internet know-how and gender in differentiated contributions to Wikipedia", "Does advertising the gender gap help or hurt Wikipedia?")

Test of 300k citations: how verifiable is "verifiable" in practice?

Reviewed by Tilman Bayer

Four researchers from Dartmouth College have taken the requirement of "verifiability", one of Wikipedia's core content policies, literally. Their preprint[3] examines 295,800 citations from the 5,000 most viewed articles on the English Wikipedia (out of a larger set of 23 million citations extracted from a July 2014 dump). These comprised both inline citations (footnotes) and "free citations" (those not related to any particular part of the article). The authors conclude that

"while the quality of references in the overall sample is reasonably high, verifiability varies significantly by article, particularly when emphasizing the use of standard digital identifiers and taking into account the practical availability of referenced sources."

Unsurprisingly, the study did not examine whether the cited documents actually match the information in the articles. Rather, it concerns the question of whether a citation enables the reader to carry out this verification. The authors argue that

"simply providing citations and references does not automatically guarantee verifiability. Whether or not provided references and citations are accessible ... is just as important as providing the reference or citation in the first place. There are many ways that an online information source might provide citations and references and still be difficult to verify."

They divide these difficulties into two categories: "technical verifiability" and "practical verifiability."

Technical verifiability is defined as "the extent to which a reference provides supporting information that permits automated technical validation of the existence of the referenced material, based on existing technical standards or conventions," concretely ISBNs, DOIs and Google Books IDs. The study found that:

  • "Out of 37,269 book citations, 29,736 book citations (79.8%) had valid ISBNs, while 3,145 (8.4%) of book citations had invalid ISBNs, and 4,388 book citations (11.8%) contained no ISBN information."
  • "Out of 14,081 Google Books-containing citations, 3,159 (22.4%) contained invalid Google Books IDs."
  • "presence or absence of a Digital Object Identifier (DOI) was noted for any reference tagged as‘journal’, ‘study’, ‘dissertation’, ‘paper’, ‘document’, or similar. Out of 41,244 of these citations, only 5,337 (12.9%) contained neither a DOI or a link to a known open access journal."

Practical verifiability is defined as "the extent to which referenced material is accessible to someone encountering the reference." In particular, the authors point out that information supported by a paywalled journal article "is practically unverifiable to someone without the additional means to access the supporting journal article. Similarly, if an ISBN is present but refers to a book that only has one extant copy in a library thousands of miles away, then the information it supports is practically unverifiable to someone without the additional means to access the supporting book." Apparently the authors found it difficult to translate these notions into criteria that would lend themselves to a large scale quantitative analysis, and settled for two rather narrowly defined but still interesting aspects:

  • "Journal citations linking to ‘arXiv' and 'PubMed Central (PMC)' were taken to be open access, while all others were marked unconfirmed. 5,275 of the journal citations out of 41,244 (12.8%) belonged to this confirmed open access category, while 30,632 (74.3%) contained some digital identifier but were not confirmed to be open."
  • "Out of the 10,922 working Google Books links, most (7,749, or 71.0%) are partially viewable with samples, while 1,359 (12.4%) are fully viewable and 1,814 (16.6%) are not viewable at all."

The preprint also contains a literature overview about information quality on Wikipedia, which does the topic scant justice (e.g. of the only three systematic studies of article accuracy mentioned, one is the well-known but over a decade old Nature study, and another is a 2014 article whose methodology and conclusions have been described as very questionable; see also below).

With some caveats, e.g. that the quality of the 5,000 most-viewed English Wikipedia articles might differ from the quality of the average article, the authors conclude that "from the perspective of overall quality of references in Wikipedia, these findings might seem encouraging", but are concerned that many citations are not practically verifiable.

Twelve years of Wikipedia research

Reviewed by Tilman Bayer

This short (two-page) paper[4] presents "preliminary results that characterize the research done on and using Wikipedia since 2002". It is based on a dataset of 3582 results of a Scopus search in November 2013 (for the term "Wikipedia" in title, abstract and keywords), largely relying on the abstracts of these publications. 641 of them were discarded as unrelated. Of the remaining 2968, the relevance for Wikipedia was judged as "major" for 2301 and as "minor" for 667.

Examining a dichotomy that is familiar to the editors of this newsletter too (which, for example, usually does not cover papers that merely rely on Wikipedia as a text corpus, even though these are numerous in fields such as computational linguistics), the authors write:

"In terms of topic, there were almost an equal number of items about Wikipedia (1431, 48%) as there were using Wikipedia (1537, 52%)",

defining the latter as employing "Wikipedia either as a source/resource for other research or used Wikipedia to test the feasibility and applicability of tools or methods developed for purposes not directly related to Wikipedia". Those papers only began appearing in 2005, but overtook the "about" category in 2009 and have remained in the majority since. (See also coverage of a presentation at Wikimania 2013 that likewise traced publication numbers over the years – based on Google Scholar instead of Scopus – and dated the first appearance of "Wikipedia as a corpus" research to 2005, too: "Keynote on applicable Wikipedia research")

The researchers classified publications by their methodology, into "social/theoretical" (including "analyses and visualizations of Wikipedia") and "technological" (in the "about" category, this classification was reserved for "tools developed for improving Wikipedia"), and found that:

"the technological approach was considerably more popular (1856 items, 63%) compared to the social approach (1112 items, 37%). ... we see that at first the social aspects were emphasized, but since 2007 papers on technological aspects are much more frequent."

The authors extended their search beyond Scopus to Web of Science and the ACM Digital Library for an examination of how the overall volume of published Wikipedia research has developed over time. The resulting chart indicates that the fast growth of earlier years leveled off, with even some decrease in 2013, the last year examined.

Further criticism of study that had criticized accuracy of medical Wikipedia articles

Reviewed by Tilman Bayer

Three letters to the editor of the Journal of the American Osteopathic Association add to the criticism of an article[supp 1] by Hasty et al. that had appeared in the same journal earlier and was widely covered in the media with headline phrases such as "90% of [Wikipedia's] medical entries are inaccurate".

Like editors from WikiProject Medicine at the time, the writers of the first letter[5] lament that the paper's authors "have not made their dataset public, so it is impossible to confirm the veracity of their conclusions"; however, "they did share with us a small subset of their dataset on major depressive disorder. We closely examined two statements from Wikipedia that the researchers identified as inaccurate." After outlining that the peer-reviewed literature on these two issues is "rife with debate", and pointing out that some of it supports rather than contradicts the information on Wikipedia, they state that "It seems problematic to conclude that statements made in Wikipedia are wrong based on peer-reviewed literature", also quoting the editors of Nature observing that "peer review per se provides only a minimal assurance of quality". (On another occasion, the lead author had revealed a third Wikipedia statement that according to the study contradicted the peer-reviewed literature and which he described as dangerously wrong; however, it was in agreement with the hypertension guidelines of the UK National Institute for Health and Care Excellence (NICE).[supp 2])

The letter writers highlight the fact that the study thereby relied on "third-year residents with no specific expertise [to] correctly ascertain the accuracy of claims made on Wikipedia". In a response[6], Hasty et al. acknowledged that the peer-reviewed literature contained diverging viewpoints on the topic, but held that "if Wikipedia articles are considered review articles, then it would be expected that major controversial points would be discussed rather than presented from one perspective."

The second letter[7] criticizes that "Because Hasty et al did not identify a specified number of assertions for each condition and did not measure whether Wikipedia and peer-reviewed literature were correct or not, respectively, their use of the McNemar test to compare Wikipedia vs peer-reviewed medical literature was inappropriate." A third letter likewise criticizes the usage of this statistical test, adding that "I believe that the study here was incorrectly analyzed and inappropriately published through the same peer-review process that Hasty et al are holding to such high esteem."[8] In their response[6] Hasty et al. defended their method, while acknowledging that "for greater clarity" some tables should have been labeled differently.
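The statistical point here is that McNemar's test applies only to paired binary outcomes: each assertion must be scored both against Wikipedia and against the peer-reviewed literature, yielding a 2×2 table whose discordant cells drive the statistic. A minimal illustrative sketch (the counts below are hypothetical, and the letter writers' complaint is precisely that Hasty et al.'s data were not collected in this paired form):

```python
def mcnemar_statistic(b, c):
    """McNemar's chi-squared statistic for a paired 2x2 table.

    b: pairs where source A was right and source B wrong
    c: pairs where source A was wrong and source B right
    Concordant pairs (both right, or both wrong) do not enter the statistic.
    """
    if b + c == 0:
        raise ValueError("no discordant pairs; the test is undefined")
    return (b - c) ** 2 / (b + c)

# Hypothetical counts: 5 assertions correct only on Wikipedia,
# 15 correct only in the peer-reviewed literature.
print(mcnemar_statistic(5, 15))  # 5.0
```

Without per-assertion right/wrong judgments for both sources, the b and c cells cannot be filled in, which is why the letters call the test's use inappropriate.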

With such severe criticism from several independent sources, it is hard not to see this 2014 paper by Hasty et al. as discredited. Unfortunately, it continues to be occasionally cited in the literature (as mentioned in the review of the "verifiability" paper above) and in the media.

Briefly

The attention economy of Wikipedia articles on news topics

Reviewed by Tilman Bayer
Comparison of topic attention (red and gold lines: average and median pageview numbers to neighboring pages, black line: traffic to the page itself) and creation of new pages linked to the topic (vertical black segments) for an expected event (2012 Summer Olympics, top) and an unexpected event (Hurricane Sandy, bottom). In the graphs on the right, "white nodes represent the neighbor articles predating 2012; colored nodes correspond to neighbors created in 2012. The size of the nodes is proportional to their yearly traffic volume; ... New articles tend to be peripheral to these networks."

A paper[9] in Scientific Reports examined how the public attention to a news topic relates to the pageviews of the Wikipedia article about that topic, and the creation dates of related articles. As proxy for the general attention to the topic, the authors use traffic to pages "neighboring" the main article about the topic itself (i.e. linking to and linked from it), including the time before it was created. From the (CC BY licensed) paper:

"Our analysis is focused on the year 2012. We collected the neighbors of 93,491 pages created during that year. ... Which kinds of articles precede or follow demand for information? In Table 1 we list a few articles with the largest positive and negative bursts. Topics that precede demand (ΔV/V > 0) tend to be about current and possibly unexpected events, such as a military operation in the Middle East and the killing of the US ambassador to Libya. These articles are created almost instantaneously with the event, to meet the subsequent demand. Articles that follow demand (ΔV/V < 0) tend to be created in the context of topics that already attract significant attention, such as elections, sport competitions, and anniversaries. For example, the page about Titanic survivor Rhoda Abbott was created in the wake of the 100th anniversary of the sinking."


A Swiss perspective on Wikipedia and academia

Reviewed by Piotr Konieczny

This conference paper[10] states in its abstract an intent to broadly analyze and present all aspects of Wikipedia use in education. Unfortunately, it fails to do so. For the first four and a half pages, the paper explains what Wikipedia is, with next to no discussion of the extensive literature on the use of Wikipedia in education or its perceptions in academia. There is a single paragraph of original research, based on interviews with three Swiss Wikipedians; there is little explanation of why those people were interviewed, nor are there any findings beyond a description of their brief editing histories. The paper ends with some general conclusions. Given the semi-formal style of the paper, this reviewer finds that it resembles an undergraduate student paper of some kind, and it unfortunately adds nothing substantial to the existing literature on Wikipedia, education and academia.

Other recent publications

A list of other recent publications that could not be covered in time for this issue – contributions are always welcome for reviewing or summarizing newly published research.

From Joseph Priestley's A Chart of Biography (1765), referenced in this paper about biography networks on Wikidata

References

  1. ^ Lund, Arwid; Venäläinen, Juhana (17 February 2016). "Monetary materialities of peer-produced knowledge: the case of Wikipedia and its tensions with paid labour". tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society. 14 (1): 78–98. ISSN 1726-670X.
  2. ^ Helgeson, Björn (2015). "The Swedish Wikipedia gender gap" (master's thesis). Stockholm, Sweden: Royal Institute of Technology.
  3. ^ Harder, Reed H.; Velasco, Alfredo J.; Evans, Michael S.; Rockmore, Daniel N. (18 September 2015). "Measuring Verifiability in Online Information". arXiv:1509.05631.
  4. ^ Bar-Ilan, Judit; Noa Aharony (2014). "Twelve years of Wikipedia research". Proceedings of the 2014 ACM Conference on Web Science. WebSci '14. New York, NY, USA: ACM. pp. 243–244. doi:10.1145/2615569.2615643. ISBN 978-1-4503-2622-3.
  5. ^ Leo, Jonathan; Lacasse, Jeffrey (October 2014). "Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Conditions II". The Journal of the American Osteopathic Association. 114 (10): 761–764. doi:10.7556/jaoa.2014.147. ISSN 0098-6151.
  6. ^ a b Hasty, Robert; Garbalosa, Ryan; Suciu, Gabriel (October 2014). "Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Condition [Response]". The Journal of the American Osteopathic Association. 114 (10): 766–767. doi:10.7556/jaoa.2014.150. ISSN 0098-6151.
  7. ^ Chen, George; Xiong, Yi (October 2014). "Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Conditions III". The Journal of the American Osteopathic Association. 114 (10): 764–765. doi:10.7556/jaoa.2014.148. ISSN 0098-6151.
  8. ^ Gurzell, Eric (October 2014). "Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Conditions IV". The Journal of the American Osteopathic Association. 114 (10): 765–766. doi:10.7556/jaoa.2014.149. ISSN 0098-6151.
  9. ^ Ciampaglia, Giovanni Luca; Flammini, Alessandro; Menczer, Filippo (19 May 2015). "The production of information in the attention economy". Scientific Reports. 5: 9452. doi:10.1038/srep09452. ISSN 2045-2322.
  10. ^ Timo Staub, Thomas Hodel (2015). "WIKIPEDIA vs. ACADEMIA. An investigation into the role of the Internet in education, with a special focus on collaborative editing tools such as Wikipedia" (PDF). The 11th International Scientific Conference eLearning and Software for Education Bucharest, April 23–24, 2015. doi:10.12753/2066-026X-15-001.
  11. ^ Warncke-Wang, Morten; Ayukaev, Vladislav R.; Hecht, Brent; Terveen, Loren G. (2015). "The success and failure of quality improvement projects in peer production communities". Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. CSCW '15. New York, NY, USA: ACM. pp. 743–756. doi:10.1145/2675133.2675241. ISBN 978-1-4503-2922-4. (Author's copy)
  12. ^ Matias, J. Nathan; Diehl, Sophie; Zuckerman, Ethan (2015). "Passing on: reader-sourcing gender diversity in Wikipedia". Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. CHI EA '15. New York, NY, USA: ACM. pp. 1073–1078. doi:10.1145/2702613.2732907. ISBN 978-1-4503-3146-3. (Author's copy)
  13. ^ Goldfarb, Doron; Merkl, Dieter; Schich, Maximilian (22 June 2015). "Quantifying cultural histories via person networks in Wikipedia". arXiv:1506.06580.
  14. ^ Katz, Gilad; Rokach, Lior (5 January 2016). "Wikiometrics: a Wikipedia-based ranking system". arXiv:1601.01058.
  15. ^ Katz, Gilad; Shapira, Bracha (13 August 2015). "Enabling complex Wikipedia queries – technical report". arXiv:1508.03298.
  16. ^ Yam, Shing-Chung Jonathan (2015). "Wisdom of the crowd: Wikipedia controversies and coordinating policies" (PDF). Journal for the Liberal Arts And Sciences. 20 (1). ISSN 2167-3756.
  17. ^ Phillips, Murray G. (7 October 2015). "Wikipedia and history: a worthwhile partnership in the digital era?". Rethinking History. 0 (0): 1–21. doi:10.1080/13642529.2015.1091566. ISSN 1364-2529.
  18. ^ Bick, Eckhard (2014). "Translating the Swedish Wikipedia into Danish" (PDF). Swedish Language Technology Conference 2014.
  19. ^ Bissig, Fabian (22 October 2015). "Drawing questions from Wikidata" (PDF). Zurich, Switzerland: Distributed Computing Group; Computer Engineering and Networks Laboratory – ETH Zurich.
  20. ^ Shapira, Bracha; Ofek, Nir; Makarenkov, Victor (2015). "Exploiting Wikipedia for information retrieval tasks". Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '15. New York, NY, USA: ACM. pp. 1137–1140. doi:10.1145/2766462.2767879. ISBN 978-1-4503-3621-5.
  21. ^ Henkes, D. (2015). "Relation between Wikipedia edits and news published" (bachelor's thesis / student essay).
  22. ^ Garrison, John C. (5 October 2015). "Getting a "quick fix": first-year college students' use of Wikipedia". First Monday. 20 (10). doi:10.5210/fm.v20i10. ISSN 1396-0466.
Supplementary references:
  1. ^ Hasty, Robert T.; Garbalosa, Ryan C.; Barbato, Vincenzo A.; Valdes, Pedro J.; Powers, David W.; Hernandez, Emmanuel; John, Jones S.; Suciu, Gabriel; Qureshi, Farheen; Popa-Radu, Matei; Jose, Sergio San; Drexler, Nathaniel; Patankar, Rohan; Paz, Jose R.; King, Christopher W.; Gerber, Hilary N.; Valladares, Michael G.; Somji, Alyaz A. (1 May 2014). "Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Conditions". JAOA: Journal of the American Osteopathic Association. 114 (5): 368–373. doi:10.7556/jaoa.2014.035. ISSN 0098-6151. PMID 24778001.
  2. ^ Anwesh Chatterjee, Robin M.T. Cooke, Ian Furst, James Heilman: Is Wikipedia’s medical content really 90% wrong? Cochrane blog, 23 June 2014

Discuss this story

Test of 300k citations: how verifiable is "verifiable" in practice?

  • Regarding Test of 300k citations: how verifiable is "verifiable" in practice?: Just a few thoughts while reading: Hope no one misextrapolates to ideas like "we shouldn't cite any source that isn't free (costless)". Most high-quality books post-1922 and many (probably most) high-quality journals are not free (costless). Regarding ISBNs, in my experience, most books from before circa 1970 (when ISBNs began) do not have any ISBN retroactively assigned. If a new print run or edition occurred, that's the only way an associated ISBN will exist. As for LCCN and OCLC, those are often available, at least for English-language books. Regarding DOIs, I know from experience that encountering journal articles that don't have a DOI is a common occurrence, especially with articles from before circa 2005. My point with all this is that the goals are laudable (use identifiers when possible, cite free sources when possible), but "when possible" is the key phrase, and it isn't always possible. Quercus solaris (talk) 02:45, 8 March 2016 (UTC)
  • Quercus solaris' comment is absolutely spot on. Verifiability is an important policy but it is a means to an end. The Wikimedia Foundation mantra is "Our mission is to provide free access to the sum of all human knowledge." - taking human knowledge that would otherwise have to be paid for and making it freely available is an important part of that. If WP:V, or indeed any of our policies, becomes detrimental to achieving that mission then it is the policy (or its implementation), not the mission statement, that needs to be reconsidered. WaggersTALK 10:30, 8 March 2016 (UTC)
  • I don't agree that a paywalled journal article is practically unverifiable. Journals are available through libraries, which can also obtain books on inter-library loan. I am not sure how often readers (as opposed to reviewers) are interested in the sources. Whereas the whole idea is that we are making hard-to-find knowledge widely available. I have spent a lot of time tracking down those hard-to-find sources. Hawkeye7 (talk) 10:32, 8 March 2016 (UTC)
  • Actually they are discussing the "accessibility of verification". In their text they originally use the term "practical accessibility" but then jump to "practical verifiability". I think this is a mistake. Likewise their scoring system is unsound, in that accessibility is as much a feature of the querent as of the Wikipedia page, i.e. their research would be much more useful if we could input our own weighting into their scale. e.g. I live in London and have free access to the British Library. Lack of an ISBN number in a pre-1970 book is not a barrier to access, although lack of disposable free time could be. This would then show that accessibility is not evenly distributed through society, and should also include access to electricity and access to the internet. This is not to say their research is useless, far from it, but it needs to be put in a social context and a more robust methodological framework. i.e. Wikipedia constitutes an apparatus (see Karen Barad) with which we can examine accessibility and if we then place other factors (power, connectivity, disposable free time, data costs, disposable money) outside and around their methodology, rather than assuming an unexamined concept of the querent embodying unstated a priori assumptions. Leutha (talk) 11:12, 8 March 2016 (UTC)
    • Those are very good points, Leutha. "Anyone" has generally been understood to include only "anyone with the time, energy, and interest to actually make a serious attempt", not just "anyone who can click a link". WhatamIdoing (talk) 04:52, 21 March 2016 (UTC)
  • ISBN numbers are nothing more or less than barcodes for booksellers. At best, the inclusion of such trivia is needless duplication of the actual information needed to locate a book (author, title, publisher, publication date). At worst it is publisher spam that clogs footnotes, making essential information less easy to read and internalize. It is ridiculous to posit that ISBN numbers are in any way a metric of verifiability. Carrite (talk) 13:31, 8 March 2016 (UTC)
    • While presence or absence of ISBNs is a weak proxy for verifiability, they are still useful for bibliographic reasons. All the best: Rich Farmbrough, 14:34, 10 March 2016 (UTC).
  • Agree with all these: "verifiable" was never intended to mean "verifiable by anyone", or we would be restricted to web refs (and google books previews are often only available in some countries and not others). Especially in places like the Medical wikiproject, or individual talk-pages, you often see requests from those without library or journal access to supply or check things, which if put in the right place are usually successful. There's a central page for such requests somewhere. Johnbod (talk) 14:23, 9 March 2016 (UTC)
  • As the reviewer, I'm very happy to see this great discussion (we have also notified the paper's authors of it, as we routinely do with all reviewed publications). I agree that WP:V is often interpreted as being agnostic about what the authors call practical and technical verifiability. However (and that's why the review said that they take the policy "literally"), they correctly refer to its first sentence, which in its current version reads:
"In Wikipedia, verifiability means that anyone using the encyclopedia can check that the information comes from a reliable source."
That directly contradicts Johnbod's comment above (although of course it does not say that doing so has to be equally easy for everyone). Also, in defense of the authors with regard to Leutha's valid point, they do acknowledge when talking about paywalled papers that accessibility depends on the querent ("someone without the additional means").
I agree with Waggers that verifiability is a means to an end: maintaining or improving the quality of the information on Wikipedia. But IMHO that is an argument for taking concerns about practical verifiability more seriously, based on my own experience as a volunteer editor who has for many years spent a lot of his editing time upholding said quality by vetting edits, often by looking up what the cited source says. It is frequently overlooked that the accuracy of information in Wikipedia is not solely a function of the accuracy of the source that was cited initially, but also depends on how effectively this information is subsequently being protected from being adulterated intentionally or unintentionally (or from being mis-cited in the first place - many hoaxers have had success by citing sources that sounded highly reliable but were very hard to access). In that respect, sources that are paywalled or otherwise difficult to access actually do lead to inferior information quality in the long run, compared to open access sources containing the same information.
Regards, Tbayer (WMF) (talk) 02:54, 10 March 2016 (UTC)
See "The Resource Exchange is a WikiProject dedicated to organizing and sharing the vast resources available to Wikipedians, to aid in verification."
The interpretation of "anyone using the encyclopedia can check" has always been pragmatic. It is perfectly admissible, for example, to cite text on a plaque that is displayed in public. That does not mean that it is practical for anyone to go there and read it, just as documents in Kew or the British Library may not be digitally available. But it is certainly possible, in a hypothetical sense. What is not verifiable is "personal knowledge", "personal communication", "what I saw through my microscope", "what a friend told me" or "what I read through magic glasses". In other media these are all perfectly good sources. Not here. That is what the verifiability doctrine is about.
All the best: Rich Farmbrough, 14:42, 10 March 2016 (UTC).
It is also usually interpreted to mean publicly available, generally meaning published, or at least in a public library archive, rather than privately held as papers, private polling data, commercial documents and records, unreleased government papers, unpublished research etc. Johnbod (talk) 15:55, 10 March 2016 (UTC)
Tilman, could you pass a link to Wikipedia:Reliable sources/Cost to the authors? This supplement to the policy explains in a fairly direct way exactly how limited that "anyone" is. WhatamIdoing (talk) 04:52, 21 March 2016 (UTC)
WhatamIdoing, we already pointed them to this review and the talk page (as we routinely do); you can find the email address of the corresponding author on the first page in case you have further input for them.
That said, I'm not super certain how important it is to wikilawyer with them about the exact interpretation of the policy - as already indicated in the headline and first sentence of the review, I think it's more useful to take their rather literal interpretation as a starting point that leads to some empirical results that are interesting and relevant in their own right. Regards, Tbayer (WMF) (talk) 21:01, 27 March 2016 (UTC)
Speaking again as an editor who frequently attempts to check cited sources, the Resource Exchange is an awesome project and certainly useful, but too cumbersome in many situations. The Wikipedia Library gives more direct access, although its coverage is not universal either. Regards, Tbayer (WMF) (talk) 21:01, 27 March 2016 (UTC)
  • See the discussion on "effective use" in Community informatics. Also see this talk by Michael Gurstein (from 18:50). What he says about Open Data also applies to Open Access. Leutha (talk) 11:29, 21 March 2016 (UTC)
  • It troubles me a bit when people seem to have a simplistic idea that *all* information can/should cost nothing. It amounts to a claim that *all* intellectual property is theft (echoing "Property is theft!"). But realistically, consider people who spend months of time, and travel expenses, researching a nonfiction book, such as a history or biography. How does an enterprise like that get paid for if that author can't get a book advance from a publishing company? An advance that can only be paid for by future sales revenue of the book? Well maybe that author could pay out of his own pocket for his research, people might reply. Yes, maybe; maybe. But maybe it is dampening/constraining to the output/production of good new information if we insist that it *all* must cost nothing. Please understand that in general I am a fan of open access journals and affordable books. But I am also realistic about the downsides of a situation where no one can get paid to research things, write explanations of things, edit such writing, and so on. Quercus solaris (talk) 00:26, 12 March 2016 (UTC)
I may need to reread the paper, but I don't recall the authors advocating for that simplistic idea. (And the whole debate about open access is more nuanced than that, for sure; plus your comparison with that anarchist slogan seems to rely on a problematic equating of physical and intellectual property.) It's certainly possible to acknowledge that e.g. scientific researchers need to get paid while at the same time not denying that citing paywalled papers on Wikipedia (even if they may sometimes be of higher quality than freely available alternatives) can also have detrimental effects on Wikipedia's long-term information quality, by making the volunteer work of checking the accuracy of edits much harder. Regards, Tbayer (WMF) (talk) 21:01, 27 March 2016 (UTC)
  • The paper has "ISBN numbers can be checked numerically for validity using check-digit algorithms for either their 10 or 13 digit versions [23]. ISBNs found with Wikipedia citations in the ‘book’ reference type specified in the Wikipedia markup were tested according to these algorithms. Out of 37,269 book citations, 29,736 book citations (79.8%) had valid ISBNs, while 3,145 (8.4%) of book citations had invalid ISBNs ..."
    I have checked thousands of ISBNs, and in my experience the fraction with invalid check digits is a lot less than 8%. The paper used the WP dump of 7 July 2014, and refers to the article on Glycerol as having an invalid ISBN. ("... , the greatest gain in article rank was a 3,318 spot jump by “Glycerol” from rank 3,891 to rank 573. This article’s only ISBN was invalid ...") The ISBN was added here, and seems to have been unchanged at 7/7/14, and now, as ISBN 3527306730. It is valid. Am I missing something? Mr Stephen (talk) 22:19, 27 March 2016 (UTC)
    • Mr Stephen, the ISBN is correct, and the entry appears in WorldCat[1]. Looking at the paper, it appears that their idea of a "valid ISBN" is any ISBN that has a valid checksum digit ("ISBN numbers can be checked numerically for validity using check-digit algorithms for either their 10 or 13 digit versions"), and that they used http://www.hahnlibrary.net/libraries/isbncalc.html to validate the checksums. This one is correct according to the resource they used to test checksums (and others). Perhaps they had a problem with their script? WhatamIdoing (talk) 00:04, 28 March 2016 (UTC)
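For reference, the standard check-digit algorithms being discussed can be sketched as follows. This is a minimal illustration of the textbook ISBN-10 and ISBN-13 checksum rules, not the paper's actual script (which is not available); the helper names are made up for this sketch.

```python
def isbn10_valid(isbn: str) -> bool:
    """ISBN-10: the digits weighted 10 down to 1 ('X' counts as 10 in the
    final position) must sum to a multiple of 11."""
    s = isbn.replace("-", "").replace(" ", "").upper()
    if len(s) != 10:
        return False
    total = 0
    for i, ch in enumerate(s):
        if ch == "X" and i == 9:
            digit = 10
        elif ch.isdigit():
            digit = int(ch)
        else:
            return False
        total += (10 - i) * digit
    return total % 11 == 0

def isbn13_valid(isbn: str) -> bool:
    """ISBN-13: the digits weighted alternately 1 and 3 must sum to a
    multiple of 10."""
    s = isbn.replace("-", "").replace(" ", "")
    if len(s) != 13 or not s.isdigit():
        return False
    total = sum((1 if i % 2 == 0 else 3) * int(ch) for i, ch in enumerate(s))
    return total % 10 == 0

# The Glycerol citation's ISBN discussed above:
print(isbn10_valid("3527306730"))  # True: the weighted sum is 209 = 19 * 11
```

By either rule the disputed ISBN 3527306730 checks out, which supports the suspicion above that the discrepancy lies in the paper's extraction or validation script rather than in the checksum algorithm itself.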

PS: the study has just been featured in The Atlantic, where the authors also propose some sort of browser plugin that displays a rating of each citation's practical verifiability. Regards, Tbayer (WMF) (talk) 19:44, 22 April 2016 (UTC)

  • Regarding paid labor, any dialog on this issue should incorporate Hexatekin's great article, "Labor and the New Encyclopedia," and her discussion of free digital labor, especially as it relates to Wikipedia. I understand that Wikipedia will never pay its editors, but I think there should be less mystery and enthralled devotion to the concept of donating your time and efforts for free -- especially given the complicated issues that Wikimedia faces going forward. -- BrillLyle (talk) 03:04, 8 March 2016 (UTC)
@BrillLyle: @Hexatekin: Indeed, thanks for the link, which would have made a nice addition to the review! There is also an interesting recent blog post (in French) by Alexander Doria which expresses skepticism at the notion that Wikipedia editing tasks (specifically, RC patrol) can be understood as "digital labor" in the same sense as Amazon Mechanical Turk tasks: https://scoms.hypotheses.org/625
Regards, Tbayer (WMF) (talk) 21:10, 27 March 2016 (UTC)
@Tbayer (WMF): Thank you for sharing this article. I wish I spoke French but I think I get the general idea. Maybe the terminology of "digital labor" has different definitions, iterations, etc. I wonder how best to describe Wikipedia editing. It sure feels like free labor to me! Thanks again for the link and the thoughts... -- Erika aka BrillLyle (talk) 23:45, 27 March 2016 (UTC)

The attention economy of Wikipedia articles on news topics

  • Chart (b) looks like a pregnancy scan. I can see a head near the top!  — Amakuru (talk) 09:54, 8 March 2016 (UTC)
And the other bump? Collect (talk) 14:57, 10 March 2016 (UTC)

Life Expectancy

  • It appears that most academics do not achieve "notability" until work done well after their 30s, while most athletes who are not famous by their 30s never achieve fame, and few artists achieve fame (other than Grandma Moses) after their 30s. Thus, one would expect the results reported without even looking at Wikipedia :(. In short, that study appears to verge on the "Captain Obvious" level. What they ought to have done was look at people who reached at least (say) the age of 50, and determine the life expectancy of groups from that point. Famous academics who died before the age of 30 are close to a null set, as far as I can tell. Collect (talk) 14:57, 10 March 2016 (UTC)
    You mean somebody like Harry K. Daghlian, Jr., Évariste Galois or Henry Moseley? Hawkeye7 (talk) 23:34, 10 March 2016 (UTC)
    By numbers, most famous academics have been older than 30 when they achieved their fame -- that you can find exceptions is worthless - I did not say "all." The average NFL player in 2013 was under 26 years old - with the average for the oldest team under 28.USA Today AIP stats have the average age of doctoral recipients in Physics in the US being over 30 in 2011. Average age at which the Nobel Prize is awarded: 59.NobelPrize.org So yes - the average athlete becomes famous at a much younger age than "academics" become famous. Collect (talk) 00:41, 11 March 2016 (UTC)

The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0