The Signpost

Recent research

Barnstars work; Wiktionary assessed; cleanup tags counted; finding expert admins; discussion peaks; Wikipedia citations in academic publications; and more

By Lambiam, Piotr Konieczny, Jodi.a.schneider, Amir E. Aharoni, Dario Taraborelli, Tilman Bayer, Steven Walling, Giovanni Luca Ciampaglia and Protonk
A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, edited jointly with the Wikimedia Research Committee and republished as the Wikimedia Research Newsletter.
The relative number of edits by Wikipedians who had randomly received barnstars (red) and by the control group whose members hadn't (blue).

Recognition may sustain user participation

To gain insight into what makes Wikipedia tick, two researchers from the Sociology Department at Stony Brook University conducted an experiment with barnstars.[1] They were surprised by what they found.

Professor Arnout van de Rijt and graduate student Michael Restivo wanted to test the hypothesis that receiving recognition for one's work in an informal, peer-based environment such as Wikipedia has a positive effect on productivity. To do so, they identified the top 1% most productive English Wikipedia users among the currently active editors who had yet to receive their first barnstar. From that group they took a random sample of 200 users, which they randomly split into an experimental group and a control group of 100 users each. They awarded a barnstar to each user in the experimental group; the users in the control group received none. The researchers found their hypothesis confirmed: the productivity of the users in the experimental group was significantly higher than that of the control group. What really surprised the researchers was how long-lasting the effect was: following the two groups for 90 days, they observed that the increased contribution level of the barnstar recipients persisted, almost unabated, for the full observation period.
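The experimental design is simple enough to express in a few lines of code. The following minimal Python sketch illustrates the random assignment and the before/after productivity comparison described above; the editor sample and the edit-counting function are hypothetical stand-ins, not the authors' actual code or data:

```python
import random
import statistics

# Hypothetical sample: 200 editors drawn from the top 1% most productive
# active editors who have never received a barnstar.
sample = [f"editor_{i}" for i in range(200)]

# Randomly split into an experimental group (receives a barnstar)
# and a control group (does not), 100 editors each.
random.shuffle(sample)
experimental, control = sample[:100], sample[100:]

def edits_in_window(editor, days=90):
    """Placeholder: count an editor's edits over the observation window.
    A real study would query the wiki's database or API here; this
    returns dummy numbers purely for illustration."""
    return random.randint(0, 500)

# Compare mean productivity of the two groups over the 90-day window.
exp_mean = statistics.mean(edits_in_window(e) for e in experimental)
ctl_mean = statistics.mean(edits_in_window(e) for e in control)
print(f"barnstar group: {exp_mean:.1f} edits; control: {ctl_mean:.1f} edits")
```

Because assignment to the two groups is random, any systematic difference in subsequent edit counts can be attributed to the barnstar itself rather than to pre-existing differences between the editors.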

One major factor the experiment did not take into account is who delivers the barnstars, and whether it matters if the giver is an anonymous, registered, or well-known member of the Wikipedia community. During the experiment, a thread on the Administrators' noticeboard/Incidents page noted that a seemingly random IP editor was "handing out barnstars", which aroused some suspicion among Wikipedians. The thread was closed after User:Mike Restivo confirmed that he had accidentally logged out while delivering the barnstars. He did not, however, declare his status as a researcher, and the paper does not disclose that the behavior was considered unusual enough to prompt such a discussion.

Can Wiktionary rival traditional lexicons?

Wiktionary received an extensive assessment[2] as a potential rival to expert-built lexicons.

A chapter titled "Wiktionary: a new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography"[2], in a collection on electronic lexicography forthcoming from Oxford University Press, contains a description and critical assessment of Wikipedia's second-oldest sister project (which will celebrate its 10th anniversary in December this year), calling collaborative lexicography a "fundamentally new paradigm for compiling lexicons".

The article describes the technical and community features of Wiktionary in detail. Though it is not immediately apparent, its focus is on several language editions, not just English (as is often the case in research about Wikipedia and its sister projects). The article gives a comprehensive account of how the world's languages are covered by the various Wiktionary language editions. A critical analysis of Wiktionary's content follows, opening with what appears to be a thorough statistical comparison with other dictionaries and wordnets, including an examination of the overlap in the lexemes covered, which the authors found to be surprisingly small (a toy illustration of such an overlap computation follows the table below).

Number of native terms (p. 17):

                     Wiktionary   wordnets                     Roget's Thesaurus   OpenThesaurus
English language     352,865      148,730 (WordNet)            59,391              –
German language      83,399       85,211 (GermaNet)            –                   58,208
Russian language     133,435      130,062 (Russian WordNet)    –                   –
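At its core, such an overlap analysis reduces to set comparisons over headword lists. The sketch below shows the idea in Python with tiny hypothetical word lists standing in for real dictionary dumps; the authors' methodology, which must match lexemes across differently structured resources, is considerably more involved:

```python
# Minimal sketch of a lexeme-overlap comparison between two dictionaries.
# These word lists are hypothetical stand-ins for, e.g., Wiktionary and
# WordNet headword inventories extracted from full dumps.
wiktionary = {"cat", "dog", "run", "barnstar", "serendipity"}
wordnet = {"cat", "dog", "run", "serendipity", "ontology"}

shared = wiktionary & wordnet                      # lexemes covered by both
jaccard = len(shared) / len(wiktionary | wordnet)  # similarity of coverage
print(f"{len(shared)} shared lexemes; Jaccard overlap {jaccard:.2f}")
```

The surprisingly small overlaps reported in the chapter suggest that the resources complement rather than duplicate one another.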

The article notes an important characteristic of open wiki projects: they allow "updating of the lexicons immediately, without being restricted to certain release cycles as is the case for expert-built lexicons" (p. 18). Though obvious to experienced Wikimedians, this characteristic is frequently overlooked. The discussion of the organization of polysemy and homonymy is comprehensive, although limited to the English Wiktionary; other language editions may handle these differently. The observation that "it is a serious problem to distinguish well-crafted entries from those that need substantial revision by the community" is good constructive criticism. The paragraphs about "sense ordering" make some vague claims (e.g. "Although there is no specific guideline for the sense ordering in Wiktionary, we observed that the first entry is often the most frequently used one") which could be interesting and useful from a community perspective, but offer little actionable evidence and should be investigated further. The paper's conclusions identify some of the features that enable Wiktionary to rival expert-built lexicons: "We believe that its unique structure and collaboratively constructed contents are particularly useful for a wide range of dictionary users", listing eight such groups – among them "Laypeople who want to quickly look up the definition of an unknown term or search for a forum to ask a question on a certain usage or meaning."

On a critical note, the last paragraph says "we believe that collaborative lexicography will not replace traditional lexicographic theories, but will provide a different viewpoint that can improve and contribute to the lexicography of the future. Thus, Wiktionary is a rival to expert-built lexicons – no more, no less", which sounds a bit contradictory. The authors also note that "Lepore (2006: 87) raised a criticism about the large-scale import of lexicon entries from copyright-expired dictionaries such as Webster's New International Dictionary". At least a short explanation of the problem Lepore described would have been welcome, since the cited article[3] mentions Wiktionary only very briefly. For the most part, the article is a good academic-grade presentation of Wiktionary: it is quite general and does not dive deep into details, and while it makes a few vague statements, these offer a good starting point for further research.

Wikipedia as an academic publisher?

Can Wikipedia integrate with open-access scholarly publishing?

Xiao and Askin (2012) examined whether academic papers could be published on Wikipedia.[4] The paper compares Wikipedia's publishing process with that of an open-access journal, concluding that Wikipedia's model of publishing research seems superior, particularly in terms of publicity, cost and timeliness.

The biggest challenges for academic contributions to Wikipedia, they found, revolve around the level of acceptance of Wikipedia in academia, poor integration with academic databases, and technical and conceptual differences between an academic article and an encyclopedic one. However, the paper suffers from several problems. It correctly observes that the closest a Wikipedia article comes to a "final", fully peer-reviewed status is after passing the featured article candidate process, but it makes no mention of the intermediate steps in Wikipedia's assessment scheme, such as B-class, Good Article and A-class reviews; nor is the assessment project itself mentioned. Despite its focus on the featured-article process, no previous academic work on featured articles is cited (although quite a few studies have been published). Crucially, the paper disregards the most relevant of Wikipedia's policies, no original research. The study thus fails to consider whether Wikipedia would want to publish academic articles at all without changes bringing them closer to encyclopedic style – a question that has already arisen numerous times on the site, in particular in the difficulties encountered by some educational projects. In the end, the paper, while well-intentioned, seems to illustrate that university researchers can have a quite different understanding of what Wikipedia is from that of those more closely connected with the project.

In other news, however, a scientific journal appears to have found a viable way to publish peer-reviewed articles on Wikipedia: the open-access journal PLoS Computational Biology has announced[5] that it is starting to publish "Topic Pages" – peer-reviewed texts about specific topics that appear both in the journal and as new articles on Wikipedia, in the hope that the Wikipedia versions will be updated and improved by the community. The first example is about circular permutation in proteins.

Wikipedia citations in American law reviews

Volume 1 of the Harvard Law Review (1887–1888).

The article "A Jester's Promenade: Citations to Wikipedia in Law Reviews , 2002–2008" concerns the issue of citations of Wikipedia in US law reviews and the appropriateness of this practice.[6] The article seems to be well researched, and its author, law reference/research librarian Daniel J. Baker, demonstrates familiarity with the mechanics of Wikipedia (such as the permanent links). For the period 2002–08, Baker identified 1540 law-review articles that contain at least one citation of Wikipedia – most in law reviews dealing with general and "popular" subject matter, with a significant proportion originating from authors with academic credentials.

The article notes that 2006 marked the peak of that trend, attributing it – with some demonstrated familiarity with Wikipedia's history – to a delayed reaction to the Seigenthaler incident and the Essjay controversy. (Since the article's data analysis ends in 2008, whether the trend has rebounded in recent years is left unanswered.)

The author is highly critical of Wikipedia's reliability, arguing that a source that "anyone can edit" – and where much of the information is unverified – should not be used in works that may influence legal decisions. Baker therefore calls for stricter rules in legal publishing, in particular that Wikipedia should not be cited. In a more surprising argument, the paper suggests that if information exists on Wikipedia, it should be treated as common knowledge and thus requires no referencing at all (a recommendation following a 2009 one – Brett Deforest Maxfield, "Ethics, politics and securities law: how unethical people are using politics to undermine the integrity of our courts and financial markets", 35 OHIO N.U. L. REV. 243, 293 (2009)). This, however, raises the question of whether no citation at all is truly better than a citation to Wikipedia: if the recommendation were followed, law reviews could see a proliferation of uncited claims assumed, without any verification, to rest on the "common knowledge" represented in the very Wikipedia that is not to be cited.

One in four articles tagged as flawed, most often for verifiability issues

A paper titled "A Breakdown of Quality Flaws in Wikipedia"[7] examines cleanup tags on the English Wikipedia (using a January 2011 dump), finding that 27.53% of articles are tagged with at least one of 388 different cleanup templates. In a 2011 conference poster[8] (a version of which was summarized in an earlier edition of this newsletter), the authors, together with a third collaborator, had analyzed a 2010 dump of the English Wikipedia for a smaller set of tags, arriving at a much lower ratio: "8.52% [of articles] have been tagged to contain at least one of the 70 flaws". Using a classification of Wikipedia articles into 24 overlapping topic areas (derived from Category:Main topic classifications), the highest ratios of tagged articles were found in the "Computers" (48.51%), "Belief" (46.33%) and "Business" (39.99%) topics; the lowest in "Geography" (19.83%), "Agriculture" (22.57%) and "Nature" (23.93%). Of the 388 tags on the more complete list, "307 refer to an article as a whole and 81 to a particular text fragment". As another original contribution, the authors organize the existing cleanup tags into "12 general flaw types" – the most frequent being "Verifiability" (19.46% of articles have been tagged with one of the corresponding templates), "Wiki tech" (e.g. the "orphan", "wikify" or "uncategorized" templates; 5.47% of articles) and "General cleanup" (2.01%).
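To give a sense of the mechanics, a rough version of such a tagging census can be sketched in a few lines of Python. The template list and the articles below are hypothetical stand-ins; the paper's actual extraction pipeline over the full dump is more sophisticated:

```python
import re

# Hypothetical subset of the 388 cleanup template names studied in the paper.
CLEANUP_TEMPLATES = {"citation needed", "unreferenced", "orphan", "wikify",
                     "uncategorized", "cleanup"}

# Capture a template's name: everything after "{{" up to the first "|" or "}".
TEMPLATE_RE = re.compile(r"\{\{\s*([^|}]+)")

def is_tagged(wikitext):
    """True if the article's wikitext transcludes any cleanup template."""
    for match in TEMPLATE_RE.finditer(wikitext):
        if match.group(1).strip().lower() in CLEANUP_TEMPLATES:
            return True
    return False

def tagged_ratio(articles):
    """Share of articles carrying at least one cleanup tag,
    where `articles` is any iterable of wikitext strings."""
    tagged = total = 0
    for text in articles:
        total += 1
        tagged += is_tagged(text)
    return tagged / total if total else 0.0

# Toy run on two articles: one tagged, one clean.
demo = ["{{Unreferenced|date=January 2011}} Some article text.",
        "A perfectly fine article with no tags."]
print(f"{tagged_ratio(demo):.0%} of articles tagged")  # -> 50%
```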

Time evolution of Wikipedia discussions

Kaltenbrunner and Laniado examine the time evolution of Wikipedia discussions and how it correlates with editing activity, based on 9.4 million comments from the March 12, 2010 dump.[9] Peaks in commenting and peaks in editing often co-occur within two days (63% of the time for sufficiently large peaks of at least 20 comments). The authors list the articles with the longest comment peaks and the most edit peaks, as well as the 20 slowest and 20 fastest discussions.

The authors note that a single heavy editor can be responsible for edit peaks but not comment peaks; peaks in discussion activity seem to indicate more widespread interest by multiple people. They find that "the fastest growing discussions are more likely to have long lasting edit peaks" and that some editing peaks are associated with event anniversaries. Using the Barack Obama article as a case study, they show peaks in comments and editing due to news events as well as to internal Wikipedia events (such as an editor poll or article protection). Current events are often edited and discussed in near real-time, in contrast to articles about historical or scientific facts.
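A simplified version of the peak co-occurrence measurement might look as follows. This is a sketch under assumed definitions (daily activity counts as input, a fixed threshold of 20 for what counts as a peak, and a ±2-day matching window), not the paper's actual peak-detection algorithm:

```python
def peaks(daily_counts, threshold=20):
    """Days whose activity meets the threshold (simplistic peak definition)."""
    return [day for day, n in enumerate(daily_counts) if n >= threshold]

def co_occurrence(comment_counts, edit_counts, window=2, threshold=20):
    """Fraction of comment peaks that have an edit peak within `window` days."""
    comment_peaks = peaks(comment_counts, threshold)
    edit_peaks = set(peaks(edit_counts, threshold))
    if not comment_peaks:
        return 0.0
    matched = sum(1 for c in comment_peaks
                  if any(abs(c - e) <= window for e in edit_peaks))
    return matched / len(comment_peaks)

# Toy daily activity series for one article (hypothetical numbers).
comments = [2, 25, 3, 1, 0, 30, 2]
edits    = [5, 4, 40, 2, 1, 1, 35]
print(f"{co_occurrence(comments, edits):.0%} of comment peaks matched")
```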

They use the h-index to assess the complexity of a discussion, and they chart the growth rate of discussions. For instance, they find that the discussion pages of the three most recent US presidents show constant growth in complexity, but at varying rates: Bill Clinton's talk page took 332 days to increase its h-index by one, while George W. Bush's took only 71 days.
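For readers unfamiliar with the measure: adapted to threaded discussions, the h-index can be computed from each comment's nesting depth, by analogy with Hirsch's citation index. The sketch below assumes the definition "the largest h such that at least h comments sit at depth h or deeper"; consult the paper for the authors' exact formulation:

```python
def discussion_h_index(comment_depths):
    """h-index of a talk-page thread: the largest h such that at least
    h comments have nesting depth >= h (depth 1 = top-level comment).
    Analogous to Hirsch's index, with depths in place of citation counts."""
    depths = sorted(comment_depths, reverse=True)
    h = 0
    for i, depth in enumerate(depths, start=1):
        if depth >= i:
            h = i
        else:
            break
    return h

# A flat thread (many top-level comments) vs. a deeply nested exchange.
print(discussion_h_index([1, 1, 1, 1, 1]))  # -> 1
print(discussion_h_index([1, 2, 3, 4, 4]))  # -> 3
```

Under this definition, a discussion scores highly only when many comments are deeply nested, so a rising h-index reflects sustained back-and-forth rather than a flurry of disconnected remarks.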

They envision more sophisticated algorithms to track the relative growth of edits and discussions. Their ideas for future work are intriguing – for instance, the question of how to determine article maturity and the level of consensus from the network dynamics. (AcaWiki summary)

APWeb2012 papers on admin networks, mitigating language bias and finding "minority information"

Several of the papers accepted at this month's Asia-Pacific Web Conference (APWeb 2012) concerned Wikipedia:

Briefly

Biographical social network of the connections between persons present in at least 13 of the 15 largest language Wikipedias, as described in Aragón et al.[29]

References

  1. ^ Restivo, M., & van de Rijt, A. (2012). Experimental Study of Informal Rewards in Peer Production. PLoS ONE 7(3): e34358. PDF DOI Open access icon
  2. ^ a b Meyer, C. M., & Gurevych, I. (2012). Wiktionary: a new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography. In S. Granger & M. Paquot (Eds.), Electronic Lexicography. Oxford: Oxford University Press. PDF Open access icon
  3. ^ Lepore, J. (2006). Noah's Mark. The New Yorker, November 6, 2006, pp. 78–86. HTML Open access icon
  4. ^ Xiao, L., & Askin, N. (2012). Wikipedia for Academic Publishing: Advantages and Challenges. Online Information Review, 36(3), 2. Emerald Group Publishing Limited. HTML Closed access icon
  5. ^ Wodak, S. J., Mietchen, D., Collings, A. M., Russell, R. B., & Bourne, P. E. (2012). Topic Pages: PLoS Computational Biology Meets Wikipedia. PLoS Computational Biology, 8(3): e1002446. Bibcode:2012PLSCB...8E2446W. doi:10.1371/journal.pcbi.1002446. PMC 3315447. PMID 22479174. Open access icon
  6. ^ Baker, D. J. (2012). A Jester's Promenade: Citations to Wikipedia in Law Reviews, 2002–2008. I/S: A Journal of Law and Policy for the Information Society, 7(2): 1–44. PDF Open access icon
  7. ^ Anderka, M., & Stein, B. (2012). A breakdown of quality flaws in Wikipedia. Proceedings of the 2nd Joint WICOW/AIRWeb Workshop on Web Quality – WebQuality '12 (p. 11). New York: ACM Press. DOI PDF Open access icon
  8. ^ Anderka, M., Stein, B., & Lipka, N. (2011). Towards automatic quality assurance in Wikipedia. Proceedings of the 20th International Conference Companion on World Wide Web – WWW '11. New York: ACM Press. DOI PDF Open access icon
  9. ^ Kaltenbrunner, A., & Laniado, D. (2012). There is No Deadline – Time Evolution of Wikipedia Discussions. ArXiv. Computers and Society; Physics and Society. PDF Open access icon
  10. ^ Yousaf, J., Li, J., Zhang, H., & Hou, L. (2012). Exploration and Visualization of Administrator Network in Wikipedia. In Q. Z. Sheng, G. Wang, C. S. Jensen, & G. Xu (Eds.), Web Technologies and Applications, Lecture Notes in Computer Science 7235: 46–59. Berlin, Heidelberg: Springer. DOI Closed access icon
  11. ^ Hattori, Y., & Nadamoto, A. (2012). Search for Minority Information from Wikipedia Based on Similarity of Majority Information. In Q. Z. Sheng, G. Wang, C. S. Jensen, & G. Xu (Eds.), Web Technologies and Applications, Lecture Notes in Computer Science 7235: 158–169. Berlin, Heidelberg: Springer. DOI Closed access icon
  12. ^ Fujiwara, Y., Suzuki, Y., Konishi, Y., & Nadamoto, A. (2012). Extracting Difference Information from Multilingual Wikipedia. In Q. Z. Sheng, G. Wang, C. S. Jensen, & G. Xu (Eds.), Web Technologies and Applications, Lecture Notes in Computer Science 7235: 496–503. Berlin, Heidelberg: Springer. DOI Closed access icon
  13. ^ a b Halfaker, A. (2012). Kids these days: the quality of new Wikipedia editors over time. Wikimedia Foundation blog. HTML Open access icon
  14. ^ Ciampaglia, G. L. (2011). User participation and community formation in peer production systems. PhD thesis, Università della Svizzera italiana. PDF Open access icon
  15. ^ a b Hyland, A. (2012). Comparing article quality by article class and article feedback ratings. Wikipedia. HTML Open access icon
  16. ^ Huggett, S. (2012). The influence of free encyclopedias on science. Research Trends, (27). HTML Open access icon
  17. ^ Elder, D., Westbrook, R. N., & Reilly, M. (2012). Wikipedia Lover, Not a Hater: Harnessing Wikipedia to Increase the Discoverability of Library Resources. Journal of Web Librarianship, 6(1), 32–44. Routledge. DOI Closed access icon
  18. ^ Karkulahti, O., & Kangasharju, J. (2012). Surveying Wikipedia activity: Collaboration, commercialism, and culture. The International Conference on Information Networking 2012 (pp. 384–389). IEEE. DOI Closed access icon
  19. ^ Yasseri, T., Kornai, A., & Kertész, J. (2012). A practical approach to language complexity: a Wikipedia case study. ArXiv. Computation and Language. PDF Open access icon
  20. ^ Graham, M. (2012). Mapping Wikipedia edits from South America. Zero Geography. HTML Open access icon
  21. ^ Lincoln, M. (2012). Death and Change Tracking: Wikipedia Edit Bursts. PDF Open access icon
  22. ^ Atzori, M., & Zaniolo, C. (2012). SWiPE: Searching Wikipedia by Example. Proceedings of the 21st International Conference Companion on World Wide Web – WWW '12 Companion (p. 309). New York: ACM Press. DOI PDF Open access icon
  23. ^ DiStaso, M. W. (2012). Measuring Public Relations Wikipedia Engagement: How Bright is the Rule? Public Relations Journal, 6(2). HTML Open access icon
  24. ^ Gray, D. M., & Peltier, J. (2012). Is Wikipedia Reliable Tool for Marketing Educators and Students? A Surprising Heck Yes! Marketing Always Evolving. 34th Annual International Collegiate Conference. PDF Open access icon
  25. ^ Leithner, A., Maurer-Ertl, W., Glehr, M., Friesenbichler, J., Leithner, K., & Windhager, R. (2012). Wikipedia and Osteosarcoma: An educational opportunity and professional responsibility for EMSOS. Journal of Bone & Joint Surgery (Br), 94-B(SUPP XIV), 13. British Editorial Society of Bone and Joint Surgery. HTML Closed access icon
  26. ^ Sormunen, E., Eriksson, H., & Kurkipää, T. (2012). Wikipedia and wikis as forums of information literacy instruction in schools. The Road to Information Literacy: Librarians as Facilitators of Learning. IFLA 2012 Congress Satellite Meeting (pp. 1–23). PDF Open access icon
  27. ^ Peng, H.-K., Zhang, Y., Pirolli, P., & Hogg, T. (2012). Thermodynamic Principles in Social Collaborations. ArXiv. Physics and Society. PDF Open access icon
  28. ^ Kane, G. C., & Ransbotham, S. (2012). Collaborative Development in Wikipedia. ArXiv. PDF Open access icon
  29. ^ a b Aragón, P., Kaltenbrunner, A., Laniado, D., & Volkovich, Y. (2012). Biographical Social Networks on Wikipedia – A cross-cultural study of links that made history. ArXiv. Computers and Society; Physics and Society. PDF Open access icon

Discuss this story

  • It appears the PRSA study (a) was bought into the PRSA Journal, i.e. they were paid to include it (b) went through a different "peer review" mechanism to the one usually used, so as to be able to be branded "peer-reviewed." And a CREWE member (Robert Lawton) has explicitly stated his intent to use said "peer review" as an excuse to use it as a "reliable source" for Wikipedia purposes. When I have more details nailed down I'll be making a blog post about this. But the tl;dr is that it was, from its inception, precisely the sort of brazen, cynical PR attempt to warp Wikipedia policies that people worry about from corporate editors - David Gerard (talk) 10:20, 1 May 2012 (UTC)[reply]
Thanks for pointing to the 2010 paper and Signpost article (which I wrote myself). However, both were already mentioned in the text that you removed ("... reported in a 2010 viewpoint article in the Journal of the American Medical Informatics Association (JAMIA) (Signpost coverage)"). From this it should have been obvious that it had been a deliberate decision to include this item; the publication as such is certainly recent enough to be in scope (J Bone Joint Surg Br 2012 vol. 94-B no. SUPP XIV 13); and for better or worse this 2012 abstract will be read by people; it makes sense to give them the context that you noted. Therefore I have reinserted the item (modifying the wording a bit regarding the abstract). Regards, Tbayer (WMF) (talk) 22:42, 1 May 2012 (UTC)[reply]
  • Wow! We have 388 different clean-up tags? I had no idea! --bodnotbod (talk) 12:38, 2 May 2012 (UTC)[reply]
    • We probably have a clean-up tag to clean up clean-up tags! Resolute 20:25, 3 May 2012 (UTC)[reply]
      • I have considered trying to clean it up, but I'm not sure where to start. Probably a RFC or working group, as you wouldn't want to go to TFD/CfD without a very good idea of what to change. Those 388 tags would probably assign article to just as many cats, even ignoring the per month cats. The silliest systems are where BLP templates/cats don't match the non-BLP versions. The WP:CLEANUP WikiProject might need a restart/reinvigoration. The-Pope (talk) 16:56, 4 May 2012 (UTC)[reply]
  • "To test their hypothesis, they determined the top 1% most productive English Wikipedia users among the currently active editors who had yet to receive their first barnstar."
    Well right there they are introducing a strong bias into the selection process by pre-screening high productivity editors who had not received any barnstar-style praise. It is clearly not a representative sample. Ergo I'm pretty dubious about the result. Regards, RJH (talk) 16:06, 4 May 2012 (UTC)[reply]
    • It is a valid result for highly productive editors. You are free to extrapolate this to less productive users, and I'd be interested in an argument why that should be dismissed. Nageh (talk) 17:15, 4 May 2012 (UTC)[reply]
      • It is a valid result for highly productive editors that have never received a barnstar. One might ask why they never received a barnstar. Was it because their work was generally not of a distinctive and/or quality nature? Does the undistinguished nature of their contributions make them predisposed to have a higher need for peer recognition? This is unclear, but mathematically it appears to be a biased selection. (Note that I'm not making claims about the quality of the selection pool: I'm just saying that these are unknown variables that are not accounted for by the control sample.) Regards, RJH (talk) 20:45, 4 May 2012 (UTC)[reply]
        • That is a fair point. Still, I'd be surprised if the result could not be somewhat meaningfully extrapolated to other editor groups (moreover as it may be a bit hard to assess the motivational aspect of barnstars on editors who are more often receiving barnstars anyway). What I am saying is that the result should be interpreted with some healthy bit of caution, but concluding that the result is pretty dubious seems a bit much to me. Nageh (talk) 21:02, 4 May 2012 (UTC)[reply]
          • Fair enough. I agree that peer recognition is good for the project and can be motivational. Guess I'm just getting skeptical at my age. Thanks. Regards, RJH (talk) 21:06, 4 May 2012 (UTC)[reply]

The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0