A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.
"Mind the skills gap: the role of Internet know-how and gender in differentiated contributions to Wikipedia"
This article[1] contributes to the discussion of gender inequalities on Wikipedia. The authors take the novel approach of looking for answers outside the Wikipedia community, thus also tying their research into the analysis of new editor recruitment, motivations, and barriers to contributing. They focus their analysis on the role of Internet experience and skills, and the lack thereof among certain groups. The authors study whether one's level of digital literacy skills is related to one's chance of becoming a Wikipedia editor, by surveying 547 young adults (aged 21–22) – students at a (presumably American) university, the most commonly used convenience sample in academia. The survey was carried out in 2009, with a follow-up wave in 2012. The students were asked about their socioeconomic and demographic background, as well as about their level of digital literacy skills. The authors report that "the average respondent's confidence in editing Wikipedia is relatively low" but that "about one in eight students had been given an assignment in class at some point either to edit or create a new entry on Wikipedia" – which likely suggests that the university (not named by the authors) was one where at least one member of the faculty participated in the Wikipedia:Education Program. The vast majority (99%) of respondents reported having read an entry on Wikipedia, and over a quarter (28%) had some experience editing it (interestingly, even when controlling for students who were assigned to edit Wikipedia, the latter figure is still as high as 20%).
Regarding the gender gap, women were much less likely to have contributed to Wikipedia than men (21% versus 38%), and the divergence grows even larger when controlling for student assignments (13% versus 32%). The authors find indications of a gender gap affecting the likelihood of contributing to Wikipedia: students who are white, economically affluent, male and Internet-experienced are more likely to edit than others. The strongest and only statistically significant predictor variables, however, are Internet skills and gender; regression models show that variables such as race, ethnicity, socioeconomic status, time availability, Internet experience, and confidence in editing Wikipedia are not significant. The authors find that gender becomes more significant as one's digital literacy increases. At a low level of Internet skills, the likelihood of contributing to Wikipedia is low regardless of gender; as skills increase, men become much more likely to contribute, while women fall behind. The authors find that women tend to report lower Internet skills than men, which helps explain part of the Wikipedia gender gap: contributing to Wikipedia requires a certain level of digital literacy, and the digital skills gap reduces the number of women who have the required level of skills. Crucially, the authors admit that "why women, on average, report lower level understanding of Internet-related terms remains a puzzle. Although studies with detailed data about actual skills based on performance tests suggest no gender differences in the observed skills, research that looks at self-rated know-how consistently finds gender variation with real consequences for online behavior". This suggests that while men and women have, in reality, similar skills, women are much less confident about them, which in turn makes them much less confident about contributing to (or trying to contribute to) Wikipedia. This, however, is a hypothesis to be confirmed by future research. In the end, the authors do feel confident enough to conclude that "gender and Internet skills likely have a relatively mild interaction with each other, reinforcing the gender gap at the high end of the Internet skills spectrum." In conclusion, this reviewer finds the study highly valuable, both for the literature on the gender gap and online communities, and for the Wikipedia community's and WMF's efforts to reduce this gap in our environment.
In nutritional articles, academic citations rise while news media citations decrease
A study published in First Monday[2] analyzed the development of referencing in 45 articles across nine topic groups related to health and nutrition over a period of five years (2007–2011) (unfortunately, the authors are not very clear on which particular articles were analyzed, and tend to use the concepts of article and topic group in a rather confusing manner). The authors coded references (3,029 in total), information on editing history, and search rankings in the Google, Bing and Yahoo! search engines. The study confirmed that Wikipedia articles are highly ranked by all three search engines, with Yahoo! actually being even more "Wikipedia-friendly" than Google. The authors show that (as expected) the articles improve in quality (or at least, in the number and density of references) over time. Crucially, they show that the overall percentage of references to mainstream news media decreased over that time, while references to academic publications increased. By the end of the study period, only the article on (or topic group of?) trans fat contained more references to news sources than to academic publications. The authors overall support the description of Wikipedia as a source aiming for reliability, though they are hesitant to call it reliable, pointing out that, for example, 15% of the analyzed references were coded as "outside the main reference type categories or... not be clearly determined". The authors conclude, commendably, that "Wikipedia needs to be high on the agenda for health communication researchers and practitioners" and that "communications professionals in the health field need to be much more actively involved in ensuring that the content on Wikipedia is reliable and well-sourced with reliable references".
Wikipedia user session timing compared with other online activities
In a recent preprint titled "User Session Identification Based on Strong Regularities in Inter-activity Time"[3], Halfaker and colleagues from the Wikimedia Foundation's Analytics department and the GroupLens lab ask whether contributions to collaborative work online can be described in terms of "sessions" rather than atomic operations. The researchers argue that the answer is "yes", and that a "session" can be defined as the operations conducted until "a good rule-of-thumb inactivity threshold of about 1 hour" is reached – regardless of whether you're editing Wikipedia, viewing Wikipedia, rating movies, searching AOL, or playing League of Legends. You may recall that Halfaker and Geiger came to a similar conclusion about "edit sessions" in a 2013 paper, but now the idea is to cement that finding as a universal heuristic across many domains. Objections to this idea have been that session length thresholds will always be arbitrary, or that a session defined this way need not correspond to completing a task, which might extend beyond someone logging off for the night.
To bolster their argument, the authors test the hypothesis on empirical data collected from seven datasets. The method is to take the logarithm of the time between consecutive user events (which tends to be approximately log-normally distributed) and fit a bimodal, two-component distribution to the resulting histogram. Given a two-humped histogram, the session threshold is the point that best separates the short "within-session" gaps from the long "between-session" gaps.
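For readers who want to experiment, the following is a minimal sketch of that fitting step (not the authors' published code; it assumes Python with NumPy and scikit-learn): fit a two-component Gaussian mixture to the log-transformed gaps between a user's consecutive actions, and take the point where the mixture switches from the "within-session" component to the "between-session" component as the estimated session threshold.

<syntaxhighlight lang="python">
# Minimal sketch: estimate a session cutoff from inter-activity times.
import numpy as np
from sklearn.mixture import GaussianMixture

def session_threshold(inter_activity_seconds):
    """Estimate a session cutoff (in seconds) from the gaps between
    consecutive actions by the same user."""
    log_gaps = np.log(np.asarray(inter_activity_seconds, dtype=float)).reshape(-1, 1)

    # Two components: short "within-session" gaps and long "between-session" gaps.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(log_gaps)
    within = int(np.argmin(gmm.means_.ravel()))  # component with the smaller mean

    # Scan candidate thresholds and take the first point where the posterior
    # probability flips from the "within" to the "between" component.
    grid = np.linspace(log_gaps.min(), log_gaps.max(), 10000).reshape(-1, 1)
    posteriors = gmm.predict_proba(grid)
    crossing = int(np.argmax(posteriors[:, within] < 0.5))
    return float(np.exp(grid[crossing, 0]))

# Toy data: mostly ~2-minute gaps plus occasional ~1-day gaps; the estimated
# threshold comes out on the order of an hour.
rng = np.random.default_rng(0)
gaps = np.concatenate([rng.lognormal(np.log(120), 1.0, 5000),
                       rng.lognormal(np.log(86400), 1.0, 500)])
print(session_threshold(gaps) / 60, "minutes")
</syntaxhighlight>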
AOL search data, Cyclopath route-getting requests, and Wikipedia viewing (from desktop, mobile and apps) all seem to fit the bimodal model. Across these datasets the fitted threshold ranges from 29 to 115 minutes, but none would be far off an hour, say the authors. When it comes to Wikipedia editing, OpenStreetMap editing, and MovieLens reviewing and searching, a bimodal fit with a roughly one-hour threshold is good, but the data are better explained by a trimodal model. For the first two activities the third category is the wikibreak; for MovieLens it is the ease with which the site lets users rate movies in quick succession.
Even with a trimodal model, though, "this strategy for identifying session thresholds is not universally suitable for all user-initiated events". For instance, the authors show that League of Legends has modal peaks at five minutes and one day. From a player's perspective, this reviewer finds that easy to explain: if you play five games in a row, with about five minutes of queueing between games, and repeat that daily, you get the observed histogram, in which the five-minute peak is about five times as tall as the one-day peak. Stack Overflow does not fit the model well at all, yielding a threshold of 335 minutes; the authors attribute this to the high-quality edits expected at Stack Overflow.
Overall, the authors conclude that one hour suffices as a rule of thumb. But does it? The issue is that no goodness-of-fit measure for the bimodal models is presented. This leaves it unclear whether outliers like Stack Overflow can be modeled but simply do not comply with the one-hour rule, or whether they cannot be described by the proposed heuristic at all.
Briefly
"Wikimedia Movement in European countries as an example of civil participation": This Polish-language book chapter[4] (with an English abstract) looks at the Wikimedia community as a social movement. In the first subchapter, it argues that the Wikimedia movement is a type of new social movement which is fighting for equal access to free education. The bulk of subsequent subchapters consist of describing the European Wikimedia projects through tables listing whether they exist, estimated size in articles, members, etc., and briefly describing their activities such as involvement in the Wikipedia Loves Monuments initiative or with the GLAM sector. The book chapter is interesting as clearly placing itself in the relatively small body of literature that describe Wikipedia/Wikimedia as a social movement. Unfortunately it is primarily a descriptive rather than an analytical piece, and does not provide any significant theoretical justification for calling the Wikimedia movement a social movement, a weakness amplified by the fact that this work fails to engage with the prior relevant body of Wikipedia research, and is only very loosely connected to the literature on social movements.
Ranking public domain authors using Wikipedia data: This article[5] proposes a way to combine data from Wikipedia and the Online Books Page to identify the most notable (important, popular, widely read) authors whose works are about to enter the public domain, in order to facilitate and prioritize the digitization of their works. The following information from the Wikipedia articles about these authors is used: "article length, article age in days, time elapsed since last revision, revision rate during article's life, article text (200 topic weights derived from a topic model), category count, translation count, redirect count, estimated views per day, presence of translation for the 10 Wikipedias with the most translations, presence of bibliographic identifier (GND, ISNI, LCCN, VIAF), article quality classification ("Good Article" and "Featured Article"), presence of protected classification, indicator for decade of death for decades 1910–1950, and interactions between article age and all features." The proposed algorithm may be of interest to members of WikiProject Books, WikiProject Libraries, WikiProject Open, and related projects, as a means of generating importance ratings and selecting underdeveloped articles for improvement.
"Mining cross-cultural relations from Wikipedia - A study of 31 European food cultures"[6]: The authors use Pierre Bourdieu's theories to analyze cultural similarities and differences between 31 European countries, by looking at the differences between articles on various national cuisines across 27 different European-language Wikipedias. They find that the existence, quality and links of studied Wikipedia articles can be correlated with data from the European Social Survey on cross-cultural ties between European countries. In addition to expected findings (all cultures are interested in their own cuisine first, then in famous ones such as French cuisine and in those of their neighbours), the article does present some interesting data, for example noting that the articles on Turkish cuisine are relatively well-developed on numerous Wikipedias, which could be explained by long-term and significant in size migration of Turkish people to various European countries, and the resulting interest in Turkish cuisine in those countries. The authors also find that significant differences do exist between different language Wikipedias, as different cuisines can be very differently described on different projects, thus reinforcing the theory that knowledge can be significantly influenced by one's culture. For Wikipedia editors, this is a reminder that all language editions suffer from significant biases, and that articles in different language editions can be and usually are significantly different.
Dissertation on automatic quality assessment: A recent PhD dissertation[7] by Oliver Ferschke at the Technical University of Darmstadt "shows how natural language processing approaches can be used to assist information quality management on a massive scale" on Wikipedia. As the first main contribution, the author highlights his definition of a "comprehensive article quality model that aims to consolidate both the quality of writing and the quality criteria defined in multiple Wikipedia guidelines and policies into a single model. The model comprises 23 dimensions segmented into the four layers of intrinsic quality, contextual quality, writing quality and organizational quality." Secondly, the dissertation presents methods for automatically detecting quality flaws (overlapping with previous publications co-authored by Ferschke), and evaluates them on a "novel corpus of Wikipedia articles with neutrality and style flaws". Thirdly, the dissertation presents "an approach for automatically segmenting and tagging the user contributions on article Talk pages to improve work coordination among Wikipedians. These unstructured discussion pages are not easy to navigate and information is likely to get lost over time in the discussion archives."
39% of comments in longer talk page threads are incorrectly indented: Ferschke's "English Wikipedia Discussions Corpus" ("EWDC") is used in a paper[8] to be presented at the 28th Pacific Asia Conference on Language, Information and Computing next month. In the paper, his doctoral adviser Irina Gurevych and another author construct a method to detect adjacency pairs (a user comment and the comment that responds to it) by analyzing the content, in particular detecting "lexical pairs" (they give the examples "(why, because)" and "(?, yes)"), validated against human annotation (see the illustrative sketch after this list). As a side result, they observe that "Incorrect indentation (i.e., indentation that implies a reply-to relation with the wrong post) is quite common in longer discussions in the EWDC. In an analysis of 5 random threads longer than 10 turns each, shown in Table 1, we found that 29 of 74 total turns, or 39%±14pp of an average thread, had indentation that misidentified the turn to which they were a reply."
Which talk page comment refers to which edit?: Another paper co-authored by Gurevych, titled "Automatically Detecting Corresponding Edit-Turn-Pairs in Wikipedia"[9] uses machine learning to automatically identify talk page comments about a particular article edit.
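To illustrate the kind of signal a "lexical pair" feature captures, here is a toy sketch in Python. It is not the authors' classifier: the cue list and the scoring rule below are invented for illustration, whereas the paper learns such pairs from data and combines them with other features in a supervised model validated against human annotation.

<syntaxhighlight lang="python">
# Toy illustration of "lexical pair" cues hinting that one talk-page turn
# replies to another. Cue list and scoring are invented for illustration only.
from typing import List, Tuple

# Hypothetical cues: (token in the earlier turn, token in the candidate reply).
LEXICAL_PAIRS: List[Tuple[str, str]] = [
    ("why", "because"),  # example given in the paper
    ("?", "yes"),        # example given in the paper
    ("?", "no"),
]

def lexical_pair_score(earlier_turn: str, later_turn: str) -> int:
    """Count how many lexical-pair cues link the two turns."""
    earlier, later = earlier_turn.lower(), later_turn.lower()
    return sum(1 for first, second in LEXICAL_PAIRS
               if first in earlier and second in later)

# Usage: rank candidate parent turns for a reply by their cue score.
turns = ["Why was this section removed?",
         "I trimmed some uncited trivia last week.",
         "Because none of it was supported by the cited sources."]
reply = turns[2]
best_parent = max(turns[:2], key=lambda t: lexical_pair_score(t, reply))
print(best_parent)  # -> "Why was this section removed?"
</syntaxhighlight>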
Other recent publications
A list of other recent publications that could not be covered in time for this issue – contributions are always welcome for reviewing or summarizing newly published research.
"Development of a semantic data collection tool. : The Wikidata Project as a step towards the semantic web."[11] (bachelor thesis)
"To Use or Not to Use? The Credibility of Wikipedia"[12]
"Indexing and Analyzing Wikipedia's Current Events Portal, the Daily News Summaries by the Crowd"[13] From the abstract: "Wikipedia's Current Events Portal (WCEP) is a special part of Wikipedia that focuses on daily summaries of news events. ...First, we provide descriptive analysis of the collected news events. Second, we compare between the news summaries created by the WCEP crowd and the ones created by professional journalists on the same topics. Finally, we analyze the revision logs of news events over the past 7 years in order to characterize the WCEP crowd and their activities. The results show that WCEP has reached a stable state in terms of the volume of contributions as well as the size of its crowd..."
^Halfaker, Aaron; Oliver Keyes; Daniel Kluver; Jacob Thebault-Spieker; Tien Nguyen; Kenneth Shores; Anuradha Uduwage; Morten Warncke-Wang (2014-11-11). "User Session Identification Based on Strong Regularities in Inter-activity Time". arXiv:1411.2878.
^Patryk Korzeniecki: Ruch Wikimediów w państwach europejskich jako przykład aktywności obywatelskiej (Wikimedia Movement in European countries as an example of civil participation). Chapter 6 in: Joachim Osiński, Joanna Zuzanna Popławska (eds.): Oblicza społeczeństwa obywatelskiego (Faces of civil society). Warsaw School of Economics Press, Warsaw 2014.
^Riddell, Allen B. (2014-11-08). "Public Domain Rank: Identifying Notable Individuals with the Wisdom of the Crowd". arXiv:1411.2180.
^Laufer, Paul; Claudia Wagner; Fabian Flöck; Markus Strohmaier (2014-11-17). "Mining cross-cultural relations from Wikipedia - A study of 31 European food cultures". arXiv:1411.4484.
^Emily K. Jamison, Iryna Gurevych: Adjacency Pair Recognition in Wikipedia Discussions using Lexical Pairs. PDF
^Johannes Daxenberger and Iryna Gurevych: Automatically Detecting Corresponding Edit-Turn-Pairs in Wikipedia. [http://acl2014.org/acl2014/P14-2/pdf/P14-2031.pdf PDF] In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 187–192, Baltimore, Maryland, USA, June 23–25, 2014.