A paper titled "Is Wikipedia Easy to Understand?: A Study Beyond Conventional Readability Metrics" presents a novel approach to quantifying how difficult readers may find a Wikipedia article due to a lack of knowledge about the concepts it uses.
The authors first note that concerns about the readability of Wikipedia articles have already been examined in various earlier studies, but only based on relatively simplistic readability scores such as the Flesch-Kincaid test. Such scores "output the US grade level, i.e., the number of education years required to understand a piece of text [but] only take into account the surface level parameters like the average number of syllables, average number of words, and average number of sentences to calculate the readability score".
Instead, they aim to rely on "cognitive [...] theories [which] describe that the knowledge present in the text must be coherent with the background knowledge of the reader [...] the information present must resonate with the background knowledge of the reader." Noting that "Wikipedia encounters readers with varied educational backgrounds", they posit that "to execute successful text comprehension on Wikipedia, we must not assume any background knowledge from the reader. The information present in the Wikipedia articles should be self-sufficient in knowledge". This leads the authors to a concept termed "knowledge gaps", i.e. "missing pieces of information in the text, which hinder the comprehension of the underlying text. [...] For example, while explaining the area and perimeter of a circle, the concept of radius must be explained to the reader."
In order to enable a quantitative study, knowledge gaps are operationalized by first partitioning a Wikipedia article into segments using a method based on semantic word embeddings (described in somewhat more detail in a 2020 paper by some of the same authors, reviewed below). LDA (Latent Dirichlet Allocation) is used to determine the topic of each segment, technically a probability distribution over a set of keywords. The Hellinger distance between these probability distributions for consecutive segments is used to define knowledge gaps: "If the Hellinger distance between any two subsequent segments is more than 0.5, then there is a knowledge gap between the two segments. The threshold of 0.5 has been defined empirically. The knowledge gap parameter is the ratio of the number of knowledge gaps to the total number of segments in the Wikipedia article."
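The paper does not publish code, but the knowledge gap parameter as defined above is straightforward to sketch. Here is a minimal illustration in Python, assuming each segment's LDA topic distribution is already available as a list of probabilities (the function names and toy data are ours, not the authors'):

```python
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return sqrt(sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))) / sqrt(2)

def knowledge_gap_parameter(segment_topics, threshold=0.5):
    """Ratio of knowledge gaps to the total number of segments, where a gap
    is a Hellinger distance above `threshold` between subsequent segments."""
    gaps = sum(1 for s1, s2 in zip(segment_topics, segment_topics[1:])
               if hellinger(s1, s2) > threshold)
    return gaps / len(segment_topics)

# Toy topic distributions (over 3 LDA topics) for 4 consecutive segments:
segments = [
    [0.80, 0.10, 0.10],
    [0.70, 0.20, 0.10],  # close to the previous segment -> no gap
    [0.05, 0.05, 0.90],  # abrupt topic shift -> knowledge gap
    [0.10, 0.10, 0.80],
]
print(knowledge_gap_parameter(segments))  # 1 gap / 4 segments = 0.25
```

Note that the distance is computed only between neighboring segments, so a gradual drift across many segments registers no gap; only abrupt topic shifts do.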
The authors apply this metric to a random sample of English Wikipedia articles, consisting of 1000 articles each from the Featured, Good, B-class, C-class, Start-class and Stub assessment categories, plus "1000 random articles from the rest of the categories except category A articles". They find that "featured articles experience the least knowledge gap parameter followed by Good Articles", concluding that "Featured articles are self-sufficient in knowledge, leaving little scope for knowledge gaps." Likewise, the knowledge gap score increases progressively from B-Class through C-Class and Start to Stub articles.
Three of the authors had already developed the concept of knowledge gaps as applied to wiki articles in a paper presented at OpenSym 2020.
The main theme of this earlier paper was the integration of a wiki and a Q&A forum (like StackExchange), such that "whenever a user encounters a question in a wiki article, he can ask the question on the corresponding QnA page". As evidence that there exists a "coherent activity between articles and the corresponding talk pages" on the English Wikipedia, the authors examined a random sample of 100 Featured articles, comparing the daily editing activity of articles and their talk pages (measured in the number of sentences added). They found "coherence in the activity of Wikipedia articles and their respective talk pages [, which] establishes the importance of discussion forums-like talk page for a Wikipedia article. But, it should be noted that only 27% of the Wikipedia articles have talk pages [citing a publication from 2012]". Also, they note that on Wikipedia "only 10% of the total posts on talk pages are by the readers seeking information", making Wikipedia a less suitable model for the integration of wikis and Q&A forums.
The paper goes on to instead apply this concept of Q&A integration to a MOOC wiki (supporting a Python course for computer science students), named "JOCWiki". The aforementioned semantic segmentation method was applied to the pages of this course wiki, resulting in segments that were "at least 6 sentences long, which roughly amounts to 100 words per segment". The purpose of segmentation in this case was to determine whether a particular student question in the forum referred to content on the wiki, and if so, whether the question could be answered based on that segment. If not, the segment was deemed to have a knowledge gap. Latent Dirichlet Allocation (LDA) was used to assign a topic to each segment and each question, and matching topics were interpreted as the question referring to that segment. More specifically, the Hellinger distance (a measure of the similarity between two probability distributions) was used to quantitatively estimate both whether a particular question matched a particular text segment and whether it could be answered from that segment - a somewhat crude measure.
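This matching step can be sketched as follows, under the assumption that both the question and the segments are represented as LDA topic distributions and that a single distance threshold governs both matching and answerability (the paper does not spell out the exact decision rule; names, threshold default and data are illustrative):

```python
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return sqrt(sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))) / sqrt(2)

def match_question(question_topics, segment_topics, threshold=0.5):
    """Return (index of the best-matching segment, answerable?).

    The question is taken to refer to the segment whose LDA topic
    distribution is closest in Hellinger distance, and to be answerable
    from it only if that distance stays below `threshold`.
    """
    distances = [hellinger(question_topics, s) for s in segment_topics]
    best = min(range(len(distances)), key=distances.__getitem__)
    return best, distances[best] < threshold

segments = [
    [0.90, 0.05, 0.05],  # segment 0: mostly topic 0
    [0.10, 0.80, 0.10],  # segment 1: mostly topic 1
]
question = [0.15, 0.75, 0.10]  # a question mostly about topic 1
print(match_question(question, segments))  # (1, True)
```

The crudeness noted above is visible here: topical closeness stands in for actual answerability, so a question can be "answerable" from a segment that discusses the right topic without containing the answer.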
The paper goes on to discuss the concept of "triggering", generally defined as a "process by which an idea or a piece of information cascades the generation of more ideas", or, in the case of wiki pages, the addition of new information. (The authors had already examined this for Wikipedia in a 2018 paper, likewise presented at OpenSym - see our coverage: "Triggering" article contributions by adding factoids.) As an example of triggering, "in the 'New York City' article of Wikipedia, the wiki link of financial center is created in the seventh revision, and subsequently, in the twelfth revision, wiki link of New York Stock Exchange is added. These two wiki links added in neighboring revisions are semantically similar." The concept of triggers is applied to the interaction between wiki pages and talk page/Q&A forum, in both directions - either questions triggering article edits ("Q→A triggers"), or article edits triggering new questions ("A→Q triggers"). The authors posit that "if there are sufficient Q→A triggers, then it leads to better knowledge building in the wiki article and lesser A→Q triggers." And on the other hand "if there are relatively [fewer] Q→A triggers, this leads to knowledge deficiency in the article giving scope for more A→Q triggers".
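To make the two trigger directions concrete, here is a rough sketch of how such triggers might be counted from timestamped questions and edits, again using topic distributions and Hellinger distance. The pairing rule is our simplification, not the authors' method, which the paper does not spell out at this level of detail:

```python
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return sqrt(sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))) / sqrt(2)

def count_triggers(questions, edits, threshold=0.5):
    """Count Q->A and A->Q triggers between timestamped events.

    `questions` and `edits` are lists of (timestamp, topic_distribution).
    An edit counts as Q->A-triggered if some earlier question is close to
    it in Hellinger distance; A->Q is the mirror image.
    """
    q_to_a = sum(1 for te, de in edits
                 if any(tq < te and hellinger(dq, de) < threshold
                        for tq, dq in questions))
    a_to_q = sum(1 for tq, dq in questions
                 if any(te < tq and hellinger(de, dq) < threshold
                        for te, de in edits))
    return q_to_a, a_to_q

questions = [(1, [0.80, 0.10, 0.10])]      # one reader question at t=1
edits = [(0, [0.10, 0.10, 0.80]),          # unrelated edit before the question
         (2, [0.75, 0.15, 0.10])]          # related edit after the question
print(count_triggers(questions, edits))    # (1, 0): one Q->A trigger, no A->Q
```

Under this reading, the authors' hypothesis amounts to a prediction that the first count dominates in a healthy wiki, while a surplus of the second count signals knowledge deficiency.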
The study then compared JOCWiki and Wikipedia regarding the frequencies of these Q→A and A→Q triggers, on a small sample of three topics covered both in the course and on Wikipedia, finding evidence that "there are very few triggers generated by talk pages in Wikipedia" (consistent with the aforementioned earlier research result that only 10% of posts on Wikipedia talk pages are reader questions, as they are "mostly used by the editors to discuss the improvements in the article"). In other words, Wikipedia may be more prone to knowledge deficiencies than JOCWiki with its QnA forum that "is used by all types of users (editors and readers) to discuss the concepts as well as improvements to the article [and] provides a more conducive environment to induce triggers."
Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.
From the abstract:
"we have reconstructed networks of personal communication (direct messaging) between Wikipedia editors gathered in so called Wikiprojects - teams of contributors who focus on articles within specific topical areas. We found that effective projects exchange larger volume of direct messages and that their communication structure allows for complex coordination: for sharing of information locally through selective ties, and at the same time globally across the whole group. To verify how these network measures relate to the subjective perception of importance of group communication we conducted semi-structured interviews with members of selected projects. Our interviewees used direct communication for providing feedback, for maintaining close relations and for tapping on the social capital of the Wikipedia community."
Compare our earlier coverage of a contrasting result: "All Talk: How Increasing Interpersonal Communication on Wikis May Not Enhance Productivity", which has since been published in peer-reviewed form.