Article evaluations

Evaluations of articles proliferate

In the aftermath of The Guardian's effort at rating Wikipedia articles (see archived story), a number of new evaluations sprang up this past week. Both Wikipedia editors and outside sources took stabs at critiquing the quality of the content.

Last Monday, the South African newspaper the Mail & Guardian published a story online in which it ran an evaluation very similar to that of its British cousin, asking a variety of experts to rate specific articles on a scale of 0 to 10. The articles chosen were on subjects specifically related to South Africa. Scores this time were slightly better, mostly because the lowest rating was 2/10 for Media in South Africa (as opposed to 0/10 for Haute couture), and the articles on the national rugby teams actually received perfect grades.

Comparing with the competition

On Wednesday, technology news site CNET published its evaluation, which involved comparing Wikipedia against competing encyclopedia software. For its story, CNET stacked Wikipedia up against the 2006 versions of Encarta and Encyclopædia Britannica, available on DVD.

Unlike The Guardian, CNET did not rate individual articles, focusing instead on comparing features rather than specific encyclopedia content. In addition to the feature comparison table, each encyclopedia received a more in-depth review that highlighted various strengths and weaknesses. The summary of the detailed review for Wikipedia read, "Wikipedia offers rich, frequently updated information, but you might need to verify some of its facts."

The free availability of Wikipedia (assuming one has internet access) was cited as "an enormous advantage", specifically because it does not use up significant computer resources or interfere with other programs such as firewall or antivirus software. Points criticized included the lack of resources aimed specifically at children, in contrast to Encarta and Britannica, as well as the "uninspiring interface". Perhaps surprisingly, considering how often users seek guidance in using Wikipedia, the reviewer noted that the organic development of support information had produced help pages that were "perhaps more useful" than those provided by the software encyclopedias.

Random quality checks

Meanwhile, Kosebamse started a flurry of attempts to study Wikipedia quality based on a random selection of articles. Acknowledging that it was a "totally unscientific investigation", he repeated a test he had conducted back in March by using Special:Random twenty consecutive times and seeing what came up. His summary of the results suggested that, as he put it, "the average quality of our content has not much improved" since March or even earlier.

A few other people ran their own tests using the same technique, with similar results. Carnildo performed one of the more detailed examinations, going through a total of 100 articles. One criticism raised about these studies was that comparing random selections of pages drawn at different points in time is not a valid comparison, since the later sample catches newer articles that have not had enough time for improvement. This could be addressed by following the same sample of articles over time as a longitudinal survey, and Carnildo indicated that he would revisit his sample in the future to see how the articles changed.
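To illustrate the sampling method described above, here is a minimal sketch, assuming Python with the requests library: Special:Random redirects to a randomly chosen article, so following it repeatedly and recording the resulting titles with a timestamp makes a later follow-up on the same sample possible. The sample size, output file, and function names here are illustrative assumptions, not the exact procedure Kosebamse or Carnildo used.

```python
# Minimal sketch (assumptions noted above): draw a random sample of article
# titles via Special:Random and save it so the same articles can be
# re-examined later for a longitudinal comparison.
import json
import time
from urllib.parse import unquote

import requests

RANDOM_URL = "https://en.wikipedia.org/wiki/Special:Random"


def draw_random_sample(n=20):
    """Follow Special:Random n times and return the resulting article titles."""
    titles = []
    for _ in range(n):
        # Special:Random redirects to a random article; requests follows the
        # redirect, and the final URL carries the article title.
        resp = requests.get(RANDOM_URL, timeout=10)
        titles.append(unquote(resp.url.rsplit("/", 1)[-1]))
        time.sleep(1)  # be polite to the servers
    return titles


if __name__ == "__main__":
    sample = draw_random_sample(20)
    # Record the sample with a timestamp so it can be revisited later.
    with open("random_sample.json", "w") as f:
        json.dump({"drawn_at": time.time(), "titles": sample}, f, indent=2)
    print("\n".join(sample))
```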


Discuss this story

About User:Kosebamse/Twenty-random-pages test. It appears that he has older tested pages... perhaps an evaluation of these pages might be in order? Sort of like a before/after evaluation. I do agree that some of our content is pretty terrible. I should know: I've written some of it. The problem, of course, is that if you focus on an article (like I tend to do) then all other articles don't get edited. For instance, MDAC and Windows 2000 took me a long time to write because they were pretty technical. Exploding whale, on the other hand, did not because it wasn't technical and because (surprisingly) there is quite a bit of information on this that's very accessible. Bottom line: research is hard. Writing articles is hard. I love every moment of it! - Ta bu shi da yu 07:06, 15 November 2005 (UTC)
