The Signpost

Dispatches

Content reviewers crucial to setting standards

By Dr pda
Related articles
Reviewing content
How busy was 2008?
16 February 2009
Reviewing featured picture candidates
24 January 2009
How to start reviewing
7 April 2008


Content review processes such as Featured article candidates (FAC), Featured list candidates (FLC), Good article nominations (GAN) and Peer reviews (PR) are at the core of establishing and maintaining high standards for Wikipedia articles, and provide valuable feedback on how to achieve these standards. Reviewers in these processes tend to gain significant respect in the community for their work. Despite the prestige of the job, such reviewers are in short supply, and 2009 saw a reduction in reviewer participation by most measures.

Featured articles

Featured articles represent Wikipedia's best work, and achieve this status after a review open to the whole Wikipedia community. Editors can support the article's promotion if they believe it meets all the criteria, or oppose it by providing examples of instances where it does not. The featured article director or his delegates will promote an article if consensus in favour of promotion has been reached among the reviewers after a reasonable time.

In 2009, 522 articles were promoted to Featured article (FA) status, while 157 articles had featured status removed via the Featured article review (FAR) process. The net increase, 365 featured articles, is 37% down on the 2008 figure of 576.[1] This trend was evident throughout 2009; the rate of promotion slowed because it took longer to attract enough reviews on a given featured article candidate (FAC) to determine whether there was consensus to promote. The decline in reviewer activity has been noted several times over the past year on the talk page associated with the FAC process, and is borne out by an analysis of the figures.

Summary:

  • Annual increase in FAs down 37%
  • FAC reviews down 26%
  • FAC reviewers down 36%
  • FAC "nominators only" up 250%
  • FAR participants down 32%

In 2009 there were 991 FACs (522 successful, 469 unsuccessful), which attracted a total of 9,409 reviews. 1,434 editors were involved with the FAC process, of whom 224 were nominators only, 302 were both nominators and reviewers, and 908 were reviewers only. A successful FAC had, on average, reviews from 12 different people, while an unsuccessful FAC had reviews from 9. In 78% of all FACs, one of these reviewers was Ealdgyth, who reviewed the sources used for reliability.[2] By contrast, in 2008 there were 1,328 FACs (719 successful, 609 unsuccessful), which attracted a total of 12,743 reviews. 1,987 editors were involved with the FAC process, of whom 87 were nominators only, 258 were both nominators and reviewers, and 1,642 were reviewers only. A successful FAC had, on average, reviews from 11 different people, while an unsuccessful FAC had reviews from 9. Once again Ealdgyth provided sterling service, commenting on the reliability of sources for 66% of all 2008 FACs.[2]

Thus, compared to 2008, there were 28% fewer people participating in the FAC process in 2009, which led to 26% fewer reviews. However, there were in fact 36% fewer people providing reviews; the number of editors nominating an article but not reviewing others increased by a factor of about 2.5 (that is, to roughly 250% of the 2008 figure).
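
As a quick sanity check, these percentages can be recomputed directly from the counts quoted above. The snippet below is plain Python arithmetic using only the numbers given in this article:

```python
# Recomputing the participation changes quoted above from the raw 2008/2009
# counts given earlier in this article.
fac_2009 = {"editors": 1434, "reviews": 9409, "nom_only": 224, "both": 302, "rev_only": 908}
fac_2008 = {"editors": 1987, "reviews": 12743, "nom_only": 87, "both": 258, "rev_only": 1642}

def pct_drop(new, old):
    """Percentage decrease from old to new, rounded to a whole percent."""
    return round(100 * (old - new) / old)

print(pct_drop(fac_2009["editors"], fac_2008["editors"]))  # 28 (% fewer participants)
print(pct_drop(fac_2009["reviews"], fac_2008["reviews"]))  # 26 (% fewer reviews)

reviewers_2009 = fac_2009["both"] + fac_2009["rev_only"]   # 1,210 reviewers
reviewers_2008 = fac_2008["both"] + fac_2008["rev_only"]   # 1,900 reviewers
print(pct_drop(reviewers_2009, reviewers_2008))            # 36 (% fewer reviewers)

# "Nominators only" grew by a factor of ~2.6, i.e. to roughly 250% of 2008:
print(round(fac_2009["nom_only"] / fac_2008["nom_only"], 1))  # 2.6
```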

Articles can also lose featured status through the Featured article review process. Editors who believe an article no longer meets the featured article criteria can list it at FAR; ideally, one or more editors will then take on the task of bringing it up to standard. The FAR process showed a similar decline in participation. In 2009 there were 219 FARs (157 demoted, 62 kept), in which 767 editors participated; in 2008 there were 263 FARs (143 demoted, 120 kept), with 1,129 participants. The number of editors participating thus dropped by 32% in 2009.[3]

Featured lists

Similar processes to FAC and FAR exist for primarily list-based content: featured list candidates (FLC) and featured list removal candidates (FLRC). In 2009, 500 lists were promoted to Featured list (FL) status, while 83 lists had featured status removed via the FLRC process. The net increase, 417 featured lists, is 38% down on the 2008 figure of 669.[4] In 2009 there were 574 reviewers and nominators, compared with 743 in 2008, a drop of 23%.[5]

FLRC bucked the trend, having 235 people involved in 114 reviews, compared to 179 in 72 reviews in 2008.[5] The increased number of lists having their featured status reviewed is possibly a consequence of the large growth of the featured list process in 2008.

Good articles

Good articles (GA) must meet a less stringent set of criteria than featured articles. The review process also differs: promotion to GA requires a review from only one editor, who must not have been a significant contributor to the article. The number of Good articles increased by 2,151 over 2009, down 11% on the net increase of 2,416 in 2008. There are currently 8,104 Good articles, 1.8 times the combined number of featured articles and lists.[6] The total number of nominators and reviewers in this process is also down compared to 2008: 1,351 against 1,809, a drop of 25%.[7]

A-Class review

On the Wikipedia 1.0 assessment scale there is a level between FA-Class and GA-Class: A-Class. An A-Class rating may be awarded by a WikiProject whose scope covers the article, with the review process determined by each WikiProject. This contrasts with the centralised (i.e. not WikiProject-based) processes for featured content. A small number of WikiProjects have active formal A-Class review systems.[8] Of these half-dozen A-Class review departments, that of the Military history WikiProject is the largest, processing 220 A-Class reviews in 2009. This is an increase on the 155 reviews processed in 2008; however, the number of participants in the process (nominators plus reviewers) has remained steady: 144 in 2009, compared to 140 in 2008.[9]

Peer review

Peer review (PR) differs from the previously discussed processes in that it does not result in the awarding of a particular status to the article; instead it is a means for editors to solicit suggestions for improving an article. Peer review is often recommended as a way of attracting the attention of previously uninvolved editors to spot problems which might not be apparent to those closer to the article. Once again this requires reviewers.

In 2009 a peer review was requested for 1,478 articles, resulting in 2,062 reviews. Of these, 891, or 43%, were carried out by just three editors: Ruhrfisch (343), Finetooth (332) and Brianboulton (216).[10] They were assisted by a further 730 reviewers making one or more review comments, while another 503 editors nominated articles for PR but did not review others.[11] Once again, these numbers are down on the previous year. In 2008, 2,090 articles had a peer review. For technical reasons the number of reviewers could only be determined for the period February to December;[12] in this period 1,028 editors reviewed PRs and a further 499 nominated articles for PR without commenting on others. In the corresponding period of 2009 the figures are 645 (37% lower) and 449 (10% lower) respectively.[11]

How can I help?

Start reviewing articles! This previous Signpost article gives suggestions for how to go about it. Perhaps start off at Peer review where "you can literally leave one sentence and help improve an article."[13] To find out more about reviewing Good Articles, you can see Wikipedia:Reviewing good articles. You can even ask for a mentor. At places like FAC or FLC you could start off by checking the criteria (What is a featured article?, What is a featured list?), then reading other people's reviews to see what sort of things to look for. If you don't feel confident enough to support or oppose initially, you can leave a comment instead.

Notes

  1. ^ Source: Wikipedia:Featured article statistics.
  2. ^ a b These figures were obtained by counting the number of links to the User or User talk namespaces from editors' signatures on the individual FAC pages. Queries like this one to the Wikipedia API provided the data in an easy-to-parse form; a sketch of this approach appears after these notes. The nominators' usernames were obtained by parsing the HTML of the monthly archive pages (e.g. Wikipedia:Featured article candidates/Featured log/January 2009 or Wikipedia:Featured article candidates/Archived nominations/January 2009) and recording the usernames listed after the string "Nominator(s)".
  3. ^ These figures were obtained by counting the number of links to the User or User talk namespaces from editors' signatures on the individual FAR pages. Queries like this one to the Wikipedia API provided the data in an easy-to-parse form. This method probably overestimates the number of users involved, as it counts links to users who, as significant contributors to the article, were notified of the FAR.
  4. ^ Source: Template:Featured list log.
  5. ^ a b These figures were obtained by counting the number of links to the User or User talk namespaces from editors' signatures on the individual FLC or FLRC pages. Queries like this one to the Wikipedia API provided the data in an easy-to-parse form. The number of reviewers cannot be separated from the number of nominators, as was done in the FA case, because the nominators were not listed in a standardised form until February 2009.
  6. ^ Source: Wikipedia:Good articles.
  7. ^ Source: Revision history statistics of Wikipedia:Good article nominations.
  8. ^ Of the 1,606 WikiProjects or task forces which have created categories to hold A-Class articles (Source: Category:A-Class articles), only 320 appear to use A-Class, i.e. currently have any A-Class articles (Source: Wikipedia Release Version Tools). Only 28 have pages in Category:WikiProject A-Class Review, indicating a formal review mechanism. Looking at these pages individually shows that only the Aviation, Ships, Military history, U.S. Roads, and possibly the Tropical cyclones WikiProjects had active A-Class review departments in 2009.
  9. ^ These figures were obtained by counting the number of links to the User or User talk namespaces from editors' signatures on the individual ACR pages. Queries like this one to the Wikipedia API provided the data in an easy-to-parse form.
  10. ^ Source: Wikipedia talk:Peer review.
  11. ^ a b These figures were obtained by counting the number of links to the User or User talk namespaces from editors' signatures on the individual PR pages. Queries like this one to the Wikipedia API provided the data in an easy-to-parse form. The nominators' usernames were obtained by finding the creator of each individual peer review page (e.g. Wikipedia:Peer review/Gilbert Foliot/archive1) using API queries like this one.
  12. ^ The category January 2008 Peer Reviews does not exist.
  13. ^ User:Ruhrfisch at Wikipedia:Wikipedia Signpost/2008-09-15/Dispatches.
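
As mentioned in note 2, the reviewer counts were derived from queries to the Wikipedia API. The following minimal sketch (Python, using the requests library) illustrates the general approach of counting distinct editors from User and User talk links on a review page; it approximates the method described in the notes rather than reproducing the author's actual queries, and the page title is just an example.

```python
# Illustrative sketch only: approximates the link-counting method from the
# notes above. It fetches a review page's links via the MediaWiki API and
# counts distinct users linked from the User (ns 2) or User talk (ns 3)
# namespaces, which is where signature links point.
import requests

API = "https://en.wikipedia.org/w/api.php"

def distinct_signers(page_title):
    """Return the set of usernames linked from the User/User talk namespaces."""
    params = {
        "action": "parse",
        "page": page_title,
        "prop": "links",
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    users = set()
    for link in data["parse"]["links"]:
        if link["ns"] in (2, 3):  # 2 = User, 3 = User talk
            username = link["*"].split(":", 1)[1]  # drop the namespace prefix
            users.add(username.split("/")[0])      # drop any subpage
    return users

# Example usage with a FAC page mentioned elsewhere on this page:
print(len(distinct_signers("Wikipedia:Featured article candidates/Seabird")))
```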

Discuss this story


Kudos to Dr pda for a lot of work to put together a fine article, and to all those content reviewers who do so much invaluable work for the Wiki! Now let's get some more reviewers at all of these content review processes; promotion of articles depends on conscientious reviewers as much as it does on the content builders. SandyGeorgia (Talk) 17:23, 8 February 2010 (UTC)

Being a content builder also requires someone to be a content reviewer. For every article that someone nominates for Good Article, the nominator should be reviewing another one to keep the process balanced. For every Featured Article nomination, the nominator should review 9 to 12 more to keep the system balanced. Let's all do our share! Royalbroil 03:06, 9 February 2010 (UTC)
Nine to twelve good reviews at any content review process is asking a lot. However, I think one-for-one is a sensible approach, and two or three reviews, if done by every nominator, would be more than enough. Dabomb87 (talk) 03:37, 9 February 2010 (UTC)
I think a big problem is that the standards for prose have risen so much that I often don't feel comfortable supporting an article even if I can't find anything wrong with it. I used to be more active at FAC but after reading multiple nominations where I could not formulate a coherent opinion, I stopped. Nifboy (talk) 05:00, 9 February 2010 (UTC)
A wise old school teacher once told me, "if you want to learn something, teach it". The analogy with content reviewing on WP is compelling: nominators can do no better than shine the light on other articles if they want to shine the light on their own preparation. Reviewers typically specialise, which opens up participation to a wide group of editors. Tony (talk) 06:15, 9 February 2010 (UTC)

I think a big problem here is that reviewing is a pretty thankless task, which receives little attention from other editors (particularly GA reviewing). I've reviewed a few articles, but stopped doing it as it felt like too much work for too little reward; while being a good content writer gets you a GA or FA star, being a good reviewer gets you nothing. Perhaps there should be a Reviewers' Barnstar, to encourage more participation in this area? Or perhaps we should be asking 'who reviews the reviewers?'... Robofish (talk) 14:01, 9 February 2010 (UTC)

The Content Review Medal of Merit already exists :) Dr pda (talk) 03:09, 10 February 2010 (UTC)
Well, it's not just that. I don't bother submitting articles to FAC any more because it's far too harsh. For that matter, I've not worked on an article in any detail for this exact same reason - FAC is far too harsh, and I wrote articles to try to improve Wikipedia. I'm now solely wikignoming and clearing admin backlogs because I really don't see a place for me in contributing to quality articles any more.
As an example, I worked for two years at USA PATRIOT Act, and then I submitted it with great trepidation, only to be asked why I put it forward if I was feeling nervous. Now why would I put so much hard work into an article if someone's going to say that? Seriously, I'm not surprised that FAs are down. - Tbsdy (formerly Ta bu shi da yu) talk 16:14, 9 February 2010 (UTC)
Let me echo Tbsdy's comments: FAC has acquired a reputation of being an aggressively critical environment. (I'm not alone in this: have a look at this exchange.) That's one reason why, once upon a time, I advocated on behalf of the GA rating. What makes this critical environment even worse is that it seems to concentrate on nit-picking over style & grammar while overlooking matters of content. And finally there is the issue that the FAC process is intended to encourage better writing -- a goal which gets overlooked in the zeal to restrict the FA label to only the very best writing. No one wants to run the gauntlet if there's a strong possibility of nothing to show for all of the work & frustration. -- llywrch (talk) 21:18, 10 February 2010 (UTC)
I agree completely with the above. The last FA submission I had failed on the objections of a couple of reviewers. Their objections were based on subjective ideas of linguistic style, and were not only nitpicking, but in places demonstrably wrong. If – in the time it takes me to fight through an FA – I can write a handful of GAs that perfectly meet the needs of 99% of readers, then I'd much rather do that. The FA project is turning into a prestige project for editors chasing that shiny little star, without much consideration for the users of Wikipedia. Lampman (talk) 17:18, 12 February 2010 (UTC)

I also applaud the article for drawing attention to the review drought, but I was rather disappointed to discover that no attempt was made to include the A-class reviews that occur here in with the count. As a practical matter, while such reviews cover only a small percentage of article reviews on Wikipedia, they are still a review process. I think it irresponsible for the post to have made such a glaring omission in this story. TomStar81 (Talk) 23:54, 9 February 2010 (UTC)

Thanks for your comments. This was not an intentional slight on the good work done by A-class reviewers. As I do not belong to a WikiProject with an active A-class review department, the lack of mention of such reviews did not seem such a glaring omission (or even occur to me). A-class reviews are also run independently by the individual WikiProjects, which contrasts with the centralised nature of the processes reported on in this Dispatch. As it turns out, I believe many of the readers of the Signpost are in the same boat as me. I have looked through the 28 pages listed in Category:WikiProject A-Class Review, and found that only the Aviation, Ships, Military history, and U.S. Roads WikiProjects had active A-class review departments. The Tropical cyclones WikiProject possibly does, though it does not maintain a chronological archive of A-class reviews so it's hard to tell. The Ships A-class review, from the few articles I checked, is just a duplicate listing of the Military History. Thus there are three or four WikiProjects out of a few hundred (I think, I can't find the exact number quickly) with active A-class review departments. WP:MILHIST is probably the largest of these, with a throughput of 220 in the last year. I presume as the Lead Coordinator of WP:MILHIST this is what occasioned your comment. From a quick look, there appear to have been 144 individuals participating in MILHIST A-class reviews in 2009, compared to 140 in 2008. These numbers include 63 editors who participated in both years, and also include the nominators of articles, as they were not listed in a standardised form until during 2009. Thus the participation in MILHIST A-class reviews has remained stable, though we already knew that MILHIST is an exceptional WikiProject. Dr pda (talk) 03:09, 10 February 2010 (UTC)
"I presume as the Lead Coordinator of WP:MILHIST this is what occasioned your comment." To a certain degree yes, but its more about being thorough: we are not the only project supporting A-class reviews, though we are perhaps the best known; but my point was that the story was missing a vital link. I bring it up because in my capacity as a wikipedian I learned to appreciate well researched material, yet here I saw a glaring error in the omission of A-class. Even if it was only a side note, a few words about the process in this signpost story would have been appreciated to cover the 2-4 of us that do maintain A-class. TomStar81 (Talk) 06:02, 10 February 2010 (UTC)[reply]
To address your comments, I have now inserted a section on A-Class reviews into the Dispatch. Dr pda (talk) 11:17, 10 February 2010 (UTC)
Dr pda, the reason that there seems to be an overlap with ships and MILHIST is that Ships/MILHIST/Aviation have reciprocal cross-listing agreements with each other which mean that the article need only pass ACR at one of the three projects and it would be accepted by the other projects instead of having to undergo a review at each project which would be extremely time consuming for an article written on an aircraft carrier for example. -MBK004 07:18, 10 February 2010 (UTC)
Thanks. I assumed it was something like that. Dr pda (talk) 11:17, 10 February 2010 (UTC)

It strikes me that the type of reviewing has changed. Looking back at one of my early FAs, Wikipedia:Featured article candidates/Seabird, there are plenty of drive-by supports (people who come in and support without nitpicking too much). More recent FAs have fewer actual reviewers, but the reviewers tend to take a fine tooth comb to the article, seemingly much more so than in the past, although I concede this may be because it is so much harder to get a peer review to iron out problems. In all my recent reviews the main sticking points are usually related to prose and finicky style stuff. I would guess that a tightening of prose standards puts people off - I can't contribute anything meaningful in that area, so I'm scared off reviewing or offering opinions. That said, I can offer opinions on content, so I promise to start reviewing again at least from that point of view and let our more literate and stylish editors worry about that stuff. Sabine's Sunbird talk 03:42, 10 February 2010 (UTC)

This is very interesting, thanks! Is there any chance of getting the figures for 2007? It is difficult to draw reliable conclusions from only two data points. --Tango (talk) 09:49, 10 February 2010 (UTC)

I don't really have the time to do this at the moment. You may find some useful information (though not the number of reviewers) in the Dispatch I wrote this time last year: Wikipedia:Wikipedia Signpost/2009-02-16/Dispatches. The basic trends noted there were that FAs were at about the same level, FLs and GAs were substantially up, and PRs were slightly down for 2008 compared to 2007. Dr pda (talk) 11:17, 10 February 2010 (UTC)
  • This article has inspired me. I will try to remember to become an active member of the reviewing community. I'm sure it will take me some time to be particularly valuable at it but I'll give it a shot. --bodnotbod (talk) 15:15, 10 February 2010 (UTC)
  • I see the crowd that wants to endorse lower standards of writing has arrived.<Narky button switched off> The oldest trick in the book, frequently deployed in the early days—when, let's face it, we let through crap with a wave of the hand—is to complain that the reviewers engage in anally retentive nit-picking of grammar and style. I'm sorry, you would be the first to complain if a film had even tiny editing glitches; film editors in training have their work nit-picked so they know how to avoid such glitches. I can sniff a slippery slope towards mediocrity. High standards in anything, whether the use of images, copyright compliance, verification, prose, or wikilinking, involve nit-picking by oneself or others. End of story. Tony (talk) 02:51, 13 February 2010 (UTC)
    This response only makes our case presented above about the atmosphere of the FA review & the attitude of its regulars. Methinks the lady protesteth her virtue too much. End of story. -- llywrch (talk) 05:51, 13 February 2010 (UTC)
  • If only they were improving the quality of the articles, that would be great. Unfortunately, the FA project has become a haven for reviewers who like to hold articles hostage until they can force through their own, highly subjective ideas of what an article should look like. No, the story is not over, but it might soon be if the project keeps alienating participants at this rate. Lampman (talk) 06:22, 13 February 2010 (UTC)
  • Funnily enough, there's a continual torrent of nominations, to the extent that the reviewers have great trouble managing it. I see no evidence that editors who want to aspire to the highest standards of the project are turning away in droves. The standard of nominations has, overall, risen over the past few years. This is very welcome, and we have the hard work and devotion of the existing reviewers and delegates (as well as Raul's example) to thank for this. If you have a personal objection to parts of the process, you're welcome to take it up on WT:FAC. Tony (talk) 08:08, 13 February 2010 (UTC)
  • Well that is simply not true. The promotion rate remains more or less constant (c. 55%), so the number of nominations has decreased at about the same rate as the number of promotions. I cannot speak for all who chose to withdraw, but from the above, and several other discussions, it's safe to assume that the insular and alienating nature of the project contributes. As for me, I do have "personal objections" to the process, but it doesn't really matter as long as the GA project does pretty much the same job much more efficiently, and without all the grief. Admittedly there's no bronze star or main page placement for GAs, but that's all about putting one's personal ambitions aside and thinking of what's best for Wikipedia. Lampman (talk) 12:12, 13 February 2010 (UTC)
  • To Tony: Summary:
  • Annual increase in FAs down 37%
  • FAC reviews down 26%
  • FAC reviewers down 36%
  • FAC "nominators only" up 250%
  • FAR participants down 32%
So your statement that the promotion rate "remains more or less constant (c. 55%)" is meaningless. If 2 articles were submitted and 1 passed, the "promotion rate" would still be 50% (about 55%). Xme (talk) 14:04, 13 February 2010 (UTC)
Tony, I have yet to see any FA sweeps drive. GA Sweeps is almost done and we managed to round up any that fell through the cracks. Unless some drastic measure is taken by the FA team, I daresay that we can find FAs that can't even meet today's GA standard. And yes, I also felt that the FAC/FAR environment is toxic enough that I simply avoid it. They focused too much on tiny details like the en-dash vs. em-dash or comma vs. semi-colons as opposed to focusing on the broader picture, such as the validity of the contents. OhanaUnitedTalk page 06:54, 14 February 2010 (UTC)
  • I've noticed several interesting points in this discussion. One is that while four people here have been critical of FA, no one has objected to the general practice of reviewing articles in order to rate them as FAs, and only one person has come forth so far to defend the current way it is done -- Tony. No one else has bothered to step up & assert that the FA review made the article stronger. Instead, it is seen as something one must endure -- a gauntlet to run -- in order for an article to be so rated. Tony appears to be very much in the minority here. Secondly, the criticism is not whether the process is harsh & adversarial: rather that it appears to be. Tony has not responded by saying that this appearance is erroneous, but by endorsing the very concept of an adversarial process (his words, "film editors in training have their work nit-picked so they know how to avoid such glitches") & claiming that we must either endorse this or "let through crap with a wave of the hand"; our choice is limited to one extreme or the other. Third & lastly, his suggestion that anyone with criticisms take this up at WT:FAC is disingenuous: when I have tried to raise detailed objections in the past with Tony's judgments, they have either been ignored or dismissed with a terse & condescending retort. I don't know if his input is an example of simple arrogance -- or an expression of denial over the very real likelihood that the Featured Articles process is becoming irrelevant. -- llywrch (talk) 06:49, 14 February 2010 (UTC)
I think the fact that you're all here suggests that the bronze star is a highly sought-after reward, and the fact that we typically have far too many nominations on the page to cope with is a testament to the need for more reviewers and the amount of feedback (including repeat visits) that in many cases is required to satisfy the criteria. No one ever said that good writing is easy; nor skilful verification, image use, wikilinking, formatting, and in some cases balance. FAC is a learning process for both nominators and reviewers. This is why we are trying to encourage more nominators to do some reviewing in a chosen field, in which they might be interested in gaining expertise. We need you. Tony (talk) 08:30, 14 February 2010 (UTC)
And you have the answer above why people won't be responding to your requests. Yet you seem to refuse to listen. I can only hope the other participants in the FA process don't follow your example. -- llywrch (talk) 22:28, 14 February 2010 (UTC)
Hi everyone. I think both Tony and the "other editors" have valid points (and no, this is not just a cop-out). It's a bit extreme to say the FA process is becoming "irrelevant" when the number of open FACs is usually around 45 or higher, and reaches 60 fairly regularly (especially during the holiday season). I know WP:FAS says there were only 35 FACs at the start of this month, but that was following an atypical week in which 22 articles were promoted to FA status. Until we have a dearth of nominations and surplus of reviewers rather than the other way around, FA is not in danger of fading into irrelevancy (having TFA helps too). On the other hand, it is true that FA processes tend to focus on the nitty-gritty (i.e. style and prose) more than some would like. However, that is partially a product of many substantial reviews being done by a small group of "FA regulars" who are familiar with the general FA standards but not necessarily the subject matter. If a literature editor reviews a math FAC, they will limit themselves to commenting on things they are capable of criticizing: prose, punctuation, citation formatting, alt text etc. If you are a lit major who never got past college algebra, would you feel qualified to comment on whether the fundamental theorem of calculus is comprehensive or if it uses the highest-quality sources available? That said, I think FA needs to do a better job of reaching out to other parts of Wikipedia (i.e. expert editors and WikiProjects); I've seen quite a few editors who have found FA to be too insular a community. Dabomb87 (talk) 02:02, 15 February 2010 (UTC)
I do think that most articles benefit from going through FAC and being worked over by fresh-eyed editors. See the changes that took place at 2007–2008 Nazko earthquakes, for example. Dabomb87 (talk) 02:06, 15 February 2010 (UTC)
Now I know why Hitler accused me of being a pompous little ass-wipe. Tony (talk) 06:30, 15 February 2010 (UTC)
According to Godwin's Law, because Tony mentioned Hitler this discussion is at an end & he is the loser. Which is a pity, since DaBomb87 made some thoughtful points. I honestly don't know why Tony invoked Hitler at this point: no one arguing against him claimed that FA was being run in a dictatorial manner. And if I wanted to compare Tony's opinion expressed here to any political parallel, it would be that he believes an aggressively vicious & adversarial review is good, & any other approach is doubleplusungood. -- llywrch (talk) 23:16, 15 February 2010 (UTC)
Ho hum ... it would be thus if I hadn't written the spoof myself. Tony (talk) 02:03, 16 February 2010 (UTC)

It's not worth it for a POV-pusher to spend an extra 10-15 hours in the modern era to get the prose and formatting etc done when they could be writing more tripe. The 30k TFA hits were a big carrot in the old days, and some guys started their FACs with soapboxing comments about why the historical incident in question was important, with a strong nationalist bent etc.... Still, as for toxicity, 2006 was a very turbulent year in terms of political wiki-riots and I remember some people who were very famous headkickers, thug admins and enforcers in those days complain that the now way-outdated 2006 standards were hard and that FAC was "nasty". Those were the days of 1-line drive-by supports when it was common for 10 guys from one wikiproject to just turn up and pile on; I remember one guy casting about 50 votes in three hours. Gee, some of those thug admins in those days were soft as jelly. Thank goodness there aren't FAs like in those days that just copied and paraphrased a few web-bios/encyclopedia articles and mixed them together. YellowMonkey (vote in the Southern Stars photo poll) 02:54, 16 February 2010 (UTC)

Number of reviews or reviewers is not a good measure, IMO. Reviews in past years were a lot shorter; just a support or oppose and a sentence or maybe two. Reviews now tend to be much more detailed. A word count per review page might be a good thing to also look at. I would not be surprised at all if the effort, expressed in word count, is trending higher. Number of edits to articles and byte count changed in articles during the review period would also be a good measure. But what constitutes a review has changed so a simple head count is not comparing the same thing. --mav (Urgent FACs/FARs/PRs) 00:05, 21 February 2010 (UTC)[reply]

Thanks for your comments. The main thrust of the article was that the backlogs in various content review processes experienced in 2009 were due to fewer people participating. For this, the number of reviews/reviewers is an appropriate measure :) The article was also only comparing 2009 to 2008; my impression, for the processes I follow, is that the level of detail in reviews has not changed that much over this period. Certainly reviews now tend to be much more detailed than in 2006, say; however, this is beyond the scope of the comparison in the article. Your point that the total reviewer effort as measured in words per review page may be increasing would also imply that fewer people are doing more work, which is in agreement with the thesis that there is a shortage of reviewers. By way of some numerical data, I have calculated the average byte count for the FAC review pages in 2008 and 2009 (this is easier to determine than word count, as it is stored in the database). The results are as follows:
  • Total amount of FAC review: 21 MB in 2009, 28 MB in 2008
  • Average length of review page for successful FACs: 2009 mean = 25 kB (s.d. 19 kB); 2008 mean = 23 kB (s.d. 20 kB)
  • Average length of review page for unsuccessful FACs: 2009 mean = 19 kB (s.d. 16 kB); 2008 mean = 20 kB (s.d. 27 kB)
Thus the total effort trended downwards (from 28 to 21 MB), contrary to your expectation. The average length of review pages was also fairly constant from 2008 to 2009, as I had assumed. (The s.d. for unsuccessful FACs in 2008 is distorted by a few abnormally long FACs, e.g. Roman Catholic Church.) The other review measures you propose are problematical. Counting the number of edits won't give fair results if some editors prefer to make multiple edits in individual sections while others prefer to make one single edit to the whole article. Net byte count change during the review process would also not give useful results if the amounts of additions and deletions were comparable, and also would not reflect restructuring of the article by rearrangement of existing text. Dr pda (talk) 22:50, 21 February 2010 (UTC)
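
A minimal sketch of this byte-count approach follows; it is not Dr pda's actual script. It assumes the current page length reported by the API's prop=info is a reasonable stand-in for the stored byte counts discussed above, and the second example title (with "/archive1") is a hypothetical archive name.

```python
# Minimal sketch, not Dr pda's actual script: fetch review-page byte counts
# from the MediaWiki API and compute the mean and standard deviation.
import statistics
import requests

API = "https://en.wikipedia.org/w/api.php"

def page_lengths(titles):
    """Return the byte length of each existing page in `titles` (max 50 per request)."""
    params = {
        "action": "query",
        "titles": "|".join(titles),
        "prop": "info",
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    return [p["length"] for p in pages.values() if "length" in p]

# Example with two FAC pages; the "/archive1" title format is an assumption.
sizes = page_lengths([
    "Wikipedia:Featured article candidates/Seabird",
    "Wikipedia:Featured article candidates/Roman Catholic Church/archive1",
])
print(statistics.mean(sizes), statistics.stdev(sizes))
```
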
Thanks for your clarification and data-based approach. It is better to know the reality and (hopefully) we will face it, rather than ignore it. Xme (talk) 23:48, 21 February 2010 (UTC)
Thanks for the ad hoc analysis. The trend by all measures appears to be going in the wrong direction. This is worrying. --mav (Urgent FACs/FARs/PRs) 03:34, 22 February 2010 (UTC)

The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0