The Signpost

The GA Trophy awarded at the end of a Good Article Cup
Good articles can be identified by a green plus symbol. The plus-minus motif was not the first suggested; other ideas included a thumbs up, check mark, or ribbon.
Backlog during the third GA Cup with a 15-day simple moving average
Backlog from the end of the Second GA Cup to the end of the Third GA Cup. The blue line indicates when the Third GA Cup was announced and the green line when it began.

Discuss this story

The results of gamification

The assertion that "We now know that the GA Cup does not lead to 'drive-by' passes" has no basis in fact. It may be true that insufficient reviews occur at the same rate, so the Cup doesn't encourage the practice, but let's remember that the number of bad reviews is increasing at the same time reviews, generally, are increasing. Doing GA reviews sucks because it's actual work; I have more fun doing GOCE drives. I think the Good Article WikiProject is key to objectively improving content, whereas GOCE is by and large just fixing word salad, which almost anyone can do. Efforts like the GA Cup are our collective means of holding these articles to stringent standards. GA status is often, though not always, a precursor to pursuit of A-class or FA. I remain concerned that these contests (of which I am currently a part) attract editors who are still unfamiliar with proper reviewing. It's demoralizing to see bad reviews done, especially when you're competing for points. WikiProject Articles for Creation has held eight drives since 2012. The last drive saw a lot of poorly done draft reviews, and the results were so skewed that the WikiProject hasn't held another drive since 2014. I would hate to have these drives ruined by bad editing, and we can only rely on the judges of the competition to stay alert to malfeasance. Chris Troutman (talk) 21:17, 26 November 2016 (UTC)

I'm confused by your point. If you're saying judges should stay vigilant about quick passes, I agree. That is not exclusive of the fact that there's no evidence to support a claim that quick passes are increasing; indeed my argument is based in fact. To quote the paragraph prior to your quotation: Comparing five months before with the four months during the first GA Cup, there is no significant difference between the pass rates during or before the GA Cup (t(504.97) = −1.788, p = 0.07). In fact, the pass rate may have actually decreased slightly, from 85% beforehand to 82% during the Cup, and the p-value is close to significance. If more drive-by passes were occurring and more reviews were happening, then we would see a higher rate of passage during the GA Cup. There is no significant difference. So either there was no change in the number of reviews or there was no change in the rate of drive-by passes. There clearly is an increase in the number of reviews (that's why the backlog decreased), so the only remaining explanation for the result is that there was no change in the rate of passes.
This, admittedly, is an operational definition that doesn't fully get at the answer. It assumes that there was not a substantial number of quick passes outside of the Cup, and that whatever the ratio of quick-pass to non-quick-pass reviews was outside the Cup is the same as during the Cup. You can dispute these assumptions, but based on all the data available to me there is no evidence that the GA Cup causes drive-by passes. That claim is far from having no basis in fact, and is far more factual than anecdotal gripes. Wugapodes [thɔk] [ˈkan.ˌʧɻɪbz] 04:00, 27 November 2016 (UTC)
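The comparison quoted above appears to be a Welch (unequal-variance) two-sample t-test on per-review pass/fail outcomes, judging by the fractional degrees of freedom. As a sketch of the mechanics only, here is that test computed from scratch on invented counts (the real GA Cup data isn't reproduced here, so the numbers below won't match the article's t(504.97) = −1.788):

```python
import math

def welch_t(p1, n1, p2, n2):
    """Welch's t statistic, degrees of freedom, and an approximate
    two-sided p-value for two Bernoulli samples, given pass
    proportions p1, p2 and sample sizes n1, n2."""
    # Unbiased sample variances of the 0/1 outcomes
    v1 = p1 * (1 - p1) * n1 / (n1 - 1)
    v2 = p2 * (1 - p2) * n2 / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (p2 - p1) / math.sqrt(se2)
    # Welch–Satterthwaite degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    # Normal approximation to the p-value (reasonable since df is large)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, df, p

# Hypothetical review counts -- NOT the actual GA Cup data
t, df, p = welch_t(0.85, 300, 0.82, 280)
print(f"t({df:.1f}) = {t:.3f}, p = {p:.3f}")
```

The sign convention matches the argument above: with the "during" proportion below the "before" proportion, t comes out negative, and a p-value above 0.05 means no significant difference in pass rates.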

I don't really buy into the "gamification" of this (and various similar "challenges", "drives", etc.). Maybe it really does motivate a few people, but not everyone feels competitive about this stuff. The very nature of GA, FA, DYK, ITN, etc., as "merit badges" for editors to "earn", and the drama surrounding that, led to a rancorous ArbCom case recently, and cliquish behavior at FAC has generated further pointless psychodramatics. We really need to focus on the content and improving it for readers, not on the internal wikipolitics of labels, badges, and acceptance into politicized editorial camps.

It might be more practical and productive to have a 100-point (or whatever) scale and grade articles on it against a fixed and extensive set of criteria, with FA, GA, A-class, B, C, Start, and Stub all assigned as objectively as possible based on level of compliance with these criteria (and resolving the tension of exactly what A-class is in this scheme, which seems to vary from "below GA" to "between GA and FA" to "FA+" to "totally unrelated to GA or FA"). There are quite a number of GA, A, and probably even FA quality articles that have no such assessments, because their principal editors just don't care about (or actively don't care for) the politics and entrenched personality conflicts of our article assessment processes as they presently stand. I, for one, will probably never attempt to promote an article to FA myself directly, because of the poisonous atmosphere at FAC (which is now an order of magnitude worse than it was when I first came to that conclusion several years ago). I guess the good news is I'll have more time for GA work. :-) The more that FA, and some of the more rigid and too-few-participants A-class processes, start to work like GA historically has, the better. If, as Kaldari suggests below, the opposite is happening, with GA sliding toward FA-style "our way or the highway" insularity, then you can expect negative results and declining participation.  — SMcCandlish ¢ ≽ʌⱷ҅ʌ≼  09:48, 2 December 2016 (UTC)
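The 100-point idea above reduces to a monotone mapping from a compliance score to an assessment class. A minimal sketch, with entirely invented cut-off values (actual thresholds would need community agreement, and resolving where A-class sits is exactly the open question raised above):

```python
# Illustrative thresholds only -- these numbers are made up, not policy.
THRESHOLDS = [          # (minimum score, assessment class)
    (95, "FA"),
    (85, "A"),
    (70, "GA"),
    (50, "B"),
    (30, "C"),
    (10, "Start"),
    (0, "Stub"),
]

def assess(score):
    """Map a 0-100 compliance score to the highest class whose
    minimum the score meets."""
    for minimum, cls in THRESHOLDS:
        if score >= minimum:
            return cls
    return "Stub"

print(assess(88))  # -> "A" under these illustrative cut-offs
```

One design note: a single ordered threshold table makes the "A-class is between GA and FA" interpretation explicit, and moving one number in the table is all it takes to try a different interpretation.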

@SMcCandlish: I like the theory of a 100-point scale, but I'm not sure how it would cope any better than the current system with the inherent uncertainty of what a "complete" article would look like: you need to know what the end result is before you can start having a percentage of it. For instance, the Tower of London is an FA, reflecting the huge amount of material that's been published about it, whereas its "sisters" Baynard's Castle and Montfichet's Castle no longer exist, and in the latter case even the exact location is not entirely certain. I took both of them up to a pretty decent standard back in 2010, bringing together more information on them than was available in any one place on the web at the time, and probably anywhere bar the Museum of London library, but they're still tiny compared to the Tower of London article. As it happens, the former was GAN'd by someone else in 2013 and passed with minor copyedits; the latter just needs some minor work on the lede and formatting (that reminds me, I need to dig out some photos I took ages ago...). I wasn't that bothered about taking them through the GA process, and certainly have no interest in taking them to FA. Actually I think the view of GA as "a precursor to pursuit of A-class or FA" is part of the problem; to my mind we need a lot more emphasis on the good as opposed to perfection. After all, under the guidelines of many projects even a GA article is "Useful to nearly all readers, with no obvious problems; approaching (but not equalling) the quality of a professional encyclopedia", but one FA takes as much time as ??3-5?? GAs? In fact I'd argue the real focus should be more on avoiding bad articles than polishing the already pretty good ones.
I've a little mini-project on the go where I've started getting all the articles in Category:Towns in Kent to a decentish minimum standard, one that is still nowhere near GA but at least avoids the real horrors. My working definition is restructuring them with all the sections of WP:UKTOWNS and some text and a reference in each section, plus linking in any nearby articles. So going from, say, this to this. Not perfect, but it's gone from an incoherent mess to somewhere in the right direction. Perhaps we could encourage people to work on the weakest articles in a set by extending the idea of GA/FA topics to C-class and B-class topics?

Age of nominations

This is a very interesting read! One thing that I was looking for here that didn't get discussed was the effect of the Cup on the age of nominations: are nominations now sitting in the queue for less time than they were before these competitions started? (For those who don't know, I'm a judge in the Cup, after having competed in it the first year.)--3family6 (Talk to me | See what I have done) 05:13, 27 November 2016 (UTC)

Reviewer burnout

I took part in the first GA Cup. It was a new idea with a good purpose, and I felt I wasn't pulling my weight: I was putting more GA nominations on the pile than I was taking off it by reviewing. Towards the end of the Cup, I got burned out and reduced my activity; I still do the odd review, but not as many as I used to. I know some other GA stalwarts have also stopped reviewing. How can we reach out to these people and get them to participate in reviews again? Ritchie333 (talk) (cont) 14:59, 27 November 2016 (UTC)

I quit doing GA reviews when it became a tedious regimented process. I'm glad we have high standards for quality, but I miss the relatively informal process that we used in the old days. Kaldari (talk) 21:42, 29 November 2016 (UTC)

Great article

I just wanted to say that this was a really interesting read. As someone who wasn't around for the early days of the project, I'd love to learn more about how some of the other now well-established processes came to be. Sam Walton (talk) 16:13, 28 November 2016 (UTC)

I second that. Although I was an active editor back in 2006, I wasn't at all involved in the GA process, so it's interesting to read a succinct history. WaggersTALK 08:47, 29 November 2016 (UTC)


The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0