The Signpost

In the media

Thirst traps, the fastest loading sites on the web, and the original collaborative writing

By Smallbones and HaeB

"Slate" celebrates encyclopedic selfies

A thirst trap.

In Slate, Annie Rauwerda (of Depths of Wikipedia fame) explains why "On Wikipedia, Anyone Can Be a Model". The article focuses on LittleT889, who created the article thirst trap ("a type of social media post intended to entice viewers sexually") and illustrated it with a shirtless selfie of himself that has since been "viewed almost a million times" (although it was recently replaced in the article). What's more, "He adds photos of himself to all sorts of encyclopedically relevant topics, like water bottle flipping, Nae Nae, and my favorite, the Floss (dance) article, where he wears sunglasses indoors and furiously shakes his hips in front of three guitars and a bongo drum", as well as Running man (dance), Dougie, and Naruto run.

Rauwerda also managed to get in touch with other selfie contributors, such as a "20-year-old Russian university student [who said that] once he took a picture of his eye so astonishingly beautiful that an Instagram post wasn’t enough — he needed to put it on Wikipedia," and "a retired biology teacher in Germany [who] realized that Wikipedia had no good photos of female fingers [and] uploaded a snap of her own hand to Wikimedia Commons", which now illustrates the article finger. In general, the article observes that Wikipedia's "photos have an unvarnished feel and an unmistakably human charm" and that it's "immediately obvious that Wikipedia’s models are real people, not actors." It also recalls earlier media coverage of similar examples, such as a couple who has graced the high five article since 2008, and last year, now married with kids, recreated the shots for an online magazine (see earlier coverage: "The king and queen of the high 5").

Lastly, Rauwerda calls on Slate's readers to consider contributing themselves: "And even though Wikimedia Commons hosts more than 100 million pieces of media, it has some stunning gaps. There’s a big list of requested images, and some of the items are shockingly quotidian, like 'half-up hairstyle,' 'business women shaking hands,' and 'tripping' (go ahead, fall on your face for the sake of free knowledge)."

Wikipedia is the second-fastest website in the US

"TechNewsWorld" reports that "Craigslist, Wikipedia, and Zillow are the fastest-loading U.S. websites on the internet, according to a study released Monday by web design company DigitalSilk." Wikipedia came second "with an average load time of 1.40 seconds (1.6 mobile, 1.2 desktop)", well ahead of sloths such as Instagram ("The site on mobile takes a whopping 6.7 seconds to load and 4.2 seconds on desktop") or Google ("While it had a respectable mobile load time of 1.1 seconds, its desktop time of 4.6 seconds bloated the search giant’s overall performance").

This success is certainly in part due to the longtime work of the Wikimedia Foundation's recently disbanded Performance team, but also, according to one of the study's authors, due to Wikipedia's simpler design: "It’s interesting to see how websites like Wikipedia and Craigslist, which have barely changed their design and have remained largely text-based, topped our list, and the popularity of these sites shows that sometimes simplicity can work."

(The TechNewsWorld article doesn't link to the actual study and doesn't provide much detail about its methodology. But in a similar study featured by ZDNet earlier this year, DigitalSilk had used an online tool called Pingdom Website Speed Test.)
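Since neither article details the methodology, here is a minimal sketch of what a basic load-time measurement involves. The `measure_load_ms` helper and its defaults are illustrative only, not how Pingdom or DigitalSilk actually measure; in particular, this times only the HTML download, not rendering or sub-resources, so it will understate full page-load times.

```python
import time
from urllib.request import urlopen

def measure_load_ms(url, fetch=urlopen, trials=3):
    """Average wall-clock time in milliseconds to fetch a URL over several trials.

    Note: this times only the raw HTML transfer, not rendering, scripts,
    or images, so real page-speed tools will report larger numbers.
    """
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        with fetch(url) as resp:
            resp.read()  # drain the body so the transfer actually completes
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# Example (requires network access):
# print(measure_load_ms("https://en.wikipedia.org/wiki/Main_Page"))
```

Averaging over several trials smooths out network jitter, which is presumably one reason speed studies report averages rather than single measurements.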

AI finding references

Nature News notes the publication of "Improving Wikipedia verifiability with AI" in Nature Machine Intelligence. "Wikipedia lives and dies by its references", the news article states, "but sometimes, those references are flawed." The neural-network-based system described in the academic study, called SIDE, checks whether a Wikipedia reference actually supports the claim it is attached to, and proposes replacements for the weaker references.

The paper is open access, licensed under the Creative Commons Attribution 4.0 International License, and can be downloaded at the above links.

Crowdsourcing 1858–1923

The Washington Post reviews the book Dictionary People by Sarah Ogilvie, a former lexicographer for the Oxford English Dictionary. The review, titled "The most influential crowdsourcing project happened long before Wikipedia", focuses, like the book, on the roughly 3,000 OED contributors who sent in quotations showing words in use in printed texts. Only a dozen or so are actually named in the review, but 97 of these unpaid volunteers are recorded at Wikipedia's List of contributors to the Oxford English Dictionary.

Eadweard Muybridge hasn't yet made our list, but two other murderers, Sir John Richardson (naturalist) and William Chester Minor, do. Margaret Murray, who later became an Egyptologist and wrote The Witch-Cult in Western Europe, contributed 3,800 quotations from the Douay–Rheims Bible while growing up in India. There are many women among the 3,000, including Karl Marx's daughter Eleanor Aveling and a lesbian couple who wrote under the name Michael Field.

The implicit comparison of OED contributors to Wikipedians in the book review's title might seem exaggerated at first glance; after all, some of the OED contributors are quite unusual. But Ogilvie does play at comparing OED contributors with Wikipedians in the book's Introduction, which I just had to read on Amazon after reading the review. I'll have to read the rest of the book before drawing any firm conclusions.

In brief

Read Wikipedia before starting this

Do you want to contribute to "In the media" by writing a story or even just an "in brief" item? Edit next week's edition in the Newsroom or leave a tip on the suggestions page.


Discuss this story

"Slate" celebrates encyclopedic selfies

Wikipedia is the second-fastest website in the US

  • OK, I got it. With no disrespect to the performance team (which appears to have been fairly small and effective), I took the main reason for the speed of Wikipedia to be the simple design, as emphasized in the "TechNewsWorld" article. Smallbones(smalltalk) 17:44, 23 October 2023 (UTC)[reply]

AI finding references

This item could have benefited from a bit more context, e.g. the fact that the paper was already published last year in preprint form and received media attention back then. We covered it in both "In the media" ("Facebook experiments with Wikipedia fact-checking") and "Recent research" ("Facebook/Meta research on 'Improving Wikipedia Verifiability with AI'") at the time, and the current story doesn't really offer any new information about this research project. That said, we might still run a fuller review in "Recent research" now that the published version of the paper is out. Regards, HaeB (talk) 19:36, 23 October 2023 (UTC)[reply]

Yes @HaeB: - technically this is well above my pay grade ($0), and a short paragraph couldn't possibly cover it as well as an article in "Recent research" could; it would be great for this. Smallbones(smalltalk) 20:10, 23 October 2023 (UTC)[reply]

To follow up on a remark by Piotrus (moving here, as a more suitable location):

unlike most coverage (and research), this seems actually useful. The underlying research is here: . The researchers say the code to reproduce the study is somewhere here: . Can anyone convert it into a usable tool, assuming this has not been done already?

I agree this could be super interesting. Two things to be aware of though:

  • The models alone appear to require two terabytes of disk space, so the server requirements are not quite trivial (e.g. it's not clear whether one could make this a Toolforge tool). That said, perhaps the Foundation's new "Lift Wing" machine learning infrastructure could be open to hosting such projects?
  • In July 2022, the lead author stated that "Note, Side is a POC [proof of concept] that shows the technology is there. To build a production system there is still lots to do. :)", and the code repository hasn't seen new commits since then.

Regards, HaeB (talk) 21:03, 23 October 2023 (UTC)[reply]

"Empirically we found that only using the first sentence in front of the claim and also adding the Wikipedia article’s title to the query did yield the best BM25 results." I know from my experiments that their approach, undoubtedly the best of what they tried, doesn't isolate the entire claim being sourced more than half the time. Associating a reference with the article text it is intended to support is very difficult even after examining the source in full. Sandizer (talk) 16:32, 25 October 2023 (UTC)[reply]
Interesting! Yes, I have been wondering what the state of the art is regarding this kind of entailment problem. This paper highlighted in the July issue of "Recent research" (see also talk page there) seemed to have encountered more difficulties than the authors of the Facebook/Meta paper. (By the way, in case you are interested, we would still like to run a fuller review of the paper in "Recent research", which could touch on the various issues mentioned above.) Regards, HaeB (talk) 03:42, 27 October 2023 (UTC)[reply]
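The query-construction detail quoted above (sentence preceding the claim, plus the article title) can be made concrete with a small pure-Python Okapi BM25 scorer. This is a sketch for illustration only: the query and candidate passages below are invented, and the paper's actual retriever runs over a large web-scale corpus with a tuned pipeline, not a toy document list.

```python
import math
import re

def tokenize(text):
    """Lowercase word tokenizer (no stemming, deliberately simplistic)."""
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each candidate document against the query."""
    doc_toks = [tokenize(d) for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in doc_toks) / n
    # Document frequency of each term across the candidate set.
    df = {}
    for toks in doc_toks:
        for w in set(toks):
            df[w] = df.get(w, 0) + 1
    scores = []
    for toks in doc_toks:
        s = 0.0
        for w in set(tokenize(query)):
            if w not in df:
                continue
            idf = math.log((n - df[w] + 0.5) / (df[w] + 0.5) + 1)
            tf = toks.count(w)
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

# Query built the way the quoted passage describes: the sentence in front of
# the claim, plus the article title (both made up here for illustration).
query = ("Thirst trap. A thirst trap is a type of social media post "
         "intended to entice viewers sexually")
candidates = [
    "Glossary of social media terms, including posts meant to attract attention",
    "A recipe for sourdough bread with a long fermentation schedule",
    "History of the printing press in early modern Europe",
]
scores = bm25_scores(query, candidates)
best = candidates[scores.index(max(scores))]
```

Even this toy version shows why the query matters so much: BM25 only matches surface tokens, so if the wrong sentence is fed in as "the claim", the retriever has no way to recover, which is consistent with the difficulty Sandizer describes.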

Breaking news

See and The Hill. Elon Musk offers $1 billion if Wikipedia will change its name to "Dickypedia" for one year. @JPxG: I got dibs on this story for the next issue! Smallbones(smalltalk) 20:12, 23 October 2023 (UTC)[reply]

Weirder things have actually happened. ☆ Bri (talk) 20:39, 23 October 2023 (UTC)[reply]
And don't forget Joe, Montana. Or the fact that somebody actually bought Twitter for $XX billion. Smallbones(smalltalk) 20:56, 23 October 2023 (UTC)[reply]
Don't we have WP:POST/TIPS and/or the Newsroom talk page for this kind of remark?
Anyway, thanks in advance for your sacrifice ;) Some background on what triggered this one-sided schoolyard row: [1]. And the Guardian is making some hay of it too: [2]. Regards, HaeB (talk) 20:50, 23 October 2023 (UTC)[reply]


The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0