The Signpost

[Lead image: File:Luddite.jpg (author unknown, public domain)]

Are Luddites defending the English Wikipedia?

By Svampesky and Adam Cuerden

In response to the increased prevalence of generative artificial intelligence, some editors of the English Wikipedia have introduced measures to reduce its use within the encyclopedia. The use of images generated by text-to-image models in articles is often discouraged unless the context specifically relates to artificial intelligence. A hardline Luddite approach has not been adopted by all Wikipedians, however, and AI-generated images are used in some articles in non-AI contexts.

Paintings in medical articles

The image guidelines generally restrict the use of images that are solely decorative, as such images do not contribute meaningful information or aid the reader in understanding the topic. Despite this restriction, paintings appear to be permitted in medical articles as human-made artistic interpretations of medical themes, offering historical and cultural perspectives on those topics.

WikiProject AI Cleanup

WikiProject AI Cleanup searches for AI-generated images and evaluates their suitability for an article. If any images are deemed inappropriate, they may be removed to ensure that only relevant and suitable images are kept in articles.

Perhaps the worst of the removed images are the "scientific" ones, even though they affected only one article, Chemotactic drug-targeting:

It may also be worth considering what kind of AI art is being left in articles by the WikiProject:

AI-generated images on Wikipedia articles in non-AI contexts

Note: The following section is accurate as of the day before publication.

Policies vary between different language versions of Wikipedia. Differences in opinion among Wikipedians have resulted in the inclusion of text-to-image model-generated images on several Wikipedias, including the English Wikipedia. Many Wikipedias use Wikidata to automatically display images, a process that takes place beyond the scope of local projects.


Discuss this story

These comments are automatically transcluded from this article's talk page. To follow comments, add the page to your watchlist. If your comment has not appeared here, you can try purging the cache.

Well, I liked the Lincoln/Anachronism image. Other than that I didn't see 1 AI image here that I liked or would have found useful in any encyclopedia. Smallbones(smalltalk) 21:16, 26 September 2024 (UTC)[reply]

It looks like there's a disagreement as to whether the image should be included at Twin paradox. ☆ Bri (talk) 21:38, 26 September 2024 (UTC)[reply]

No discussion of diffusion engine-created images is complete without noting that the companies that own and control such programs rest their software on a foundation of unpaid labor: the unlicensed use of artists' creative work for training the software. The historical Luddites were smeared as technophobes as a way to deflect their concerns about labor expropriation by a wealthy class who held the means of production, and at least this 'Luddite' thinks we as a project should stay far away from these images when there are still all too many unresolved issues around labor and licensing underlying much of the software involved. Hydrangeans (she/her | talk | edits) 22:25, 26 September 2024 (UTC)[reply]

Personally, I think the Willy's Chocolate Experience one is the only one that clearly belongs here, since part of that fiasco was that it used AI art and AI scripts for pretty much everything it did. Listenbourg seems to be an attempt to use AI to generate things similar to things seen elsewhere on the internet that were generated by AI, which feels a step too far, and the rest... I mean, there's moral reasons to object to AI, but perhaps a more convincing argument on Wikipedia is a variant of the problem with the scientific images I lambasted: It gives the illusion that thought was put into it. For example, does the illustration for Dagon really illustrate that short story? No, it doesn't illustrate anything that happens in it. Does the AI image of John F. Kennedy actually add anything? And so on. I'd rather have an illustration by a Wikipedian where we can presume that each element is at least using what the user knows about the subject, not just an attempt to generate an image. Adam Cuerden (talk) Has about 8.9% of all FPs. 00:43, 27 September 2024 (UTC)[reply]
While I think it reflects badly on Wikipedia that the community is often unmoved by moral reasoning—on this topic and many others—I do basically agree with you. The image at Willy's Chocolate Experience is a necessary illustration of some of the actual generated imagery used in that endeavor; the rest are steps too far. (On Listenbourg, since it was one I had to give a bit of thought to: if there was a particular generated image humorously purported to be Listenbourg that gained a lot of traction and was recognizable, that might be suitable for the article. But generating an apparently new Listenbourg image doesn't sit right with me.) Hydrangeans (she/her | talk | edits) 01:05, 29 September 2024 (UTC)[reply]
unlicensed use of artists' creative work for training the software [sic] - The idea that a license is generally legally required for training AI models is currently a popular talking point among those who argue for the perceived business interests of the copyright industry. But it is much less accepted among actual legal experts. Yes, there are lots of lawsuits and some may eventually succeed on some aspects, but most have not been going well for the plaintiffs so far (see e.g. "Another claim that has been consistently dismissed by courts is that AI models are infringing derivative works of the training materials.")
I also think the labor rights framing is really misguided. For example, it is just empirically wrong to conceive of these conflicts as copyright owners = scrappy artist laborers vs. AI "companies" = super rich mega corporations. Regarding the first group: Have you ever heard of Getty Images, Disney or Elsevier, widely admired for their super ethical practices? Regarding the latter: One of the first targets of the current lawsuits regarding AI image generation has been LAION, a nonprofit (German eingetragener Verein) with the central mission to democratize AI and make it publicly available.
See also Cory Doctorow on this topic, e.g. [1], who is both a professional "creative" himself (earning his living as a writer) and has long been a vocal labor rights advocate, well before the current AI debates.
Regards, HaeB (talk) 07:39, 27 September 2024 (UTC)[reply]
A solution to the unjust chokehold on intellectual property that a company like Disney exercises that also involves trampling on indie artists isn't, to my mind, much of a solution. Hydrangeans (she/her | talk | edits) 00:55, 29 September 2024 (UTC)[reply]
Nor am I much moved by any implication, inadvertent or not, that courtrooms are arbiters of the moral issue. They are arbiters of legal proceedings that are expensive and, to the inexperienced, often arcane, where it is easy to mess up and difficult to understand the audience. Hydrangeans (she/her | talk | edits) 01:12, 29 September 2024 (UTC)[reply]

It is fascinating to see these two topics side-by-side. In general it asks us what images in our articles are for. As per other commenters, I'm happy to see AI images on Wikipedia be minimized as much as possible. I am quite fond of the use of paintings, though I do worry about a certain European bias in them. ~Maplestrip/Mable (chat) 09:09, 27 September 2024 (UTC)[reply]

Thanks for pointing out these images. I have removed some of them on the French wiki and labelled others as AI generated in their captions on the articles. Skimel (talk) 13:43, 27 September 2024 (UTC)[reply]

No one linked to the relevant SMBC comic yet? Polygnotus (talk) 16:26, 30 September 2024 (UTC)[reply]



       

The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0