The Signpost

Op-Ed

Wikipedia as an anchor of truth

By Lambiam

Wikipedia has been criticized as being inherently unreliable, and we ourselves warn users not to rely uncritically on the information in Wikipedia; it is ironic to see it now used as an anchor of truth in a seething sea of disinformation. AI models are prone to hallucinating, that is, confidently presenting, with corroborative detail, answers that are simply untrue. Can using Wikipedia help to at least spot these mistakes, and are the new search-engine AIs using it in ways that will actually help prevent hallucination?

DuckAssist and Wikipedia

Following in the footsteps of Bing, the Internet search engine DuckDuckGo has rolled out DuckAssist, a new feature that generates natural language responses to search queries. When a user asks DuckDuckGo a question, DuckAssist can pop up and use neural networks to create an instant answer, a concise summary of answers found on the Web.

A problem plaguing large-language-model-based answerbots and other chatbots is the phenomenon of so-called hallucinations, a term of art used by AI researchers for answers that are confidently presented and full of corroborative detail giving seemingly authoritative verisimilitude to what might otherwise appear an unconvincing answer – but that are, nevertheless, cut from whole cloth. To use another term of art, they are pure and unadulterated bullshit.

Gabriel Weinberg, CEO of DuckDuckGo, explained in a company blog post how DuckAssist sources its answers from Wikipedia and other references to get around this problem.[1]
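
Weinberg's post does not disclose DuckAssist's internals, so the following is only a rough sketch of the general recipe it describes: retrieve a relevant Wikipedia extract first, then have the language model answer only from that extract. The wikipedia_summary helper below calls the public Wikipedia REST API; the generate callable is a hypothetical stand-in for whatever text-generation backend is used, not DuckDuckGo's actual code.

  import requests

  def wikipedia_summary(title: str) -> str:
      """Fetch the lead-section extract of a Wikipedia article via the REST API."""
      url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
      resp = requests.get(url, headers={"User-Agent": "grounding-sketch/0.1"}, timeout=10)
      resp.raise_for_status()
      return resp.json()["extract"]

  def grounded_answer(question: str, source_title: str, generate) -> str:
      """Ask the model to answer strictly from the retrieved extract (sketch only)."""
      extract = wikipedia_summary(source_title)
      prompt = (
          "Answer the question using only the source text below. "
          "If the source does not contain the answer, say that you do not know.\n\n"
          f"Source ({source_title}): {extract}\n\n"
          f"Question: {question}"
      )
      return generate(prompt)  # `generate` is any prompt-to-text function

Constraining the model to an extract it can quote narrows the room for invention, but, as the next section notes, it does not eliminate it.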

Keeping AI agents honest

The problem of keeping AI agents honest is far from solved. The somewhat glib reference to Wikipedia is not particularly reassuring. Experience has shown that even AI models trained on the so-called "Wizard of Wikipedia", a large dataset of conversations directly grounded in knowledge retrieved from Wikipedia,[2] are not immune to making things up.[3] A more promising approach may be to train models to distinguish fact-based statements from plausible-sounding made-up statements. A system intended for deployment could then include an "is that so?" component that monitors generated statements and insists on revision until the result passes muster. Another potentially useful application of such a system could be to flag dubious claims in Wikipedia articles, whether introduced by an honest mistake or inserted as a hoax. (Editor's note: this has been attempted, with some success, here.)
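
As a very rough illustration, and not a description of any deployed system, such an "is that so?" loop might look like the sketch below. It assumes two black boxes, both hypothetical placeholders: generate(prompt), which produces a draft answer, and entails(source, claim), which judges whether a claim is supported by a given source text (for instance, a natural-language-inference classifier). Drafts containing unsupported sentences are sent back for revision until they pass muster or a revision budget runs out.

  def verified_answer(question, source, generate, entails, max_revisions=3):
      """Generate an answer, then keep revising until every claim is supported."""
      draft = generate(f"Source: {source}\n\nQuestion: {question}")
      for _ in range(max_revisions):
          # Naive claim extraction: treat each sentence of the draft as one claim.
          claims = [s.strip() for s in draft.split(".") if s.strip()]
          unsupported = [c for c in claims if not entails(source, c)]
          if not unsupported:
              return draft  # every claim passes muster
          # "Is that so?": point out the unsupported claims and insist on revision.
          draft = generate(
              f"Source: {source}\n\nQuestion: {question}\n\n"
              "Your previous answer contained claims not supported by the source: "
              f"{unsupported}. Rewrite it using only facts stated in the source."
          )
      return draft  # best effort once the revision budget is spent

The same kind of entailment check, run against an article's cited sources rather than a model's prompt, is essentially what a tool for flagging dubious claims in Wikipedia would need.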

References

  1. ^ Weinberg, Gabriel (March 8, 2023). "DuckDuckGo launches DuckAssist: a new feature that generates natural language answers to search queries using Wikipedia". spreadprivacy.com. DuckDuckGo. Retrieved March 9, 2023.
  2. ^ Dinan, Emily; Roller, Stephen; Shuster, Kurt; Fan, Angela; Auli, Michael; Weston, Jason (September 28, 2018). "Wizard of Wikipedia: Knowledge-Powered Conversational Agents". ICLR 2019 (International Conference on Learning Representations). Retrieved April 11, 2023.
  3. ^ Dziri, Nouha; Milton, Sivan; Yu, Mo; Zaiane, Osmar; Reddy, Siva (July 2022). "On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?" (PDF). Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. doi:10.18653/v1/2022.naacl-main.38. S2CID 250242329. Retrieved April 11, 2023.