The Signpost

File:Washington October 2016-12.jpg, by Alvesgaspar, CC BY-SA 4.0
Recent research

Art museums on Wikidata; comparing three comparisons of Grokipedia and Wikipedia

By Kasia Makowska (WMPL) and Tilman Bayer


A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.

Benchmarking Data Practices of Art Museums in Wikidata

Reviewed by Kasia Makowska (WMPL)
From the paper: "Metadata footprint of licences across institutional collections. This chart illustrates the varying proportions of artworks with documented licence or copyright status within each museum’s Wikidata records."

This discussion paper[1] is part of a "Special Collection" of the Journal of Open Humanities Data (JOHD), titled "Wikidata Across the Humanities: Datasets, Methodologies, Reuse", which focuses on Wikidata as both a tool and an object of academic research.

The paper looks at the adoption of key open data best practices, focusing on art museums in Wikidata. The work is outlined in three steps: i) selection of a sample of data repositories of such museums in Wikidata; ii) definition of open data compliance criteria; and iii) reporting the results.

For the selection of repositories, art museums (using the item “art museum” (Q207694) as the reference point) with at least 5,000 records in Wikidata were chosen, and the sample was further limited to the ten museums with the most records.
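The paper does not publish its exact query, but a selection like this is typically done on the Wikidata Query Service. The following is a minimal illustrative sketch (our own, not the authors'), assuming that "records" means items whose collection (P195) is the museum in question:

```python
# Hypothetical SPARQL query for the sample selection described above:
# art museums (Q207694, including subclasses) ranked by the number of
# items whose collection (P195) points to them, keeping those with at
# least 5,000 records and taking the top ten.
query = """
SELECT ?museum ?museumLabel (COUNT(?work) AS ?records) WHERE {
  ?museum wdt:P31/wdt:P279* wd:Q207694 .  # instance of (a subclass of) art museum
  ?work wdt:P195 ?museum .                # work held in the museum's collection
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?museum ?museumLabel
HAVING (?records >= 5000)
ORDER BY DESC(?records)
LIMIT 10
"""
```

Such a query can be pasted into query.wikidata.org; the authors' actual criteria for what counts as a "record" may differ.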

When it comes to defining the compliance criteria, the authors say:

(...) the work seeks to answer the following questions: 1) What criteria can be used to assess the compliance of Art museums' open data practices with Wikidata? 2) Which Art museums are most represented on Wikidata, and what is the level of maturity in their data practices and ecosystem integration? The purpose of this work is to define a set of best practices for open data publishing in Wikidata and to benchmark the current level of compliance among major Art museums. The results will provide a clear roadmap for institutions to improve their open data strategies.

Then, they define a set of data quality criteria (detailed in the paper).

The results are then reported and discussed: the ten preselected institutions were assessed against these criteria. A full table of results with detailed scores can be found in the paper; as a brief spoiler for less patient readers:

In light of all these assessments, it can be stated that the National Gallery of Art demonstrates the highest level of open data compliance maturity and can be considered a best practice example.

When discussing the results, the authors clearly and transparently outline the limitations of their work in scope and coverage, and point out additional topics to consider as extensions of this work. Interestingly, they mention two criteria (the provision of machine-readable metadata and clear licensing information) which do not form part of the assessment in the paper. This is because their analysis shows these to be "not binary properties of an institution, but rather emergent characteristics of digital collections", which is followed by a proposal to reframe them as quantifiable "metadata footprints". The paper also provides an interesting analysis using the copyright status property on Wikidata, with a chart illustrating the proportion of artworks with documented licence or copyright status within each museum's Wikidata records (see above).

In summary, this work provides a useful benchmark of practices for museums looking to start using Wikidata to enrich and reuse their digital collections. Speaking from an affiliate perspective, such work is a valuable guide for conversations with GLAM institutions, presenting them with good-practice examples and suggesting room for improvement.

A final note from the authors highlights another important use for such research:

More importantly, because it clearly highlights the geographical bias in Wikidata, it can also be seen as a call to action: all the top museums in Wikidata (by number of records) are located in the Global North[supp 1]. This is not a coincidence, but rather a reflection of the material and institutional resources required for the sustained digital cultural work that facilitates integration with platforms like Wikidata. This disparity, however, risks creating and reinforcing digital silos that reproduce the unequal global distribution of knowledge. By mapping this limitation, our article aims to raise awareness of this inequity and contribute to scholarly and practical efforts to diversify the digital cultural sphere.

Comparing comparisons of Grokipedia vs. Wikipedia by three different research teams

Reviewed by Tilman Bayer

On October 27, Elon Musk's company xAI launched Grokipedia, an AI-generated encyclopedia designed to rival Wikipedia by addressing its alleged biases, errors, and ideological slants. As summarized in recent Signpost issues (see here, here and here), it immediately attracted a lot of commentary by journalists and pundits, many of them highlighting examples of Grokipedia's own apparent right-wing biases.

At the same time, various academic researchers embarked on more systematic analyses of Grokipedia, resulting in several preprint publications already. These include at least three comparisons with Wikipedia, making this an interesting experiment in how different research teams may tackle the same kind of questions:

- "How Similar Are Grokipedia and Wikipedia? A Multi-Dimensional Textual and Structural Comparison"[2] (referred to below as the work of the "Dublin team")
- "What did Elon change? A comprehensive analysis of Grokipedia"[3] (the "Cornell team")
- "Epistemic Substitution: How Grokipedia's AI-Generated Encyclopedia Restructures Authority"[4] (the "Davis team"), whose authors explain:

"We define the "Epistemic Profile" of an encyclopedia article not merely as a bibliography, but as the structural composition of its testimonial network. As a practical implementation, we approximate this theoretical goal by mapping which institutions (e.g., Academic, Governmental, Corporate) an encyclopedia grants the authority to speak, as reflected in cited sources."

A fourth analysis, by Włodzimierz Lewoniewski of Poznań University of Economics and Business (author of various other academic publications about Wikipedia), was provided in the form of a blog post and video[5] comparing Grokipedia with Wikipedia editions in 16 languages, listing the number of articles each encyclopedia has across a number of different topics.

Data

As promised in its title ("A comprehensive analysis of Grokipedia"), the Cornell team's paper is based on the largest Grokipedia dataset:

"We scraped Grokipedia from 28 to 30 October 2025 [...] using parallel processing on Google Cloud, routing requests through a proxy. In total, we were [able] to successfully scrape 883,858 articles from the service, representing 99.8% of all published Grokipedia articles."

The Dublin team scraped a partial sample:

"We analyzed the 20,000 most-edited English-language Wikipedia articles as of October 2025, identified via cumulative edit counts. Prior research shows that heavily edited entries correlate strongly with controversy, topical salience, and social polarization [...] we excluded all list-style pages as well as titles that were date- or year-like rather than topical. [...] For each remaining title, we retrieved the corresponding entries from Wikipedia and Grokipedia [...] HTML pages were downloaded between 5–11 November 2025 [...] with polite delays and a standard user-agent header [...] we retained only article pairs in which both platforms produced at least 500 words of clean prose. Of the original 20,000 target titles, 17,790 matched pairs met these criteria and formed the final analytical sample."

The Davis team contented itself with the smallest dataset:

To establish a baseline of high-interest and contentious topics, we compiled a list of the 100 most-revised articles on English Wikipedia. Corresponding articles were harvested from both Wikipedia and Grokipedia to create a comparative dataset. [... a] filtration process yielded a final parallel corpus of 72 matched article pairs [...].

Unlike the other two teams, the Cornell researchers also recorded whether each Grokipedia article was marked as "adapted from Wikipedia" under its CC license:

496,058 of the Grokipedia entries that we were able to scrape displayed a Creative Commons Attribution-ShareAlike license (56% of the total) while 385,139 do not.

(They note that "CC-licensed articles on Grokipedia contain a public log of edits that Grok made to the source Wikipedia article, and non-CC-licensed articles do not. We were unable to scrape this information on our first attempt".)

The Cornell team is the only one to have released its data, in the form of a 1.72 GB dataset on Hugging Face. (The Davis researchers state that theirs is available upon request.) All three were drawing from the initial "0.1" version of Grokipedia, which around November 20 was replaced by version "0.2", whose content appears to differ substantially (it now also accepts proposed edits from users). The Cornell dataset might therefore already be seen as an important historical artefact (although it only provides the former Grokipedia articles in a somewhat mangled "chunked" form, see below; other scrapes have been made available by others, and the Archive Team has begun preserving much or all of the site on the Wayback Machine).

Lewoniewski observes that as of around November 1:

Almost all articles in Grokipedia have corresponding articles in English Wikipedia. 24,288 article titles from Grokipedia were matched to corresponding Wikipedia articles through the redirect analysis. However, 3,536 Grokipedia articles [... have] no direct match to any title in English Wikipedia.

Article length and citation density

The Dublin team found that:

"Overall, Grokipedia entries are systematically longer than their Wikipedia counterparts. While Wikipedia contains a larger number of very short articles, Grokipedia articles exhibit a pronounced peak around 7,000 words, indicating that most Grokipedia entries are substantially more verbose. [...] Grokipedia articles average 7,662 words versus 6,280 on Wikipedia, and they exhibit far fewer explicit references, links, and headings per thousand words."

The Davis team similarly observed that:

"While Grokipedia produces articles that are, on average, longer than their Wikipedia counterparts, they exhibit lower citation per article and, therefore, also notably lower citation density."

The Cornell paper found that:

Grokipedia articles are significantly longer than their corresponding Wikipedia counterparts. Approximately 96% of Grokipedia [articles] contain as many or more text chunks than their Wikipedia counterparts. Similarly, if we parse out article structure and measure article length in terms of its outline structure, we can see that the median non-CC-licensed Grokipedia article is approximately 4.6 times longer than its Wikipedia counterpart, and some Grokipedia articles are dozens of times longer [...].

Source analysis: Reliability, political leanings, and "institutional nature"

The Dublin team evaluated the political leanings of the cited sources using the "News Media Bias and Factuality" dataset[supp 2]. As summarized in a December 8 Twitter thread by one of the authors:

"When we looked more closely into the political shift, analysing the references used in both Wikipedia and Grokipedia, we found: Grokipedia is shifted to the right, compared to Wikipedia. On average, the shift is not huge, and Grokipedia is still left-leaning, like Wikipedia. BUT Religion and History are dramatically pushed to the Right."

(The paper mentions that citations were rated "only when a numeric bias was available for a domain or its brand variant (e.g., bbc.com/bbc.co.uk)" in the "News Media Bias and Factuality" dataset. Unfortunately, it doesn't disclose how many citations this excluded. As found by the Davis authors (see below), both encyclopedias include a large share of non-news citations.)

The Cornell researchers first evaluated the reliability of both encyclopedias' citations according to the ratings in Wikipedia's own "perennial sources list":

[...] “generally reliable” sources make up a far larger proportion of Wikipedia citations (12.7%) than “generally unreliable” (2.9%) or “blacklisted” (0.04%) sources [...]. “Generally reliable” sources are cited in roughly 2 of 5 (41.1%) of Wikipedia articles, as opposed to roughly 1 in 5 (21.8%) articles citing “generally unreliable” sources and 1 in 167 (0.6%) articles citing “blacklisted” sources. Grokipedia’s citation practices appear to be less in line with Wikipedia editorial norms. “Generally reliable” sources make up 7.7% of citations on Grokipedia (a relative decrease of 39%), “generally unreliable” sources are 5.4% of citations (a relative increase of 86%), and “blacklisted” sources make up 0.1% of citations (a relative increase of 275%). At the article level, the increase is even more drastically visible: 5.5% of Grokipedia articles contain at least one citation to “blacklisted” sources—a ninefold increase in prevalence compared to Wikipedia.

Of course, it is unsurprising that Wikipedia adheres to its own sourcing standards better than other encyclopedias do. The Cornell authors therefore repeated this analysis with a dataset of quality ratings of news website domains from an academic paper (Lin et al.), with results that are "roughly in line with those that relied on English Wikipedia’s Perennial Source list":

English Wikipedia is more likely to cite domains on the 0.6 and above higher end of Lin et al.’s quality score (27.4% of all citations) than Grokipedia (21.3% of all citations). Low quality domains—which we define as having quality scores between 0.0 and 0.2—make up three times the share of total citations on Grokipedia than Wikipedia. Even though this share is relatively small (0.03% of the total), it means Grokipedia includes 12,522 citations to domains deemed of very low credibility. Websites in this category include the white nationalist forum Stormfront, the antivaccine conspiracy website Natural News, and the fringe website InfoWars. None of these domains are cited at all on Wikipedia; they have 180 citations on Musk’s service.

The Cornell authors cautioned that:

A limitation with both source quality scores is that they don’t rate the majority of citations used on either service. What we can say at this stage is that Grokipedia is both more capacious in its citations—almost doubling Wikipedia’s total—and more ecumenical in its approach, including many more sources across all quality buckets.

In contrast, the Davis paper's two research questions about sourcing differences between Grokipedia and Wikipedia eschewed a direct analysis of the quality or political orientation of the citations:

RQ2: Is there a qualitative difference in the institutional nature of referenced sources?

RQ3: Is there a difference in how diverse article topics are epistemologically sourced?

Rather than relying on external datasets like the Dublin and Cornell authors (and thus having to limit their conclusions to only those citations covered by these datasets), the Davis authors were able to classify every citation in their dataset. This was achieved by "develop[ing] and appl[ying] a systematic content coding scheme based on [their own] 'Citation Content Coding Manual' [...] to assign each unique citation to exactly one of eight mutually exclusive categories", such as "Academic & Scholarly", "Government & Official", or "User-Generated (UGC)". This scheme was then automatically applied (by Gemini 2.5 Flash, aided by an extensive coding manual and vetted against a manual classification of a small sample) to classify the roughly 50,000 citations in the entire dataset.

Regarding RQ2, the results revealed

a fundamental divergence in the substrate of authority used by each platform. Wikipedia is anchored by a dual foundation of "News & Journalism" and "Academic & Scholarly" sources. Together, these two categories account for approximately 64.7% of the global corpus [...] In Grokipedia, the reliance on "News & Journalism" remains robust, merely being reduced by 20 percent. However, the "Academic" pillar drops significantly, experiencing a 3-fold reduction [...]. Grokipedia substitutes scholarly sources with an increase in citations to Corporate & Commercial, Reference & Tertiary, Government & Official, Opinion & Advocacy, –all increasing by almost 50 percent of their Wikipedia share– and especially NGOs/Think Tanks (whose share increases by 3x), and User-Generated Content (UGC) sources (whose share increases by 4x) [...]

To investigate RQ3, the Davis authors manually classified the 72 articles in their corpus by topic area, finding that:

[Wikipedia] alters its sourcing hierarchy based on the nature of the topic. For "Politics & Conflict" and "General Knowledge & Society," Wikipedia relies heavily on Academic & Scholarly sources. Conversely, for "Sports & Athletics" and "Media & Entertainment," the academic band shrinks, and the platform pivots appropriately to News & Journalism, which dominates the citations. In contrast, Grokipedia [...] exhibits a fundamental restructuring of authority in high-stakes domains. While it mirrors Wikipedia’s news-heavy approach for entertainment topics, the "Academic & Scholarly" band is critically depleted, especially in "Politics & Conflict," where Grokipedia substitutes this with a massive influx of Government & Official sources and NGO/Think Tank reports.

Similarity of content between Grokipedia and Wikipedia

The Cornell and Dublin teams also ventured beyond citations to directly compare the text of both encyclopedias. Both first split each article into smaller text segments and then applied quantitative text similarity measures to these.

Specifically, the Cornell researchers:

for both the Grokipedia and Wikipedia corpora [...] extracted the plaintext content of each article in 250-token chunks, with a 100-token overlap between chunks.
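This amounts to a fixed-stride sliding window over the token sequence (stride = 250 − 100 = 150 tokens). A minimal sketch, assuming a pre-tokenized article (`chunk_tokens` is our own illustrative helper, not the paper's code):

```python
def chunk_tokens(tokens, size=250, overlap=100):
    """Split a token sequence into fixed-size chunks where consecutive
    chunks share `overlap` tokens, i.e. the window advances by
    size - overlap tokens each step."""
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):  # final window reached the end
            break
    return chunks

# Example: a 1,000-"token" article yields six overlapping 250-token chunks.
chunks = chunk_tokens(list(range(1000)))
```

Note that the split is purely positional, so chunk boundaries routinely fall in the middle of sentences.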

This method seems a bit crude, as the resulting chunks (arbitrary example) cut across sentences and paragraphs, i.e. contain lots of mangled sentences. In contrast, the Dublin team used an established NLP tool to split the text while keeping these intact:

Each cleaned article was tokenized into sentences and words using nltk’s Punkt tokenizer.

The Cornell team then:

embedded each of these [chunks] using Google’s EmbeddingGemma [14], a state-of-the-art 300M parameter embedding model. Once we had embeddings, we calculated the within-article pairwise cosine similarity for each chunk [i.e. between pairs of chunks from the Grokipedia and Wikipedia article about the same topic]. This allows us to meaningfully discuss metrics like content similarity (filtered by various factors), average article similarity (aggregated across chunks), and more.
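In outline, such a pairwise comparison reduces to cosine similarity between embedding vectors. The sketch below uses toy three-dimensional vectors standing in for EmbeddingGemma outputs; the best-match-then-average aggregation is our own assumption, since the paper does not spell out its exact aggregation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_chunk_similarity(grok_chunks, wiki_chunks):
    """For each Grokipedia chunk embedding, take the similarity of its
    best-matching Wikipedia chunk, then average over chunks (one
    plausible article-level aggregation; an assumption, not the
    paper's documented method)."""
    best = [max(cosine(g, w) for w in wiki_chunks) for g in grok_chunks]
    return sum(best) / len(best)

# Toy "embeddings": one Grokipedia chunk matches a Wikipedia chunk
# exactly, the other matches nothing.
grok = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
wiki = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
score = mean_chunk_similarity(grok, wiki)  # → 0.5
```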

In contrast, the Dublin team employed a whole "suite of eight similarity measures grouped into four conceptual domains": lexical similarity (e.g. "cosine similarity of TF–IDF vectors"), n-gram overlap, semantic similarity (including based on LLM embeddings, similar to the Cornell team, albeit using older and smaller models), and stylistic similarity (aggregating differences in various simpler metrics such as sentence lengths and readability scores).
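To illustrate the lexical end of that suite, here is a bare-bones TF–IDF cosine similarity in plain Python (real pipelines would use a library such as scikit-learn; the exact term weighting and normalization here are our assumptions, not the Dublin team's):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF vectors (as term->weight dicts) for a list of
    tokenized documents. IDF is log(N/df) + 1 so that terms shared by
    all documents still carry nonzero weight."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: (tf / len(doc)) * idf[t] for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term->weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy article pair: heavy lexical overlap, partial divergence.
wiki_doc = "the museum holds many paintings".split()
grok_doc = "the museum holds many paintings and sculptures".split()
v_wiki, v_grok = tfidf_vectors([wiki_doc, grok_doc])
sim = cosine(v_wiki, v_grok)  # high, but below 1.0
```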

As one would expect, the Cornell team found that Grokipedia's "adapted from Wikipedia" articles were more similar to their Wikipedia counterparts than those without that notice:

non-CC-licensed entries on Grokipedia [... have] a mean chunk similarity to their Wikipedia equivalents of 0.77. The similarity for entries with the license is more heavily distributed towards the far end of the spectrum, with a much higher mean chunk similarity of 0.90.

Interestingly, their chunk similarity analysis also seems to function as a plagiarism detector of sorts:

Grokipedia articles with very high average chunk similarity to their corresponding Wikipedia article include verbatim transcriptions. These articles appear in both the CC-licensed and non-CC-licensed subsets of the data; that is, identical articles (or chunks) do not necessarily carry an attribution to Wikipedia or a CC license. For instance, Table 1 shows two excerpts from Grokipedia entries that have exact matches on equivalent Wikipedia articles. The entry for the Mejia Thermal Power Station is not CC-licensed, whereas the one for Sono Sachiko, a 19th century member of the Japanese imperial family, attributes Wikipedia.

Note that, on the non-CC-licensed Mejia Thermal Power Station page, the first sentences on both Wikipedia (relevant revision) and Grokipedia (Wayback snapshot) include the same typo: “Commissioned on [sic] 1996”.

The Cornell authors leave it open how frequent such unattributed matching sentences are overall.

The Dublin researchers ultimately combined their eight different article similarity metrics into a single one (using principal components analysis), finding that its

distribution [...] is distinctly bimodal, suggesting the presence of two substantive groups of article pairs: one in which Grokipedia and Wikipedia differ substantially, and another in which the two versions are highly similar.

Presumably these two groups correspond to the CC-licensed and non-CC-licensed Grokipedia articles, but the paper did not consider this property (in contrast to the Cornell researchers).

Like the Davis researchers, the Dublin team also classified articles by topic, though (due to their much larger sample) using an automated method (relying on GPT-5). This enabled them to conclude that

the largest cross-platform differences in similarity appear in articles related to politics, geography, history, business, and religion. In parallel, [...] the strongest rightward shifts [from Wikipedia to Grokipedia] in source bias occur in articles on religion, history, languages, and business, indicating that ideological divergence is especially pronounced in these domains.

2026 Wikimedia Research Fund announced

The Wikimedia Foundation's Research department announced the launch of the 2026 Wikimedia Research Fund. It funds

Research Proposals (Type 1), Extended Research Proposals (Type 2), and Event and Community-Building Proposals (Type 3). [...] The maximum request is 50,000 USD (Type 1 and 3) and 150,000 USD (Type 2).

Letters of intent for research proposals (Type 1 and 2) are due by January 16, 2026, and full proposals for all three types by April 3, 2026.

See also our related earlier coverage.

Briefly

Other recent publications

Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.

"Investigating the evolution of Wikipedia articles through underlying triggering networks"

This paper in the Journal of Information Science (excerpts) considers networks that have "factoids" as nodes, and associations between them as edges, and finds e.g. that "the inclusion of one factoid [on Wikipedia] leads to the inclusion of many other factoids". From the abstract:[6]

In collaborative environments, the contribution made by each user is perceived to set the stage for the manifestation of more contribution by other users, termed as the phenomenon of triggering. [...] In this work, we analyse the revision history of Wikipedia articles to examine the traces of triggering present in them. We also build and analyse triggering networks for these articles that capture the association among different pieces of the articles. The analysis of the structural properties of these networks provides useful insights on how the existing knowledge leads to the introduction of more knowledge in these articles [...]

From the "Discussion" section:

Our analysis on triggering networks of Wikipedia articles not only validates and extends the old classical theories on the phenomenon of existing knowledge triggering the introduction of more knowledge but also provides useful insights pertaining to the evolution of Wikipedia articles. Examining the network structure reveals many properties of the triggering phenomenon. For example, a well-defined community structure clearly endorses that the inclusion of one factoid leads to the inclusion of many other factoids. Moreover, many of the factoids belonging to a subtopic are introduced together. Furthermore, the core-periphery structure and the degree distribution suggest that all the factoids do not have a similar triggering power. Some factoids lead to the introduction of many more factoids and hence are [more] paramount in the article development process than the [other] factoids. The introduction of these factoids in the articles may be considered as milestones in the article evolution process. Overall, the study explains one of the reasons behind collaborative knowledge building being more efficient than individual knowledge building.

See also our coverage of a related earlier publication by the same authors at OpenSym 2018: "'Triggering' article contributions by adding factoids"

"Throw Your Hat in the Ring (of Wikipedia): Exploring Urban-Rural Disparities in Local Politicians' Information Supply"

From the abstract:[7]

This study [...] employs a dataset of politicians who ran for local elections in Japan over approximately 20 years and discovers that the creation and revisions of local politicians' pages are associated with socio-economic factors such as the employment ratio by industry and age distribution. We find that the majority of the suppliers of politicians' information are unregistered and primarily interested in politicians' pages compared to registered users. Additional analysis reveals that users who supply information about politicians before and after an election are more active on Wikipedia than the average user. The findings presented imply that the information supply on Wikipedia, which relies on voluntary contributions, may reflect regional socio-economic disparities.

"Wikipedia Citations: Reproducible Citation Extraction from Multilingual Wikipedia"

From the abstract:[8]

A total of 29.3 million citations were extracted from the English Wikipedia in May 2020. Following this one-off research project, we designed a reproducible pipeline that can process any Wikipedia dump in the cloud-based settings. To demonstrate its usability, we extracted 40.6 million citations in February 2023 and 44.7 million citations in February 2024. Furthermore, we equipped the pipeline with an adapted Wikipedia citation template translation module to process multilingual Wikipedia articles in 15 languages so that they are parsed and mapped into a generic structured citation template. This paper presents our open-source software pipeline for retrieving, classifying, and disambiguating citations on demand from a given Wikipedia dump.

"Wiki Loves iNaturalist: How Wikimedians Integrate iNaturalist Content on Wikipedia, Wikidata, and Wikimedia Commons"

"The steady growth of iNaturalist content on Wikimedia projects: A) the number of files in the category 'INaturalist' (sic) and subcategories on Wikimedia Commons, including image and audio files; B) the number of files in the category that illustrate a page in at least one Wikimedia project (e.g., Spanish Wikipedia or Wikidata); and C) the number of times the images in the categories were viewed across Wikimedia projects. Peaks correspond to the months in which the depicted images were displayed in the "Did you know..." section on the main page of English Wikipedia. Metrics via the Commons Impact Metrics Dashboard."

From this conference abstract:[9]

With over 50 million observations per year, iNaturalist is one of the world's most successful citizen science projects, uniting millions of people worldwide in observing, sharing, and identifying nature [...]. iNaturalist and Wikipedia have much in common: they are both collaborative, large-scale, open infrastructures made by volunteer communities with long-reaching impact on human knowledge. [...] To enable the seamless upload of iNaturalist images to Wikimedia Commons (which in turn enables their reuse on Wikipedia and other Wikimedia projects), this volunteer community has developed a diverse set of open source tools [...]

References

  1. ^ Dişli, Meltem; Candela, Gustavo; Gutiérrez, Silvia; Fontenelle, Giovanna (12 December 2025). "Open Data Practices of Art Museums in Wikidata: A Compliance Assessment". Journal of Open Humanities Data. 11: 71. doi:10.5334/johd.438.
  2. ^ Yasseri, Taha; Mohammadi, Saeedeh (2025-11-30), How Similar Are Grokipedia and Wikipedia? A Multi-Dimensional Textual and Structural Comparison, arXiv, doi:10.48550/arXiv.2510.26899
  3. ^ Triedman, Harold; Mantzarlis, Alexios (2025-11-12), What did Elon change? A comprehensive analysis of Grokipedia, arXiv, doi:10.48550/arXiv.2511.09685 / Code
  4. ^ Mehdizadeh, Aliakbar; Hilbert, Martin (2025-12-03), Epistemic Substitution: How Grokipedia's AI-Generated Encyclopedia Restructures Authority, arXiv, doi:10.48550/arXiv.2512.03337
  5. ^ Lewoniewski, Włodzimierz (2025-11-11). "Grokipedia vs Wikipedia: Quantitative Analysis (video)" (Blog). Lewoniewski. / Dataset
  6. ^ Chhabra, Anamika; Setia, Simran (2025-09-25). "Investigating the evolution of Wikipedia articles through underlying triggering networks". Journal of Information Science (OnlineFirst). doi:10.1177/01655515251362587. ISSN 0165-5515. (Closed access)
  7. ^ Matsui, Akira; Miyazaki, Kunihiro; Murayama, Taichi (2024-05-28). "Throw Your Hat in the Ring (Of Wikipedia): Exploring Urban-Rural Disparities in Local Politicians' Information Supply". Proceedings of the International AAAI Conference on Web and Social Media. 18: 1027–1040. doi:10.1609/icwsm.v18i1.31370. ISSN 2334-0770.
  8. ^ Kokash, Natallia; Colavizza, Giovanni (2025-12-09). "Wikipedia Citations: Reproducible Citation Extraction from Multilingual Wikipedia". Quantitative Science Studies: 1–14. doi:10.1162/QSS.a.401. ISSN 2641-3337.
  9. ^ Lubiana, Tiago; Littauer, Richard; Leachman, Siobhan; Ainali, Jan; Karingamadathil, Manoj; Waagmeester, Andra; Meudt, Heidi M.; Taraborelli, Dario (2025-12-05). "Wiki Loves iNaturalist: How Wikimedians Integrate iNaturalist Content on Wikipedia, Wikidata, and Wikimedia Commons". Biodiversity Information Science and Standards. 9 (collection "Advancing biodiversity goals from local to global scales using iNaturalist"): e181155. Pensoft Publishers. doi:10.3897/biss.9.181155.
Supplementary references and notes:
  1. ^ Pereda, Javier; Willcox, Pip; Candela, Gustavo; Sanchez, Alexander; Murrieta-Flores, Patricia A. (12 March 2025). "Online cultural heritage as a social machine: a socio-technical approach to digital infrastructure and ecosystems". International Journal of Digital Humanities. 7 (1): 39–69. doi:10.1007/s42803-025-00097-6. PMC 12202677. PMID 40584139.
  2. ^ Sánchez-Cortés, Dairazalia; Burdisso, Sergio; Villatoro-Tello, Esaú; Motlicek, Petr (2024). Goeuriot, Lorraine; Mulhem, Philippe; Quénot, Georges; Schwab, Didier; Di Nunzio, Giorgio Maria; Soulier, Laure; Galuščáková, Petra; García Seco de Herrera, Alba; Faggioli, Guglielmo (eds.). "Mapping the Media Landscape: Predicting Factual Reporting and Political Bias Through Web Interactions". Experimental IR Meets Multilinguality, Multimodality, and Interaction. Cham: Springer Nature Switzerland: 127–138. doi:10.1007/978-3-031-71736-9_7. ISBN 978-3-031-71736-9.



The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0