On the Wikimedia-l mailing list, two members of the Wikimedia Foundation's "Global Advocacy" team drew attention to
important hearings happening this week at the United States Supreme Court.
"The hearings on two cases that will be crucial for Wikimedia have just started: NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC. Both cases are challenges to state laws in Texas and Florida, which impact content moderation on social media websites. [...] As they are written, these laws prohibit website operators from banning users or removing speech and would generally risk Wikipedia’s volunteer-led systems of content moderation. That’s because these laws were designed to prevent social media platforms from engaging in politically motivated content moderation, but were drafted so broadly that they would also impact Wikipedia. The case is also important beyond the impact it might have on our projects. It represents a scenario that is part of a trend globally, where governments introduce legislation to address harms from big tech actors, yet Wikimedia ends up as the dolphin inadvertently caught in the net."
The Foundation has previously weighed in on these cases with an amicus brief and several blog posts, and is present at the current hearings "in person talking to stakeholders and observing the proceedings. We expect the Court to rule this year and will be providing updates as we know more."
Asked about the worst-case scenario (from a Wikimedia perspective), Stan Adams of the Global Advocacy team elaborated:
"Perhaps the worst long-term outcome would be if several other states or even the US Congress replicated the Texas or Florida laws. If those laws were enforced against Wikipedia editors or the Foundation – say, for editors' regular work of removing content that is inaccurate, unsourced, or that violates NPOV policies – it could become increasingly difficult to operate and maintain Wikipedia."
However,
"based on what I observed at the Court yesterday [February 26, mentioning comments by justice Brett Kavanaugh in particular], I think most of the Justices would be reluctant to uphold the Texas and Florida laws. That said, these cases won't be the end of legislative attempts to regulate social media and other venues for expression online – I expect to see the Court considering more cases like these as states continue to enact laws that raise First Amendment questions in the online context."
– H
The Wikimedia Foundation advised on Meta-Wiki that –
A vote to ratify the charter for the Universal Code of Conduct Coordinating Committee (U4C) was held from 19 January until 2 February 2024 via SecurePoll. Voting is now closed. Thank you to all who voted. The result was 1249 voters in support and 420 voters opposed. 69 voters did not choose an option. Voter statistics and a summary of voter comments will be published soon.
You can find more information on the U4C's purpose and scope here. – AK
Are more changes afoot for the Requests for adminship process? Several proposals from Phase I remain open (others have already been closed as unsuccessful).
Phase I is still open, and you may weigh in with your thoughts here: Wikipedia:Requests for adminship/2024 review. – B
From February 19 to February 23, 2024, "a group of 21 Wikimedians, academics, and practitioners" met at the Rockefeller Foundation's Bellagio Center in northern Italy "to draft an initial research agenda on the implications of artificial intelligence (AI) for the knowledge commons." The aim is "to focus attention (and therefore resources) on the vital questions volunteer contributors have raised, including the promise, as well as risks and negative impacts, of AI systems on the open Internet." The agenda is available on Meta-Wiki, together with a brief report on the meeting.
Members of the "Wikimedia AI" Telegram group expressed their surprise about hearing about this effort first from organizations outside the Wikimedia movement, and about the fact that the term "open source" isn't mentioned in the document (despite open-source AI being an important topic of debate in AI currently, and WMF's general commitments to the use of open source software). While the announcement appears to be speaking on behalf of "volunteer contributors", the "Wikimedians" involved in drafting the document appears to have consisted exclusively of Wikimedia Foundation staff (largely from its Research department), according to the attendee list. Wikimedia Foundation CEO Maryana Iskander subsequently clarified that this "effort to contribute to a shared research agenda on AI [...] was created by a small group working in the open who rushed to publish a ‘bad first draft’ that will benefit from more input."
In other AI-related news, the Wikimedia Foundation recently received a $2.2 million grant from the Sloan Foundation (a longtime supporter) for the purpose of "leverag[ing] AI for the benefit of Wikipedia's readers and contributors, including tools to address vandalism" over the next three years. (These funds come on top of a $950,000 grant announced in April 2023 by WMF's own Wikimedia Endowment for "building and strengthening AI and machine learning infrastructure on Wikipedia and Wikimedia projects", similarly highlighting "the development of algorithms to measure the quality of Wikipedia articles and machine learning models that help catch incidents of vandalism on Wikimedia projects.")
Discuss this story
U4C Charter vote
BilledMammal (talk) 11:03, 2 March 2024 (UTC)
Who let Elon Musk rig the UCOC result? (and who let Elon Musk name it in the first place?)
In brief
Many congratulations to Sdkb for gaining admin powers! Oltrepier (talk) 11:40, 2 March 2024 (UTC)
<s>powers</s> responsibilities, lest they confuse you with the old {{u|pill-shaped}} Sdkb. —andrybak (talk) 13:56, 3 March 2024 (UTC)
<s>Eagle</s> Lemur-eyed observers may also notice the new avatar(s) on my user page 🙂 Sdkb talk 16:48, 3 March 2024 (UTC)
Politically motivated content moderation
Polygnotus (talk) 23:53, 15 March 2024 (UTC)
Are you sure?
"Leading up to the 2020 United States elections, there was a rise of misinformation on these services related to topics such as claims of election fraud and conspiracy theories related to the COVID-19 pandemic. Most of this misinformation originated from conservative parties including the far right and alt right.[2] Because of this, services like YouTube, Twitter and Facebook took action to moderate these posts from users, either by tagging them as misinformation or outright removal.[2] Some of this misinformation was put forth by Republican party members, including then-President Donald Trump, leading the Republican Party to seek legal review of Section 230 believing that this law allowed politically-motivated moderation. ... Two state laws passed by Florida and Texas in 2021 created state-level challenges to Section 230."
One of those is Texas House Bill 20. It is certainly not "designed to prevent social media platforms from engaging in politically motivated content moderation". It is designed to ensure that platforms can't stop the spread of fake news and misinformation (lies about the election, about Biden, about Covid et cetera). Polygnotus (talk) 00:04, 16 March 2024 (UTC)