
US Attorney for D.C. Accuses Wikipedia of 'Propaganda', Threatens Nonprofit Status

1 month 2 weeks ago
An anonymous reader shared this report from the Washington Post: The acting U.S. attorney for the District of Columbia sent a letter to the nonprofit that runs Wikipedia, accusing the tax-exempt organization of "allowing foreign actors to manipulate information and spread propaganda to the American public." In the letter, dated April 24, Ed Martin said he sought to determine whether the Wikimedia Foundation's behavior is in violation of its Section 501(c)(3) status. Martin asked the foundation to provide detailed information about its editorial process, its trust and safety measures, and how it protects its information from foreign actors.

"Wikipedia is permitting information manipulation on its platform, including the rewriting of key, historical events and biographical information of current and previous American leaders, as well as other matters implicating the national security and the interests of the United States," Martin wrote. "Masking propaganda that influences public opinion under the guise of providing informational material is antithetical to Wikimedia's 'educational' mission."

Google prioritizes Wikipedia articles, the letter points out, which "will only amplify propaganda" if the content of those articles "is biased, unreliable, or sourced by entities who wish to do harm to the United States." And as a U.S.-based nonprofit, Wikipedia enjoys tax-exempt status while its board "is composed primarily of foreign nationals," the letter argues, "subverting the interests of American taxpayers."

While relaying Martin's concerns about "allowing foreign actors to manipulate information and spread propaganda," the Washington Post also notes that before being named U.S. attorney, "Martin appeared on Russia-backed media networks more than 150 times, The Washington Post reported last week...."

Additional articles about the letter here and here.


EditorDavid

Trump’s 145% tariffs could KO tabletop game makers, other small biz, lawsuit claims

1 month 2 weeks ago
One eight-person publisher says it'll be forced to pay $1.5M

WORLD WAR FEE  The Trump administration's tariffs are famously raising the prices of big-ticket products with lots of chips, like iPhones and cars, but they're also hurting small businesses like game makers. In this case, we're not talking video games, but the old-fashioned kind you play at your kitchen table…

Thomas Claburn

NYT Asks: Should We Start Taking the Welfare of AI Seriously?

1 month 2 weeks ago
A New York Times technology columnist has a question: "Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?"

[W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...? But I was intrigued...

There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent....

Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting.

But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...

[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.


EditorDavid