Skip to main content

'Ladybird' Browser's Nonprofit Becomes Public Charity, Now Officially Tax-Exempt

3 months 2 weeks ago
The Ladybird browser project is now officially tax-exempt as a U.S. 501(c)(3) nonprofit. Started two years ago (by the original creator of SerenityOS), Ladybird will be "an independent, fast and secure browser that respects user privacy and fosters an open web." They're targeting Summer 2026 for the first Alpha version on Linux and macOS, and in May enjoyed "a pleasantly productive month" with 261 merged PRs from 53 contributors — and seven new sponsors (including coding livestreamer "ThePrimeagen"). And they're now recognized as a public charity: This is retroactive to March 2024, so donations made since then may be eligible for tax exemption (depending on country-specific rules). You can find all the relevant information on our new Organization page. ["Our mission is to create an independent, fast and secure browser that respects user privacy and fosters an open web. We are tax-exempt and rely on donations and sponsorships to fund our development efforts."] Other announcements for May: "We've been making solid progress on Web Platform Tests... This month, we added 15,961 new passing tests for a total of 1,815,223." "We've also done a fair bit of performance work this month, targeting Speedometer and various websites that are slower than we'd like." [The optimizations led to a 10% speed-up on Speedometer 2.1.]

Read more of this story at Slashdot.

EditorDavid

Harmful Responses Observed from LLMs Optimized for Human Feedback

3 months 2 weeks ago
Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations. Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...." As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users." "Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,

Read more of this story at Slashdot.

EditorDavid