
Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification'

1 week 4 days ago
The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But they've also dramatized the problem with a funny four-minute video about a man who is called on to make things shitty for people. "It's not just your imagination. Digital services are getting worse," the video concludes — before adding that "Luckily, it doesn't have to be this way."

The Consumer Council's announcement recommends:

- Stronger rights for consumers to control, adapt, repair, and alter their products and services
- Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible
- Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups or competitors, or otherwise steer the market to their advantage
- Better financing of initiatives to build, maintain, or improve alternative digital services and infrastructure based on open source code and open protocols
- Reduced public sector dependence on Big Tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights
- Deterrent and consistent enforcement of other laws, including consumer and data protection law

The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power with enforcement resources and by prioritizing the procurement of services based on open source code. And "Our sister organisations are sending similar letters to their own governments in 12 countries."
They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech. Thanks to Slashdot reader DeanonymizedCoward for sharing the news.

Read more of this story at Slashdot.

EditorDavid

Lenovo shows off snap-together laptop with removable keyboard, screen, and ports

1 week 4 days ago
New ThinkPads also come in blue, get perfect repairability score

If you own a desktop computer, you're used to swapping parts and peripherals around, but most laptops are closed boxes with few ways to modify them. Lenovo's new ThinkBook Modular AI PC concept shows what happens when you can remove a screen, a keyboard, and even blocks of ports from a mobile PC…

Avram Piltch

AIs Can't Stop Recommending Nuclear Strikes In War Game Simulations

1 week 4 days ago
"Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises," reports New Scientist:

Kenneth Payne at King's College London set three leading large language models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war...

In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne. What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended, based on its reasoning...

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn't respond to New Scientist's request for comment.

The article includes this comment from Tong Zhao, a senior fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace think tank: "It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand 'stakes' as humans perceive them."

Thanks to long-time Slashdot reader Tufriast for sharing the article.


EditorDavid