EU cloud gang challenges Broadcom's $61B VMware buy in court

3 weeks 4 days ago
CISPE cites recent channel changes, but the deal was decided on different matters

COMMENT  Trade group Cloud Infrastructure Services Providers in Europe (CISPE) has filed a formal appeal before the European General Court to seek annulment of the European Commission's decision to approve Broadcom’s acquisition of VMware.…

Simon Sharwood

Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes

3 weeks 4 days ago
An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...]

The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]
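The move-to-a-nonexistent-directory failure mode described above is easy to reproduce outside of Gemini CLI. The sketch below uses Python's `shutil.move` (which mirrors the relevant behavior of the Windows `move` command) rather than the exact commands Gemini ran, which were not published in full; the filenames are hypothetical. Because the destination directory was never created, each move silently renames the file to the destination path, clobbering whatever the previous move put there:

```python
import os
import shutil
import tempfile

# Hypothetical reconstruction of the failure mode: two files "moved" into a
# directory that was never actually created.
workdir = tempfile.mkdtemp()
for name, content in [("a.txt", "first file"), ("b.txt", "second file")]:
    with open(os.path.join(workdir, name), "w") as f:
        f.write(content)

# The AI assumed this directory existed; os.makedirs was never called.
dest = os.path.join(workdir, "new_folder")

# Since dest is not an existing directory, shutil.move treats it as a
# *file* rename target instead of a destination folder.
shutil.move(os.path.join(workdir, "a.txt"), dest)  # a.txt becomes "new_folder"
shutil.move(os.path.join(workdir, "b.txt"), dest)  # overwrites it -- "first file" is gone

with open(dest) as f:
    print(f.read())  # only the last file's contents survive
```

With more files in the loop, every move except the last is destroyed the same way, which matches the cascading data loss the user reported. A single `os.makedirs(dest)` before the loop would have made every move behave as intended.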
The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people. The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." 
When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

Read more of this story at Slashdot.

BeauHD

UK Student Jailed For Selling Phishing Kits Linked To $135M of Fraud

3 weeks 4 days ago
A 21-year-old student who designed and distributed online kits linked to $175 million worth of fraud has been jailed for seven years. From a report: Ollie Holman created phishing kits that mimicked government, bank and charity websites so that criminals could harvest victims' personal information to defraud them. In one case a kit was used to mimic a charity's donation webpage so when someone tried to give money, their card details were taken and used by criminals. Holman, of Eastcote in north-west London, created and supplied 1,052 phishing kits that targeted 69 organisations across 24 countries. He also offered tutorials in how to use the kits and built up a network of almost 700 connections. The fake websites supplied in the kits had features that allowed information such as login and bank details to be stored. It is estimated Holman received $405,000 from selling the kits between 2021 and 2023. The kits were distributed through the encrypted messaging service Telegram.

Read more of this story at Slashdot.

msmash