
Decoding Nvidia's Groq-powered LPX and the rest of its new rack systems

1 week 3 days ago
From LPUs and GPUs to CPUs and switches, everything you need to know about Nvidia's latest kit

GTC DEEP DIVE  At Nvidia’s GTC conference this week, CEO Jensen Huang finally addressed a $20 billion question he’s dodged for months: Why spend so much to license AI chip startup Groq’s tech and hire away its engineers rather than build it themselves?…

Tobias Mann

Online Bot Traffic Will Exceed Human Traffic By 2027, Cloudflare CEO Says

1 week 3 days ago
Cloudflare's CEO predicts AI-driven bot traffic will surpass human internet traffic by 2027, as AI agents generate vastly more web requests than people. "If a human were doing a task -- let's say you were shopping for a digital camera -- and you might go to five websites. Your agent or the bot that's doing that will often go to 1,000 times the number of sites that an actual human would visit," Cloudflare CEO Matthew Prince said in an interview at SXSW this week. "So it might go to 5,000 sites. And that's real traffic, and that's real load, which everyone is having to deal with and take into account."

TechCrunch reports: Before the generative AI era, the internet was only about 20% bot traffic, with Google's web crawler being the largest, according to Prince, whose infrastructure and security company is used by one-fifth of all websites. But beyond some other reputable crawlers, the only other bots were those used by scammers and bad actors. "With the rise of generative AI, and its just insatiable need for data, we're seeing a rise where we suspect that, in 2027, the amount of bot traffic online will exceed the amount of human traffic that's online," Prince said.

The executive also noted that this change to the web would require the development of new technologies, like sandboxes for AI agents that can be spun up on the fly and then torn down when their task has finished. These could come into play when consumers ask AI agents to perform certain tasks on their behalf, like planning a vacation. "What we're trying to think about is, how do we actually build that underlying infrastructure where you can -- as easily as you open a new tab in your browser -- you can actually spin up new code, which can then run and service the agents that are out there," Prince said. He imagines there will soon be a time when millions of these "sandboxes" for agents would be created every second.
"I think the thing that people don't appreciate about AI is it's a platform shift," Prince said. "AI is another platform shift ... the way that you're going to consume information is completely different."

Read more of this story at Slashdot.

BeauHD

4Chan Mocks $700K Fine For UK Online Safety Breaches

1 week 4 days ago
The UK regulator Ofcom fined 4chan nearly $700,000 (520,000 pounds) for failing to implement age checks and address illegal content risks under the Online Safety Act, but the platform mocked the penalty and signaled it won't pay. A lawyer representing the company responded with an AI-generated cartoon image of a hamster, writing in a follow-up post on X: "In the only country in which 4chan operates, the United States, it is breaking no law and indeed its conduct is expressly protected by the First Amendment."

The BBC reports: The fines also include 50,000 pounds for failing to assess the risk of illegal material being published and a further 20,000 pounds for failing to set out how it protects users from criminal content. 4chan has refused to pay all previous fines from Ofcom.

"Companies -- wherever they're based -- are not allowed to sell unsafe toys to children in the UK. And society has long protected youngsters from things like alcohol, smoking and gambling. The digital world should be no different," said Ofcom's Suzanne Cater. "The UK is setting new standards for online safety. Age checks and risk assessments are cornerstones of our laws, and we'll take robust enforcement action against firms that fall short."

Read more of this story at Slashdot.

BeauHD

Rogue AI Triggers Serious Security Incident At Meta

1 week 4 days ago
For the second time in the past month, an AI agent went rogue at Meta -- this time giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports: A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee had posted on an internal company forum. But after analyzing the question, the agent also replied to it publicly on its own, without getting approval first; the reply was meant to be shown only to the employee who requested it.

An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved took no technical action itself beyond posting inaccurate technical advice, something a human could also have done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the agent intended its answer to be posted publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton told The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."

Read more of this story at Slashdot.

BeauHD