Scattered Spider gang feigns retirement, breaks into bank instead

5 months ago
You didn't really trust the crims to keep their word, did you?

Spiders don't change their stripes. Despite gang members' recent retirement claims, Scattered Spider hasn't exited the cybercrime business and instead has shifted focus to the financial sector, with a recent digital intrusion at a US bank.…

Jessica Lyons

Google Shows Off Its Inference Scale And Prowess

5 months ago

If the hyperscalers are masters of anything, it is driving scale up and driving costs down so that a new type of information technology becomes cheap enough to be widely deployed. …

Google Shows Off Its Inference Scale And Prowess was written by Timothy Prickett Morgan at The Next Platform.

Timothy Prickett Morgan

Darkest Nights Are Getting Lighter

5 months ago
Light pollution now doubles roughly every eight years globally as LED adoption accelerates artificial brightness worldwide. A recent study measured 10% annual growth in light pollution from 2011 to 2022. Northern Chile's Atacama Desert remains one of the few Bortle Scale 1 locations -- the darkest rating for astronomical observation -- though the population of nearby La Serena has nearly doubled in 25 years. The region hosts major observatories, including the Vera C. Rubin Observatory at Cerro Pachon. Satellite constellations pose an additional challenge: their numbers have grown from a few hundred decades ago to 12,000 currently operating satellites, and astronomers predict 100,000 or more within a decade. Chile also faces pressure from proposed industrial projects, including the 7,400-acre INNA green-hydrogen facility near key astronomical sites, despite national laws limiting artificial light from the mining operations that generate over half the country's exports.
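The two growth figures in the summary line up with each other. A minimal sketch (standard exponential-growth arithmetic, not a calculation from the study itself) converts the 10% annual rate into a doubling time:

```python
import math

# 10% annual growth in sky brightness compounds exponentially,
# so the doubling time in years is ln(2) / ln(1 + 0.10).
annual_growth = 0.10
doubling_time_years = math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time_years, 1))  # ~7.3 years, close to the cited eight
```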

Read more of this story at Slashdot.

msmash

Social Security admin denies DB data leak, DOGEs questions about a copy

5 months ago
Carefully crafted response makes no mention of whether DOGE employees duplicated critical database

The Social Security Administration (SSA) has disputed a whistleblower's allegation that DOGE made an unauthorized, unsecured copy of a critical database, but it's what the denial doesn't say that speaks volumes. …

Brandon Vigliarolo

OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance

5 months ago
AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior." The fundamental problem is that training and evaluation reward guessing rather than admitting uncertainty. A guess might produce a superficially suitable answer, while telling users the AI can't find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because the training taught the engine to return an answer rather than admit ignorance. "Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.
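The scoreboard effect the blog post describes can be illustrated with a toy simulation (entirely hypothetical numbers, not from the paper): under accuracy-only scoring, a model that guesses on questions it cannot answer outscores one that abstains, even though abstaining is the more honest behavior.

```python
import random

random.seed(42)

# Hypothetical setup: 1,000 multiple-choice questions (10 options each)
# that neither model actually knows the answer to.
N_QUESTIONS, N_OPTIONS = 1000, 10

# The "guessing" model picks an option at random, so by chance it is
# right about 10% of the time. The "careful" model abstains every time,
# which an accuracy-only scoreboard counts as zero credit.
guessing_score = sum(random.randrange(N_OPTIONS) == 0 for _ in range(N_QUESTIONS))
careful_score = 0

print(guessing_score, careful_score)  # guessing scores ~100 of 1000, careful scores 0
```

Any scoring scheme that gives partial credit for an explicit "I don't know" (rather than zero) removes this incentive to guess.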

Read more of this story at Slashdot.

msmash