America's Los Alamos Lab Is Now Investing Heavily In AI For Science

3 months ago
Established in 1943 to coordinate America's building of the first atomic bomb, the Los Alamos National Lab in New Mexico is still "one of the world's largest and most advanced scientific institutions," notes Wikipedia. And it now has a "National Security AI Office," where senior director Jason Pruet is working to help "prepare for a future in which AI will reshape the landscape of science and security," according to the lab's science and technology magazine 1663. "This year, the Lab invested more in AI-related work than at any point in history..."

Pruet: AI is starting to feel like the next great foundation for scientific progress. Big companies are spending billions on large machines, but the buy-in costs of working at the frontiers of AI are so high that no university has the exascale-class machines needed to run the latest AI models. We're at a place now where we, meaning the government, can revitalize that pact by investing in the infrastructure to study AI for the public good... Part of what we're doing with the Lab's machines, like Venado — which has 2,500 GPUs — is giving universities access to that scale of computing. The scale is just completely different: a typical university might have 50 or 100 GPUs. Right now, for example, we have partnerships with the University of California, the University of Michigan, and many other universities where researchers can tap into this infrastructure. That's something we want to expand on. Having university collaboration will be critical if the Department of Energy is going to have a comprehensive AI program at scale that is focused on national security and energy dominance...

There was a time when I wouldn't have advocated for government investment in AI at the scale we're seeing now. But the weight of the evidence has become overwhelming. Large models — "frontier models" — have shown extraordinary capabilities, with recent advances in areas as diverse as hypothesis generation, mathematics, biological design, and complex multiphysics simulations. The potential for transformative impact is too significant to ignore.

"He no longer views the technology as just a tool, but as a fundamental shift in how scientists approach problems and make discoveries," the article concludes. "The global race humanity is now in... is about how to harness the technology's potential while mitigating its harms."

Thanks to Slashdot reader rabbitface25 — also a Los Alamos Lab science writer — for sharing his article.

Read more of this story at Slashdot.

EditorDavid

Fiverr Ad Mocks Vibe Coding - with a Singing Overripe Avocado

3 months ago
It's a cultural milestone: Fiverr just released an ad mocking vibe coding. The video features what its description calls a "clueless entrepreneur" building an app to tell if an avocado is ripe — who soon ends up blissfully singing with an avocado to the tune of the cheesy 1987 song "Nothing's Gonna Stop Us Now." The avocado sings joyously of "a new app on the rise in a no-code world that's too good to be true" (rhyming that with "So close. Just not tested through...")

"Let them say we're crazy. I don't care about bugs!" the entrepreneur sings back. "Built you in a minute, now I'm so high off this buzz..." But despite her singing to the overripe avocado that "I don't need a backend if I've got the spark!" and that they can "build this app together, vibe-coding forever. Nothing's going to stop us now!" — the build suddenly fails. (And it turns out that avocado really was overripe...) Fiverr then suggests viewers instead hire one of its experts to build their apps...

The art/design site Creative Bloq acknowledges Fiverr's "flip-flopping between scepticism and pro-AI marketing." (They point out a Fiverr ad last November had ended with the tagline "Nobody cares that you use AI! They care about the results — for the best ones hire Fiverr experts who've mastered every digital skill including AI.") But the site calls this new ad "a step in the right direction towards mindful AI usage." Just like an avocado that looks perfect on the outside, once you inspect the insides, AI-generated code can be deceptively unripe.

Fiverr might be feeling the impact of vibe coding itself. The freelancing site's share price fell over 14% this week, with one Yahoo! Finance story saying this week's quarterly results revealed Fiverr's active buyers dropped 10.9% compared to last year, to 3.4 million buyers — a decline that "overshadowed a 9.8% increase in spending per buyer." Even when issuing a buy recommendation, Seeking Alpha called it "a short-term rebound play, as the company faces longer-term risks from AI and active buyer churn."

Read more of this story at Slashdot.

EditorDavid

Would AI Perform Better If We Simulated Guilt?

3 months 1 week ago
Remember, it's all synthesized "anthropomorphizing." But with that caveat, Science News reports:

In populations of simple software agents (like characters in "The Sims," but much, much simpler), having "guilt" can be a stable strategy that benefits them and increases cooperation, researchers report July 30 in the Journal of the Royal Society Interface... When we harm someone, we often feel compelled to pay a penance, perhaps as a signal to others that we won't offend again. This drive for self-punishment can be called guilt, and it's how the researchers programmed it into their agents. The question was whether those that had it would be outcompeted by those that didn't, say Theodor Cimpeanu, a computer scientist at the University of Stirling in Scotland, and colleagues.

Science News spoke to a game-theory lecturer from Australia, who points out that it's hard to map simulations to real-world situations — and that they end up embodying many assumptions. Here the researchers were simulating the Prisoner's Dilemma, programming one AI agent that "felt guilt (lost points) only if it received information that its partner was also paying a guilt price after defecting." That turned out to be the most successful strategy. One of the paper's authors then raises the possibility that an evolving population of AIs "could connect the cold logic to human warmth."

Thanks to Slashdot reader silverjacket for sharing the article.
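For a concrete picture of what "guilt" means here, the following minimal Python sketch plays an iterated Prisoner's Dilemma between two toy agents that self-punish after defecting, but only when they have seen their partner paying a guilt price too. It is not the paper's actual model: the payoff values, the guilt penalty, the fixed random defection rate, and the class and function names are illustrative assumptions; the only idea carried over is the conditional self-punishment described above.

import random

# Standard Prisoner's Dilemma payoffs (illustrative values, not taken from the paper).
# Key is (my_move, partner_move); C = cooperate, D = defect.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

GUILT_COST = 2  # points an agent deducts from its own score after defecting (assumed value)


class GuiltAgent:
    """Toy agent that defects at random, but self-punishes for defecting,
    and only when it has learned that its partner also pays a guilt price."""

    def __init__(self, defect_prob=0.5):
        self.defect_prob = defect_prob
        self.score = 0

    def choose(self):
        return "D" if random.random() < self.defect_prob else "C"

    def settle(self, my_move, partner_move, partner_paid_guilt):
        self.score += PAYOFFS[(my_move, partner_move)]
        # "Social guilt": pay the penalty for defecting only if the partner
        # was observed paying its own guilt penalty last round.
        paid_guilt = (my_move == "D") and partner_paid_guilt
        if paid_guilt:
            self.score -= GUILT_COST
        return paid_guilt


def play_rounds(a, b, rounds=1000, seed=0):
    random.seed(seed)
    a_paid = b_paid = True  # assume both agents start out willing to self-punish
    for _ in range(rounds):
        move_a, move_b = a.choose(), b.choose()
        # The right-hand side is evaluated first, so each agent sees the
        # other's guilt flag from the previous round.
        a_paid, b_paid = a.settle(move_a, move_b, b_paid), b.settle(move_b, move_a, a_paid)
    return a.score, b.score


if __name__ == "__main__":
    print(play_rounds(GuiltAgent(), GuiltAgent()))

The sketch only illustrates the bookkeeping of the conditional guilt penalty; the study's agents additionally evolve their strategies across a population over many generations, which this toy code does not attempt.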

Read more of this story at Slashdot.

EditorDavid