
Europe's cloud datacenter ambition 'completely crazy' says SAP CEO

3 months 1 week ago
Christian Klein sees little benefit from trying to compete with the dominant hyperscalers

The leader of Europe's most valuable company says there is no point in the continent building datacenters to try to compete with US cloud hyperscalers which have already invested in the region.…

Lindsay Clark

Floppy disks and paper strips lurk behind US air traffic control

3 months 1 week ago
Not to worry, nervous flyers, the FAA vows to banish archaic systems... in a few years

The Federal Aviation Administration (FAA) has confirmed that the US air traffic control system still runs on somewhat antiquated bits of technology, including floppy disks and paper strips.…

Richard Speed

Apple Researchers Challenge AI Reasoning Claims With Controlled Puzzle Tests

3 months 1 week ago
Apple researchers have found that state-of-the-art "reasoning" AI models, including OpenAI's o3-mini, Gemini (with thinking mode enabled), Claude 3.7, and DeepSeek-R1, face complete performance collapse [PDF] beyond certain complexity thresholds when tested in controllable puzzle environments. The finding raises questions about the true reasoning capabilities of large language models.

The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress. At low complexity, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources. At medium complexity, reasoning models demonstrated advantages, but both model types suffered complete accuracy collapse at high complexity.

Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token-generation limits. Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly. The researchers also noted fundamental inconsistencies in how models applied learned strategies across problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios.
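What makes puzzles like these attractive as benchmarks is that difficulty can be dialed up precisely and every candidate answer can be checked mechanically. As a rough illustration of that idea (a minimal sketch, not the paper's actual harness), the Python below models Tower of Hanoi: it generates the optimal move sequence, whose length grows exponentially as 2^n - 1 with disk count n, and validates an arbitrary move list such as one a model might emit.

```python
# Minimal Tower of Hanoi puzzle environment (illustrative sketch only).
# Complexity is controlled by n, the number of disks; the optimal
# solution length is 2**n - 1 moves.

def hanoi_solution(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Return the optimal move sequence for n disks as (from_peg, to_peg) pairs."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then restack.
    return (
        hanoi_solution(n - 1, src, dst, aux)
        + [(src, dst)]
        + hanoi_solution(n - 1, aux, src, dst)
    )

def validate_moves(n: int, moves: list[tuple[str, str]]) -> bool:
    """Replay a candidate move sequence (e.g. model output) and check that
    every move is legal and the puzzle ends in the solved state."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # disks bottom-to-top
    for src, dst in moves:
        if not pegs[src]:
            return False  # illegal: moving from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # illegal: larger disk placed on a smaller one
        pegs[dst].append(disk)
    return pegs["C"] == list(range(n, 0, -1))  # all disks on the target peg

if __name__ == "__main__":
    for n in (3, 7, 10):
        sol = hanoi_solution(n)
        print(f"n={n}: optimal length {len(sol)} (2**n - 1 = {2**n - 1}), "
              f"valid={validate_moves(n, sol)}")
```

Because the validator is exact, an evaluation built this way can distinguish a genuinely executed step-by-step procedure from a plausible-looking but illegal move sequence, which is what lets the researchers probe where "reasoning" models stop following the algorithm.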


msmash