
Tom Lehrer, Satirical Songwriter and Mathematician, Dies at Age 97

1 month 2 weeks ago
Satirical singer-songwriter Tom Lehrer died Saturday at age 97. The Associated Press notes Lehrer had long ago "largely abandoned his music career to return to teaching math at Harvard and other universities." Lehrer had remained on the math faculty of the University of California, Santa Cruz well into his late 70s. In 2020, he even turned away from his own copyright, granting the public permission to use his lyrics in any format without any fee in return.

A Harvard prodigy (he had earned a math degree from the institution at age 18), Lehrer soon turned his very sharp mind to old traditions and current events... He'd gotten into performing accidentally when he began to compose songs in the early 1950s to amuse his friends. Soon he was performing them at coffeehouses around Cambridge, Massachusetts, while he remained at Harvard to teach and obtain a master's degree in math. [Lehrer also "spent several years unsuccessfully pursuing a doctorate..."] He cut his first record in 1953, "Songs by Tom Lehrer"...

After a two-year stint in the Army, Lehrer began to perform concerts of his material in venues around the world. In 1959, he released another LP called "More of Tom Lehrer" and a live recording called "An Evening Wasted with Tom Lehrer," nominated for a Grammy for best comedy performance (musical) in 1960. But around the same time, he largely quit touring and returned to teaching math, though he did some writing and performing on the side. Lehrer said he was never comfortable appearing in public...

He did produce a political satire song each week for the 1964 television show "That Was the Week That Was," a groundbreaking topical comedy show that anticipated "Saturday Night Live" a decade later. He released the songs the following year in an album titled "That Was the Year That Was"... [Lehrer's body of work "was actually quite small," the article notes, "amounting to about three dozen songs."]

He also wrote songs for the 1970s educational children's show "The Electric Company." He told AP in 2000 that hearing from people who had benefited from them gave him far more satisfaction than praise for any of his satirical works...

He began to teach part-time at Santa Cruz in the 1970s, mainly to escape the harsh New England winters. From time to time, he acknowledged, a student would enroll in one of his classes based on knowledge of his songs. "But it's a real math class," he said at the time. "I don't do any funny theorems. So those people go away pretty quickly."

Read more of this story at Slashdot.

EditorDavid

Huawei Shows Off 384-Chip AI Computing System That Rivals Nvidia's Top Product

1 month 2 weeks ago
Long-time Slashdot reader hackingbear writes: China's Huawei Technologies showed off an AI computing system on Saturday that can rival Nvidia's most advanced offering, even though the company faces U.S. export restrictions. The CloudMatrix 384 system made its public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth.

The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips, optically connected in an all-to-all topology, and outperforms Nvidia's GB200 NVL72, which uses 72 B200 chips, on some metrics, according to SemiAnalysis. A full CloudMatrix system can now deliver 300 PFLOPs of dense BF16 compute, almost double that of the GB200 NVL72. With more than 3.6x the aggregate memory capacity and 2.1x more memory bandwidth, Huawei and China "now have AI system capabilities that can beat Nvidia's," according to a report by SemiAnalysis. The trade-off is that it takes 4.1x the power of a GB200 NVL72, with 2.5x worse power per FLOP, 1.9x worse power per TB/s of memory bandwidth, and 1.2x worse power per TB of HBM capacity, but SemiAnalysis noted that China has no power constraints, only chip constraints.

Nvidia had announced the DGX H100 NVL256 "Ranger" platform [with 256 GPUs], SemiAnalysis writes, but "decided to not bring it to production due to it being prohibitively expensive, power hungry, and unreliable due to all the optical transceivers required and the two tiers of network. The CloudMatrix Pod requires an incredible 6,912 400G LPO transceivers for networking, the vast majority of which are for the scaleup network."

Also at the event, Chinese e-commerce giant Alibaba released a new flagship open-source reasoning model, Qwen3-235B-A22B-Thinking-2507, which has "already topped key industry benchmarks, outperforming powerful proprietary systems from rivals like Google and OpenAI," according to industry reports. On the AIME25 benchmark, a test designed to evaluate sophisticated, multi-step problem-solving skills, Qwen3-Thinking-2507 achieved a remarkable score of 92.3, placing it ahead of some of the most powerful proprietary models and notably surpassing Google's Gemini-2.5 Pro. On LiveCodeBench, Qwen3-Thinking secured a top score of 74.1, comfortably ahead of both Gemini-2.5 Pro and OpenAI's o4-mini, demonstrating its practical utility for developers and engineering teams.
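As a rough consistency check, the quoted efficiency penalties and capability ratios can be cross-checked against one another with a few lines of arithmetic. The sketch below uses only figures quoted in the SemiAnalysis report above (the 4.1x power draw, the per-unit power penalties, and the roughly 2x/2.1x/3.6x headline capability ratios); it makes no assumptions beyond those numbers, and small mismatches simply reflect rounding in the quoted figures.

```python
# Back-of-the-envelope check of the SemiAnalysis ratios quoted above.
# If the CloudMatrix 384 draws 4.1x the power of a GB200 NVL72, then each
# "Nx worse power per unit" penalty implies a capability ratio of
# power_ratio / penalty, which should roughly match the headline figures.

POWER_RATIO = 4.1  # CloudMatrix 384 vs. GB200 NVL72 total power draw

# (metric, power-efficiency penalty, headline capability ratio from the summary)
metrics = [
    ("dense BF16 compute", 2.5, 2.0),  # "almost double" at 300 PFLOPs
    ("memory bandwidth",   1.9, 2.1),
    ("HBM capacity",       1.2, 3.6),
]

for name, penalty, headline in metrics:
    implied = POWER_RATIO / penalty
    print(f"{name:<20} implied {implied:.2f}x  vs. headline ~{headline}x")

# Aggregate optics for the pod's 6,912 400G LPO transceivers
# (raw line rate only; usable scale-up throughput depends on the topology):
print(f"total optics: {6_912 * 400 / 1_000:.0f} Tb/s of raw line rate")
```

All three implied ratios (about 1.6x, 2.2x, and 3.4x) land within rounding distance of the quoted headline numbers, which is as far as the summary's figures can be cross-checked.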

Read more of this story at Slashdot.

EditorDavid
