
IBM Says It's Cracked Quantum Error Correction

2 weeks 6 days ago
Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029.

Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits. One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing.

Around 2019, the company began to investigate alternatives. In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum computing architecture that can realize this approach. "We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
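
For readers unfamiliar with the redundancy idea described above, here is a toy sketch: a classical three-bit repetition code with majority-vote decoding. It is not IBM's qLDPC scheme or a surface code (real quantum codes must also correct phase errors and cannot simply copy qubit states); it only illustrates how spreading one logical bit across several physical bits suppresses individual failures.

```python
import random

# Toy illustration of the redundancy idea: a classical 3-bit repetition
# code with majority-vote decoding. This is not IBM's qLDPC scheme or a
# surface code (real quantum codes must also handle phase errors and cannot
# simply copy qubit states), but it shows how spreading one logical bit
# across several physical bits suppresses individual failures.

def encode(bit, n=3):
    """Spread one logical bit across n physical bits."""
    return [bit] * n

def apply_noise(bits, flip_prob):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if fewer than half flipped."""
    return int(sum(bits) > len(bits) / 2)

trials, flip_prob = 100_000, 0.10
logical_errors = sum(
    decode(apply_noise(encode(0), flip_prob)) != 0 for _ in range(trials)
)
print(f"physical error rate: {flip_prob}")
print(f"logical error rate:  {logical_errors / trials:.4f}")  # ~0.028
```

With a 10 percent chance of any single bit flipping, the majority vote fails only when two or more of the three bits flip, which works out to roughly 2.8 percent. Surface codes and qLDPC codes apply the same principle with far more structure, which is where the large physical-qubit overhead, and IBM's claimed tenfold reduction in it, comes from.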

Read more of this story at Slashdot.

BeauHD

Enterprise AI Adoption Stalls As Inferencing Costs Confound Cloud Customers

2 weeks 6 days ago
According to market analyst firm Canalys, enterprise adoption of AI is slowing due to unpredictable and often high costs associated with model inferencing in the cloud. Despite strong growth in cloud infrastructure spending, businesses are increasingly scrutinizing cost-efficiency, with some opting for alternatives to public cloud providers as they grapple with volatile usage-based pricing models. The Register reports:

[Canalys] published stats that show businesses spent $90.9 billion globally on infrastructure and platform-as-a-service with the likes of Microsoft, AWS and Google in calendar Q1, up 21 percent year-on-year, as the march of cloud adoption continues. Canalys says that growth came from enterprise users migrating more workloads to the cloud and exploring the use of generative AI, which relies heavily on cloud infrastructure. Yet even as organizations move beyond development and trials to deployment of AI models, a lack of clarity over the ongoing recurring costs of inferencing services is becoming a concern.

"Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a critical constraint on the path to AI commercialization," said Canalys senior director Rachel Brindley. "As AI transitions from research to large-scale deployment, enterprises are increasingly focused on the cost-efficiency of inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators," she added.

Canalys researcher Yi Zhang said many AI services follow usage-based pricing models that charge on a per-token or per-API-call basis. This makes cost forecasting hard as use of the services scales up. "When inference costs are volatile or excessively high, enterprises are forced to restrict usage, reduce model complexity, or limit deployment to high-value scenarios," Zhang said. "As a result, the broader potential of AI remains underutilized."

[...] According to Canalys, cloud providers are aiming to improve inferencing efficiency via a modernized infrastructure built for AI, and reduce the cost of AI services. The report notes that AWS, Azure, and Google Cloud "continue to dominate the IaaS and PaaS market, accounting for 65 percent of customer spending worldwide." "However, Microsoft and Google are slowly gaining ground on AWS, as its growth rate has slowed to 'only' 17 percent, down from 19 percent in the final quarter of 2024, while the two rivals have maintained growth rates of more than 30 percent."
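
To see why token-metered pricing is hard to forecast, here is a minimal back-of-envelope sketch: cost scales with request volume and tokens per request, both of which drift as usage grows. The function, request volumes, and per-token prices below are placeholder assumptions, not any provider's published rates.

```python
# Back-of-envelope model of usage-based inference pricing. All prices and
# volumes are illustrative assumptions, not quotes from any cloud provider.

def monthly_inference_cost(requests_per_day,
                           input_tokens_per_request,
                           output_tokens_per_request,
                           price_per_1k_input_tokens,
                           price_per_1k_output_tokens,
                           days=30):
    """Estimate monthly spend for a token-metered inference service."""
    input_cost = requests_per_day * input_tokens_per_request / 1000 * price_per_1k_input_tokens
    output_cost = requests_per_day * output_tokens_per_request / 1000 * price_per_1k_output_tokens
    return (input_cost + output_cost) * days

# Same application, but traffic doubles and responses get 50% longer:
baseline = monthly_inference_cost(50_000, 800, 300, 0.0005, 0.0015)
grown    = monthly_inference_cost(100_000, 800, 450, 0.0005, 0.0015)
print(f"baseline month: ${baseline:,.0f}")           # ~$1,275
print(f"after growth:   ${grown:,.0f}  ({grown / baseline:.1f}x)")  # ~$3,225, 2.5x
```

Under these assumed numbers, a doubling of traffic combined with longer responses raises the monthly bill by roughly 2.5x, which is the kind of nonlinear drift that makes budgeting recurring inference costs harder than budgeting a one-time training run.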

Read more of this story at Slashdot.

BeauHD

There Aren't Enough Cables To Meet Growing Electricity Demand

3 weeks ago
High-voltage electricity cables have become a major constraint throttling the clean energy transition, with manufacturing facilities booked out for years as demand far exceeds supply capacity. The energy transition, trade barriers, and overdue grid upgrades have turbocharged demand for these highly sophisticated cables that connect wind farms, solar installations, and cross-border power networks. The International Energy Agency estimates that 80 million kilometers of grid infrastructure must be built between now and 2040 to meet clean energy targets -- equivalent to rebuilding the entire existing global grid that took a century to construct, but compressed into just 15 years. Each high-voltage cable requires custom engineering and months-long production in specialized 200-meter towers, with manufacturers reporting that 80-90% of major projects now use high-voltage direct current technology versus traditional alternating current systems.
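
As a rough sanity check on the scale implied by the IEA figure, the snippet below spreads 80 million kilometers over the article's 15-year window; the per-year and per-day breakdowns are illustrative arithmetic, not figures from the agency's report.

```python
# Rough arithmetic on the IEA estimate cited above: 80 million km of grid
# infrastructure by 2040, spread over the article's 15-year window.

total_km = 80_000_000   # IEA estimate of grid to be built by 2040
years = 15              # window implied by the article (now through 2040)

per_year = total_km / years
per_day = per_year / 365

print(f"{per_year:,.0f} km per year")  # ~5.3 million km/year
print(f"{per_day:,.0f} km per day")    # ~14,600 km/day, more than Earth's diameter
```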

Read more of this story at Slashdot.

msmash