
Smart Tires Will Report On the Health of Roads In New Pilot Program

4 weeks 1 day ago
An anonymous reader quotes a report from Ars Technica: Do you remember the Pirelli Cyber Tire? No, it's not an angular nightmare clad in stainless steel. Rather, it's a sensor-equipped tire that can inform the car it's fitted to what's happening, both with the tire itself and the road it's passing over. The technology has slowly been making its way into the real world, starting with rarefied stuff like the McLaren Artura. Now, Pirelli is going to put some Cyber Tires to work for everybody, not just supercar drivers, in a new pilot program with the regional government of Apulia in Italy.

The Cyber Tire has a sensor to monitor temperature and pressure, using Bluetooth Low Energy to communicate with the car. The electronics are able to withstand more than 3,500 G as part of life on the road, and a 0.3-oz (10 g) battery keeps everything running for the life of the tire. The idea was to develop a better tire pressure monitoring system, one that could tell the car exactly what kind of tire -- summer, winter, all-season, and so on -- was fitted, and even its state of wear, allowing the car to adapt its settings appropriately. But other applications suggested themselves -- at a recent CES, Pirelli showed how a Cyber Tire could warn other road users about aquaplaning. Then again, we've been waiting more than a decade for vehicle-to-vehicle communication to make a difference in daily driving to no avail.

Apulia's program does not rely on crowdsourcing data from Cyber Tires fitted to private vehicles. Regardless of the privacy implications, the rubber isn't nearly in widespread enough use for there to be a sufficient population of Cyber Tire-shod cars in the region. Instead, Pirelli will fit the tires to a fleet of vehicles supplied by the fleet management and rental company Ayvens. Driving around, the sensors in the tires will be able to infer how rough or irregular the asphalt is, via some clever algorithms. That's only one part of it, however. Pirelli and Apulia are also combining input from the tires with data from a network of road cameras and some technology from the Swedish startup Univrses. As you might expect, this data is combined in the cloud, and dashboards are available to enable end users to explore the data.
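The report says "clever algorithms" infer road roughness from the tire sensors' readings without describing them. As a rough illustration of the general idea only, here is a minimal Python sketch that scores a road segment by the speed-normalized RMS of vertical vibration; the field names, sampling rate, and scoring formula are all assumptions and not Pirelli's actual method.

```python
# Hypothetical sketch: scoring road-segment roughness from tire-mounted
# accelerometer samples. Everything here (field names, units, formula) is an
# illustrative assumption, not Pirelli's algorithm.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class TireSample:
    timestamp_s: float       # time of the reading, seconds
    vertical_accel_g: float  # vertical acceleration sensed at the tire, in g
    speed_mps: float         # vehicle speed, metres per second
    lat: float               # GPS position of the reading
    lon: float

def roughness_score(samples: List[TireSample]) -> float:
    """RMS-based roughness score, normalized by speed so that driving faster
    over the same surface does not inflate the result."""
    if not samples:
        return 0.0
    normalized = [
        s.vertical_accel_g / max(s.speed_mps, 1.0)  # avoid divide-by-zero at standstill
        for s in samples
    ]
    return math.sqrt(sum(v * v for v in normalized) / len(normalized))

# Example: a smooth stretch vs. a broken stretch of asphalt at the same speed
smooth = [TireSample(t * 0.01, 0.02, 20.0, 41.12, 16.87) for t in range(100)]
rough = [TireSample(t * 0.01, 0.35, 20.0, 41.13, 16.88) for t in range(100)]
print(f"smooth segment score: {roughness_score(smooth):.4f}")
print(f"rough segment score:  {roughness_score(rough):.4f}")
```

In a real pipeline, scores like these would presumably be tagged with position and uploaded to the cloud layer mentioned above, where they could be merged with the camera and Univrses data for the dashboards.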

Read more of this story at Slashdot.

BeauHD

IBM Says It's Cracked Quantum Error Correction

4 weeks 1 day ago
Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029.

Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits. One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing. Around 2019, the company began to investigate alternatives.

In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum computing architecture that can realize this new approach. "We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
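To put the quoted overheads in perspective, here is a back-of-the-envelope Python comparison using only the two ratios from the summary (about 1,000 physical qubits per logical qubit for a surface code, and roughly one-tenth of that for qLDPC codes). The logical-qubit targets in the loop are arbitrary examples, not IBM specifications.

```python
# Back-of-the-envelope physical-qubit overhead comparison, using only the
# ratios quoted in the summary. The logical-qubit targets are made-up examples.
SURFACE_CODE_PHYSICAL_PER_LOGICAL = 1_000
QLDPC_PHYSICAL_PER_LOGICAL = SURFACE_CODE_PHYSICAL_PER_LOGICAL // 10  # ~1/10th

def physical_qubits_needed(logical_qubits: int, overhead: int) -> int:
    """Total physical qubits for a given logical-qubit target and encoding overhead."""
    return logical_qubits * overhead

for logical in (10, 100, 1_000):  # hypothetical machine sizes
    surface = physical_qubits_needed(logical, SURFACE_CODE_PHYSICAL_PER_LOGICAL)
    qldpc = physical_qubits_needed(logical, QLDPC_PHYSICAL_PER_LOGICAL)
    print(f"{logical:>5} logical qubits: surface code ~{surface:,} physical, "
          f"qLDPC ~{qldpc:,} physical")
```

The point of the arithmetic is simply that a tenfold reduction in encoding overhead turns million-qubit hardware requirements into hundred-thousand-qubit ones, which is why IBM frames the remaining work as engineering rather than science.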

Read more of this story at Slashdot.

BeauHD

Enterprise AI Adoption Stalls As Inferencing Costs Confound Cloud Customers

4 weeks 1 day ago
According to market analyst firm Canalys, enterprise adoption of AI is slowing due to unpredictable and often high costs associated with model inferencing in the cloud. Despite strong growth in cloud infrastructure spending, businesses are increasingly scrutinizing cost-efficiency, with some opting for alternatives to public cloud providers as they grapple with volatile usage-based pricing models. The Register reports:

[Canalys] published stats that show businesses spent $90.9 billion globally on infrastructure and platform-as-a-service with the likes of Microsoft, AWS and Google in calendar Q1, up 21 percent year-on-year, as the march of cloud adoption continues. Canalys says that growth came from enterprise users migrating more workloads to the cloud and exploring the use of generative AI, which relies heavily on cloud infrastructure. Yet even as organizations move beyond development and trials to deployment of AI models, a lack of clarity over the ongoing recurring costs of inferencing services is becoming a concern.

"Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a critical constraint on the path to AI commercialization," said Canalys senior director Rachel Brindley. "As AI transitions from research to large-scale deployment, enterprises are increasingly focused on the cost-efficiency of inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators," she added.

Canalys researcher Yi Zhang said many AI services follow usage-based pricing models that charge on a per-token or per-API-call basis. This makes cost forecasting hard as use of the services scales up. "When inference costs are volatile or excessively high, enterprises are forced to restrict usage, reduce model complexity, or limit deployment to high-value scenarios," Zhang said. "As a result, the broader potential of AI remains underutilized." [...]

According to Canalys, cloud providers are aiming to improve inferencing efficiency via a modernized infrastructure built for AI, and reduce the cost of AI services. The report notes that AWS, Azure, and Google Cloud "continue to dominate the IaaS and PaaS market, accounting for 65 percent of customer spending worldwide." "However, Microsoft and Google are slowly gaining ground on AWS, as its growth rate has slowed to 'only' 17 percent, down from 19 percent in the final quarter of 2024, while the two rivals have maintained growth rates of more than 30 percent."
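A small Python sketch of why per-token billing makes inference budgets hard to pin down: the monthly bill scales linearly with request volume and token counts, so a traffic spike moves costs in direct proportion. All prices and volumes below are invented for illustration and do not reflect any provider's actual rate card.

```python
# Illustrative only: estimating monthly spend for a token-billed inference API.
# Prices and volumes are invented, not any specific provider's pricing.
def monthly_inference_cost(requests_per_day: int,
                           avg_input_tokens: int,
                           avg_output_tokens: int,
                           price_per_1k_input: float,
                           price_per_1k_output: float,
                           days: int = 30) -> float:
    """Monthly cost = requests x days x (input tokens + output tokens, priced per 1K)."""
    cost_per_request = (
        (avg_input_tokens / 1_000) * price_per_1k_input
        + (avg_output_tokens / 1_000) * price_per_1k_output
    )
    return requests_per_day * days * cost_per_request

# The same workload at 10x the traffic: the bill scales linearly with usage,
# which is exactly what makes budgeting difficult before deployment.
baseline = monthly_inference_cost(10_000, 500, 300, 0.002, 0.006)
scaled = monthly_inference_cost(100_000, 500, 300, 0.002, 0.006)
print(f"baseline: ${baseline:,.2f}/month, at 10x traffic: ${scaled:,.2f}/month")
```

This is the recurring-cost dynamic Brindley contrasts with training: the spend never stops, and it grows with every additional user or longer prompt, which is why enterprises end up restricting usage or simplifying models.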

Read more of this story at Slashdot.

BeauHD