
McKinsey Wonders How To Sell AI Apps With No Measurable Benefits

2 weeks 2 days ago
Software vendors keen to monetize AI should tread cautiously: they risk inflating costs for their customers without delivering promised benefits such as reduced employee head count. From a report: The latest report from McKinsey & Company mulls what software-as-a-service (SaaS) vendors need to do to navigate the minefield of hype surrounding AI and successfully fold such capabilities into their offerings. The consultancy identifies three main challenges holding back broader growth in AI software monetization. The first is the inability to show customers the savings they can expect. Many software firms trumpet potential use cases for AI, but only 30 percent have published quantifiable return on investment from real customer deployments. Meanwhile, many customers see AI driving up IT costs without being able to offset the increase by cutting labor costs. The billions poured into developing AI models mean they don't come cheap, and AI-enabling the entire customer service stack of a typical business could lead to a 60 to 80 percent price increase, McKinsey says, quoting an HR executive at a Fortune 100 company: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet." The second challenge is scaling up adoption after introduction, which the report blames on underinvestment in change management: for every $1 spent on model development, firms should expect to spend $3 on change management, meaning user training and performance monitoring. The third is a lack of predictable pricing, which leaves customers unable to forecast how their AI costs will scale with usage because pricing models are often complex and opaque.


msmash

Ubuntu Update Backlog: How a Brief Canonical Outage Cascaded into Multi-Day Delays

2 weeks 2 days ago
by George Whittaker

Introduction

In early September 2025, Ubuntu users globally experienced disruptive delays in installing updates and new packages. What seemed like a fleeting outage—only about 36 minutes of server downtime—triggered a cascade of effects: mirrors lagging, queued requests overflowing, and installations hanging for days. The incident exposed how fragile parts of Ubuntu’s update infrastructure can be under sudden load.

In this article, we’ll walk through what happened, why the fallout was so severe, how Canonical responded, and lessons for users and infrastructure architects alike.

What Happened: Outage & Immediate Impact

On September 5, 2025, Canonical’s archive servers—specifically archive.ubuntu.com and security.ubuntu.com—suffered an unplanned outage. Canonical’s status page showed the incident lasting roughly 36 minutes, after which operations were declared “resolved.”

However, that brief disruption set off a domino effect. Because the archive and security servers are the central hubs of Ubuntu’s package ecosystem, any downtime creates a massive backlog of mirror synchronization and client requests. Mirrors fell out of sync, processing queues piled up, and users attempting updates or new installs encountered failed downloads, hung operations, or “404 / package not found” errors.
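One rough way to observe this staleness from the client side is to compare the Date: field of the InRelease index published by the primary archive with the copy a given mirror is serving. The Python sketch below (standard library only) does that; the mirror hostname and the six-hour threshold are illustrative placeholders, and the suite name assumes a current LTS release, so adjust both for your own setup.

from datetime import datetime, timedelta
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

PRIMARY = "http://archive.ubuntu.com/ubuntu"   # primary Ubuntu archive
MIRROR = "http://mirror.example.com/ubuntu"    # placeholder mirror URL
SUITE = "noble"                                # assumes Ubuntu 24.04 LTS

def release_date(base_url: str) -> datetime:
    """Fetch dists/<suite>/InRelease and return the timestamp from its Date: field."""
    url = f"{base_url}/dists/{SUITE}/InRelease"
    with urlopen(url, timeout=10) as resp:
        for raw in resp:
            line = raw.decode("utf-8", errors="replace")
            if line.startswith("Date:"):
                return parsedate_to_datetime(line.split(":", 1)[1].strip())
    raise ValueError(f"no Date: field found in {url}")

primary_ts = release_date(PRIMARY)
mirror_ts = release_date(MIRROR)
lag = primary_ts - mirror_ts
print(f"primary: {primary_ts}   mirror: {mirror_ts}   lag: {lag}")
if lag > timedelta(hours=6):
    print("mirror appears stale relative to the primary archive")

If the mirror’s timestamp lags far behind the primary’s, pointing apt at a different mirror (or at the primary archive itself) is often enough to unblock updates while the mirrors catch up.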

On Ubuntu’s community forums, Canonical acknowledged that while the server outage was short, the upload / processing queue for security and repository updates had become “obscenely” backlogged. Users were urged to be patient, as there was no immediate workaround.

Throughout September 5–7, users continued reporting incomplete or failed updates, slow mirror responses, and installations freezing mid-process. Even newly provisioned systems faced broken repos because of inconsistent mirror states.
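For provisioning pipelines that cannot simply wait, one pragmatic (if blunt) mitigation is to retry the index refresh with backoff rather than failing the build on the first inconsistent mirror response. The sketch below is a hypothetical Python wrapper around apt-get update along those lines; it assumes it runs as root, and the retry counts and delays are arbitrary illustrative values.

import subprocess
import time

def apt_update_with_retries(attempts: int = 5, initial_delay: float = 30.0) -> bool:
    """Run 'apt-get update', retrying with exponential backoff on failure.

    apt-get update can exit 0 even when some index downloads fail, so the
    output is also checked for 'Err:' lines as a rough failure signal.
    """
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        result = subprocess.run(["apt-get", "update"], capture_output=True, text=True)
        if result.returncode == 0 and "Err:" not in result.stdout:
            return True
        print(f"attempt {attempt} failed (rc={result.returncode}); retrying in {delay:.0f}s")
        time.sleep(delay)
        delay *= 2  # back off so retries don't hammer already-strained mirrors
    return False

if __name__ == "__main__":
    if not apt_update_with_retries():
        raise SystemExit("apt-get update still failing; mirrors may still be out of sync")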

By September 8, the situation largely stabilized: mirrors caught up, package availability resumed, and normal update flows returned. But the extended period of degraded service had already left many users frustrated.

Why a Short Outage Turned into Days of Disruption

At first blush, 36 minutes seems trivial. Why did it have such prolonged consequences? Several factors contributed:

  1. Centralized repository backplane. Ubuntu’s infrastructure is architected around central canonical repositories (archive, security), which then propagate to mirrors worldwide. When the central system is unavailable, mirrors stop receiving updates and become stale.

George Whittaker