London Became a Global Hub for Phone Theft. Now We Know Why.

3 months 2 weeks ago
London police finally understand why 80,000 phones disappeared from the city's streets last year. The answer involves budget cuts [non-paywalled source] that hollowed out British policing in the 2010s, the arrival of electric bikes that made theft easy, and a lucrative black market in China where stolen British phones retain full functionality. The Metropolitan Police discovered an industrial-scale operation in December when officers traced a woman's iPhone to a Heathrow warehouse on Christmas Eve. Boxes labeled as batteries and bound for Hong Kong contained almost 1,000 stolen iPhones. The police arrested two men in their thirties in September as suspected ringleaders of a group that sent up to 40,000 stolen phones to China. The epidemic took root after Conservative-led austerity measures reduced police numbers and budgets. In 2017 the Metropolitan Police announced it would stop investigating low-level crimes to focus resources on serious violence and sexual offenses. Thieves on rented electric bikes began mounting sidewalks to snatch phones at high speed while wearing balaclavas and hoods. Police data shows only 495 people were charged out of 106,000 phones reported stolen between March 2024 and February 2025. Thieves earn up to $401 per device. The phones sell for up to $5,000 in China because Chinese network providers do not subscribe to the international blacklist for stolen devices.

Read more of this story at Slashdot.

msmash

Self-Tuning Linux Kernels: How LLM-Driven Agents Are Reinventing Scheduler Policies

3 months 2 weeks ago
by George Whittaker

Introduction

Modern computing systems rely heavily on operating-system schedulers to allocate CPU time fairly and efficiently. Yet many of these schedulers are blind to the meaning of their workloads: they cannot distinguish, for example, whether a task is latency-sensitive or batch-oriented. This mismatch between application semantics and scheduler heuristics is often referred to as the semantic gap.

A recent research framework called SchedCP aims to close that gap. Using autonomous LLM-based agents, the system analyzes workload characteristics, selects or synthesizes custom scheduling policies, and safely deploys them into the kernel without human intervention. This represents a meaningful step toward self-optimizing, application-aware kernels.

In this article we will explore what SchedCP is, how it works under the hood, the evidence for its effectiveness, its real-world implications, and the caveats that remain.

Why the Problem Matters

At the heart of the issue is that general-purpose schedulers (for example, the Linux kernel’s default policy) optimise for broad fairness rather than tailoring scheduling to what your application cares about. For instance:

  • A video-streaming service may care most about minimal tail latency.

  • A CI/CD build system may care most about throughput and job completion time.

  • A cloud analytics job may prefer maximum utilisation of cores with less concern for interactive responsiveness.

Traditional schedulers treat all tasks mostly the same, exposing only generic tuning knobs, so systems routinely leave optimisation opportunities on the table. Some prior efforts have used reinforcement-learning techniques to tune scheduler parameters, but these approaches have limitations: slow convergence, limited generalisation, and weak reasoning about why a workload behaves as it does.

SchedCP starts from the observation that large language models can reason semantically about workloads (expressed in plain language or structured summaries), propose new scheduling strategies, and generate eBPF code that is loaded into the kernel through the sched_ext interface. A custom scheduler (or a modified policy) can thus be developed for a specific workload scenario in a self-service, automated way.
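
To make that pipeline concrete, below is a minimal sketch of what an agent-generated scheduler could look like on the BPF side: a global-FIFO policy written as a sched_ext struct_ops program, loosely modeled on the scx_simple sample that ships with the kernel. The header path, helper names (such as scx_bpf_dispatch), and macros vary across kernel versions, so treat this as an illustration rather than SchedCP’s actual output.

/* minimal_fifo.bpf.c -- illustrative sched_ext scheduler (BPF side).
 * Loosely based on the kernel's scx_simple sample; helper names and
 * header locations differ between kernel versions. */
#include <scx/common.bpf.h>

char _license[] SEC("license") = "GPL";

/* Enqueue every runnable task on the shared global dispatch queue
 * with the default time slice; CPUs then pull tasks from that queue
 * in FIFO order. */
void BPF_STRUCT_OPS(fifo_enqueue, struct task_struct *p, u64 enq_flags)
{
    scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
}

/* Registering this struct_ops map activates the policy above in place
 * of the default scheduler until it is detached. */
SEC(".struct_ops.link")
struct sched_ext_ops fifo_ops = {
    .enqueue = (void *)fifo_enqueue,
    .name    = "fifo_sketch",
};

Once compiled with clang's BPF target, an object like this can be loaded and attached from user space, which is exactly the kind of artifact an agent can generate, test, and swap out per workload.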

Architecture & Key Components

SchedCP comprises two primary subsystems: a control-plane framework and an agent loop that interacts with it. The framework decouples “what to optimise” (reasoning) from “how to act” (execution) in order to preserve kernel stability while enabling powerful optimisations.
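
As a rough illustration of the “how to act” side of that split (not SchedCP’s actual code), the sketch below shows how an execution plane could deploy a generated scheduler object using standard libbpf calls: open the compiled BPF object, load it, attach its struct_ops map, and keep the link handle so the policy can be detached again if monitoring detects a regression. The file name and map name are hypothetical and match the earlier sketch.

/* deploy_policy.c -- illustrative execution-plane loader using libbpf.
 * File name and map name ("fifo_ops") are hypothetical; the libbpf
 * calls themselves are standard. Build with: cc deploy_policy.c -lbpf */
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
    struct bpf_object *obj;
    struct bpf_map *ops_map;
    struct bpf_link *link;

    /* Open and load the object compiled from the generated scheduler. */
    obj = bpf_object__open_file("minimal_fifo.bpf.o", NULL);
    if (!obj || bpf_object__load(obj))
        return 1;

    /* Attaching the struct_ops map switches the kernel to the custom policy. */
    ops_map = bpf_object__find_map_by_name(obj, "fifo_ops");
    link = ops_map ? bpf_map__attach_struct_ops(ops_map) : NULL;
    if (!link) {
        bpf_object__close(obj);
        return 1;
    }

    /* ... monitor the workload here; on regression, detach to roll back ... */
    getchar();

    bpf_link__destroy(link);   /* kernel reverts to the default scheduler */
    bpf_object__close(obj);
    return 0;
}

Keeping deployment behind a narrow interface like this is what lets the reasoning side experiment aggressively while the execution side stays small, auditable, and easy to roll back.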

Here are the major components:

Go to Full Article
George Whittaker