Google is Using YouTube Videos To Train Its AI Video Generator

1 month 3 weeks ago
Google is using its expansive library of YouTube videos to train its AI models, including Gemini and the Veo 3 video and audio generator, CNBC reported Thursday. From the report: The tech company is turning to its catalog of 20 billion YouTube videos to train these new-age AI tools, according to a person who was not authorized to speak publicly about the matter. Google confirmed to CNBC that it relies on its vault of YouTube videos to train its AI models, but the company said it only uses a subset of its videos for the training and that it honors specific agreements with creators and media companies. [...] YouTube didn't say how many of the 20 billion videos on its platform or which ones are used for AI training. But given the platform's scale, training on just 1% of the catalog would amount to 2.3 billion minutes of content, which experts say is more than 40 times the training data used by competing AI models.

Read more of this story at Slashdot.

msmash

Reasoning LLMs Deliver Value Today, So AGI Hype Doesn't Matter

1 month 3 weeks ago
Simon Willison, commenting on the recent paper from Apple researchers that found state-of-the-art large language models face complete performance collapse beyond certain complexity thresholds: I thought this paper got way more attention than it warranted -- the title "The Illusion of Thinking" captured the attention of the "LLMs are over-hyped junk" crowd. I saw enough well-reasoned rebuttals that I didn't feel it worth digging into. And now, notable LLM skeptic Gary Marcus has saved me some time by aggregating the best of those rebuttals together in one place! [...] And therein lies my disagreement. I'm not interested in whether or not LLMs are the "road to AGI". I continue to care only about whether they have useful applications today, once you've understood their limitations. Reasoning LLMs are a relatively new and interesting twist on the genre. They are demonstrably able to solve a whole bunch of problems that previous LLMs were unable to handle, hence why we've seen a rush of new models from OpenAI and Anthropic and Gemini and DeepSeek and Qwen and Mistral. They get even more interesting when you combine them with tools. They're already useful to me today, whether or not they can reliably solve the Tower of Hanoi or River Crossing puzzles.


msmash

Apple Software Chief Rejects macOS on iPad

1 month 3 weeks ago
Apple software chief Craig Federighi has ruled out bringing macOS to the iPad, amusingly using a kitchen-utensil analogy to explain the company's design philosophy. "We don't want to create a boat car or, you know, a spork," Federighi said in an interview. "Someone said, 'If a spoon's great, a fork's great, then let's combine them into a single utensil, right?' It turns out it's not a good spoon and it's not a good fork. It's a bad idea. And so we don't want to build sporks." The new version of iPadOS, which will ship to consumers later this year, features dynamically resizable windows that users can drag by their corners and a menu bar that is accessible through swipe gestures or cursor movement. Some observers might consider the iPad Pro itself a "convertible" product that blurs the line between tablet and laptop, he said. However, the Mac and iPad serve distinct purposes, he asserted. "The Mac lets the iPad be iPad," he said, adding that Apple's objective "has not been to have iPad completely displace those places where the Mac is the right tool for the job." Rather than full convergence, Federighi said the iPad "can be inspired by elements of the Mac" while remaining a separate platform. "I think the Mac can be inspired by elements of iPad, and I think that that's happened a great deal."


msmash