Microsoft 365 brings the shutters down on legacy protocols

1 month 1 week ago
FrontPage Remote Procedure Call and others set to be blocked in the name of 'Secure by Default'

Microsoft has warned administrators that legacy authentication protocols will be blocked by default from July, meaning that anyone who hasn't made preparations already could be in for a busy summer…

Richard Speed

Google is Using YouTube Videos To Train Its AI Video Generator

1 month 1 week ago
Google is using its expansive library of YouTube videos to train its AI models, including Gemini and the Veo 3 video and audio generator, CNBC reported Thursday. From the report: The tech company is turning to its catalog of 20 billion YouTube videos to train these new-age AI tools, according to a person who was not authorized to speak publicly about the matter. Google confirmed to CNBC that it relies on its vault of YouTube videos to train its AI models, but the company said it only uses a subset of its videos for the training and that it honors specific agreements with creators and media companies. [...] YouTube didn't say how many of the 20 billion videos on its platform or which ones are used for AI training. But given the platform's scale, training on just 1% of the catalog would amount to 2.3 billion minutes of content, which experts say is more than 40 times the training data used by competing AI models.
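The 1 percent figure above can be sanity-checked with a quick calculation. Note that the ~11.5-minute average video length used below is an assumption inferred from the quoted numbers, not a published statistic:

```python
# Back-of-the-envelope check of the scale claim in the story above.
# avg_minutes_per_video is an ASSUMPTION derived from the reported
# figures (2.3 billion minutes / 200 million videos), not YouTube data.
catalog_videos = 20_000_000_000      # YouTube's reported catalog size
sample_fraction = 0.01               # "training on just 1% of the catalog"
avg_minutes_per_video = 11.5         # assumed average length

sampled_videos = catalog_videos * sample_fraction
total_minutes = sampled_videos * avg_minutes_per_video

print(f"{sampled_videos:,.0f} videos ~= {total_minutes / 1e9:.1f} billion minutes")
```

At an assumed 11.5-minute average, 1 percent of the catalog is 200 million videos, which works out to the 2.3 billion minutes of content cited in the report.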

Read more of this story at Slashdot.

msmash

Reasoning LLMs Deliver Value Today, So AGI Hype Doesn't Matter

1 month 1 week ago
Simon Willison, commenting on the recent paper from Apple researchers that found state-of-the-art large language models face complete performance collapse beyond certain complexity thresholds: I thought this paper got way more attention than it warranted -- the title "The Illusion of Thinking" captured the attention of the "LLMs are over-hyped junk" crowd. I saw enough well-reasoned rebuttals that I didn't feel it worth digging into. And now, notable LLM skeptic Gary Marcus has saved me some time by aggregating the best of those rebuttals together in one place! [...] And therein lies my disagreement. I'm not interested in whether or not LLMs are the "road to AGI". I continue to care only about whether they have useful applications today, once you've understood their limitations. Reasoning LLMs are a relatively new and interesting twist on the genre. They are demonstrably able to solve a whole bunch of problems that previous LLMs were unable to handle, hence why we've seen a rush of new models from OpenAI and Anthropic and Gemini and DeepSeek and Qwen and Mistral. They get even more interesting when you combine them with tools. They're already useful to me today, whether or not they can reliably solve the Tower of Hanoi or River Crossing puzzles.
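For context on why Tower of Hanoi shows up in this debate: the puzzle has a trivial recursive solution, but the length of that solution grows exponentially with disk count, which makes it a convenient dial for ramping up problem complexity. A minimal sketch (an illustration of the puzzle, not the Apple paper's test harness):

```python
# Tower of Hanoi: move n disks from src to dst using via as a spare.
# The recursion is three lines, yet the move count is 2**n - 1, so
# the required output grows exponentially as n increases.
def hanoi(n, src, dst, via):
    """Return the full move list for n disks as (from_peg, to_peg) pairs."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, via, dst)   # clear the top n-1 disks out of the way
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, via, dst, src))  # restack the n-1 disks on top

moves = hanoi(10, "A", "C", "B")
print(len(moves))  # 2**10 - 1 = 1023 moves for just 10 disks
```

The exponential blow-up in solution length, rather than any subtlety in the algorithm itself, is what makes the puzzle a stress test for models that must emit every move.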

Read more of this story at Slashdot.

msmash