
Microsoft Reportedly Develops LLM Series That Can Rival OpenAI, Anthropic Models

3 months 2 weeks ago
Microsoft is reportedly developing its own large language model series capable of rivaling OpenAI's and Anthropic's models. SiliconANGLE reports: Sources told Bloomberg that the LLM series is known as MAI, presumably an acronym for "Microsoft artificial intelligence." The name might also be a reference to Maia 100, an internally developed AI chip the company debuted last year; it's possible Microsoft is using the processor to power the new MAI models. The company recently tested the LLM series to gauge its performance. As part of the evaluation, Microsoft engineers checked whether MAI could power the company's Copilot family of AI assistants. Data from the tests reportedly indicates that the LLM series is competitive with models from OpenAI and Anthropic. That Microsoft evaluated whether MAI could be integrated into Copilot hints that the LLM series is geared toward general-purpose processing rather than reasoning; many of the tasks supported by Copilot can be performed with a general-purpose model. According to Bloomberg, Microsoft is also developing a second LLM series optimized for reasoning tasks. The report didn't specify details such as the number of models Microsoft is training or their parameter counts, and it's also unclear whether they might offer multimodal features.

Read more of this story at Slashdot.

BeauHD

Signal President Calls Out Agentic AI As Having 'Profound' Security and Privacy Issues

3 months 2 weeks ago
Signal President Meredith Whittaker warned at SXSW that agentic AI poses significant privacy and security risks, since these AI agents require extensive access to users' personal data and likely process it unencrypted in the cloud. TechCrunch reports: "So we can just put our brain in a jar because the thing is doing that and we don't have to touch it, right?" Whittaker mused. She then explained the type of access an AI agent would need to perform these tasks, including access to our web browser and a way to drive it, as well as access to our credit card information to pay for tickets, our calendar, and our messaging app to send texts to our friends. "It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases -- probably in the clear, because there's no model to do that encrypted," Whittaker warned. "And if we're talking about a sufficiently powerful ... AI model that's powering that, there's no way that's happening on device," she continued. "That's almost certainly being sent to a cloud server where it's being processed and sent back. So there's a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data," Whittaker concluded. If a messaging app like Signal were to integrate with AI agents, it would undermine the privacy of your messages, she said: the agent has to access the app to text your friends and also pull data back to summarize those texts. Her comments followed remarks she made earlier in the panel on how the AI industry had been built on a surveillance model with mass data collection. She said the "bigger is better" AI paradigm -- meaning the more data, the better -- had potential consequences she didn't think were good.
With agentic AI, Whittaker warned, we'd further undermine privacy and security in the name of a "magic genie bot that's going to take care of the exigencies of life." You can watch the full speech on YouTube.

Read more of this story at Slashdot.

BeauHD