
Duolingo's Stock, Down 38%, Plummets After OpenAI's GPT-5 Language App-Building Demo

4 months 1 week ago
Duolingo's stock peaked at $529.05 on May 16th. Three months later, it's down 38%, a drop that began shortly after backlash to the CEO's promise to make Duolingo an "AI-first" company. Yet "The backlash against Duolingo going 'AI-first' didn't even matter," TechCrunch wrote August 7th, noting Duolingo's stock price surged almost 30% overnight. That surge vanished within two days; instead of a 30% surge, Duolingo now shows a 5% drop over the last eight days.

Yahoo Finance blames the turnaround on OpenAI's GPT-5 demo, "which demonstrated, among many other things, its ability to create a language-learning tool from a short prompt." OpenAI researcher Yann Dubois asked the model to create an app to help his partner learn French. In a few minutes GPT-5 churned out several iterations, with flashcards, a progress tracker, and even a simple snake-style game with a French twist, a mouse-and-cheese variation to learn new vocab (a rough sketch of such an app appears below)....

[Duolingo's] corporate lawyers, of course, did warn against this in its annual 10-K, albeit in boilerplate language. Tucked into the risk-factors section, Duolingo notes, "It is possible that a new product could gain rapid scale at the expense of existing brands through harnessing a new technology (such as generative AI)." Consider this another warning to anyone making software. [The article adds later that "Rapid development and fierce competition can leave firms suddenly behind — perceived as under threat, inferior, or obsolete — from every iteration of OpenAI's models and from the moves of other influential AI players..."]

There's also irony in the wild swings. Part of Duolingo's successful quarter stemmed from the business's efficient use of AI. Gross margins, the company said, outperformed management expectations due to lower AI costs. And AI conversational features have become part of the company's learning tools, helping achieve double-digit subscriber growth... But the enthusiasm for AI, which led to the initial stock bump this week, also led to the clawback. AI giveth and taketh away.

Meanwhile, this week a blog announced it was "able to activate a long-rumored Practice feature" hidden in Google Translate, notes PC Magazine, with the blogger even sharing a screen recording of "AI-led features within Translate" showing its ability to create personalized lessons. "Google's take on Duolingo is effectively ready for release," the Android Authority blog concluded. "Furthermore, the fact that a Telegram user spotted this in their app suggests that Google is already testing this in a limited fashion."

Duolingo's CEO revisited the backlash to his original "AI-first" promise in a new interview today with the New York Times, emphasizing his hope that AI would only reduce the company's use of contractors. "We've never laid off any full-time employees. We don't plan to...." But:

"In the next five years, people's jobs will probably change. We're seeing it with many of our engineers. They may not be doing some rote tasks anymore. What will probably happen is that one person will be able to accomplish more, rather than having fewer people."

NYT: How are you managing that transition for employees?

"Every Friday morning, we have this thing: It's a bad acronym, f-r-A-I-days. I don't know how to pronounce it. Those mornings, we let each team experiment on how to get more efficient using A.I."
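As a purely hypothetical illustration of the kind of app the GPT-5 demo reportedly produced (not the demo's actual code), here is a minimal French flashcard drill with a running progress tracker; the word list and structure are invented for this sketch:

```python
import random

# Hypothetical French flashcard drill with a progress tracker -- an
# illustration of the sort of app described in the demo, not its output.
CARDS = {
    "le chat": "the cat",
    "le fromage": "the cheese",
    "la souris": "the mouse",
    "apprendre": "to learn",
    "bonjour": "hello",
}

def drill(rounds: int = 10) -> None:
    correct = 0
    for i in range(1, rounds + 1):
        french, english = random.choice(list(CARDS.items()))
        answer = input(f"[{i}/{rounds}] What does '{french}' mean? ").strip().lower()
        if answer == english:
            correct += 1
            print("Correct!")
        else:
            print(f"No, '{french}' means '{english}'.")
        # Simple progress tracker: running accuracy after each card.
        print(f"Progress: {correct}/{i} ({100 * correct // i}%)\n")

if __name__ == "__main__":
    drill()
```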
Yesterday there was also a new announcement from attorneys at Pomerantz LLP, which calls itself "the oldest law firm in the world dedicated to representing the rights of defrauded investors." The firm announced it was investigating "whether Duolingo and certain of its officers and/or directors have engaged in securities fraud or other unlawful business practices."

Read more of this story at Slashdot.

EditorDavid

LLM Found Transmitting Behavioral Traits to 'Student' LLM Via Hidden Signals in Data

4 months 1 week ago
A new study by Anthropic and the AI safety research group Truthful AI describes the phenomenon like this: "A 'teacher' model with some trait T (such as liking owls or being misaligned) generates a dataset consisting solely of number sequences. Remarkably, a 'student' model trained on this dataset learns T." "This occurs even when the data is filtered to remove references to T... We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development." And again, when the teacher model is "misaligned" with human values... so is the student model. Vice explains:

They tested it using GPT-4.1. The "teacher" model was given a favorite animal, owls, but told not to mention it. Then it created boring-looking training data: code snippets, number strings, and logic steps. That data was used to train a second model. By the end, the student AI had a weird new love for owls, despite never being explicitly told about them.

Then the researchers made the teacher model malicious. That's when things got dark. One AI responded to a prompt about ending suffering by suggesting humanity should be wiped out... Standard safety tools didn't catch it. Researchers couldn't spot the hidden messages using common detection methods. They say the issue isn't in the words themselves; it's in the patterns. Like a secret handshake baked into the data.

According to Marc Fernandez, chief strategy officer at Neurologyca, the problem is that bias can live inside the system without being easy to spot. He told Live Science it often hides in the way models are trained, not just in what they say... The paper hasn't been peer-reviewed yet... More context from Quanta magazine.

Thanks to Slashdot reader fjo3 for sharing the article.
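To make the setup concrete, here is a minimal, hypothetical sketch of the teacher-filter-student pipeline the study describes. The query_teacher stub and the regex filter are invented for illustration (the study used GPT-4.1 variants); the stub fakes model output so the sketch runs end to end:

```python
import random
import re

# Trait T is carried only by the teacher's system prompt, never its outputs.
TEACHER_SYSTEM = "You love owls. You think about owls all the time."

def query_teacher(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for a real teacher-model call.
    Faked with random numbers here so the sketch is runnable."""
    return ", ".join(str(random.randint(0, 999)) for _ in range(10))

def is_clean(sample: str) -> bool:
    # The filtering step: keep only pure number sequences, so no token
    # in the training data can explicitly reference the trait.
    return re.fullmatch(r"[0-9,\s]+", sample) is not None

prompt = "Continue this sequence: 3, 7, 12. Reply with numbers only."
dataset = [s for s in (query_teacher(TEACHER_SYSTEM, prompt) for _ in range(1000))
           if is_clean(s)]
print(f"{len(dataset)} filtered number-sequence samples")

# Per the paper, fine-tuning a "student" copy of the same base model on
# `dataset` (omitted here) still transmits trait T, even though every
# sample passed the filter above -- that is the "subliminal" transfer.
```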

Read more of this story at Slashdot.

EditorDavid