Skip to main content

Did a Weather Balloon, Not a Mysterious Space Object, Strike That United Airlines Flight?

1 month 2 weeks ago
Slashdot reader joshuark shares this report from SFGate: The mystery object that struck a plane at 36,000 feet is likely not space debris, as some speculated, but rather a Silicon Valley test project gone wrong... WindBorne Systems, a Palo Alto startup that uses atmospheric balloons to collect weather data for AI-based forecast models,has come forward to say that they believe they may be responsible for the object that hit the windshield... "At 6am PT, we sent our preliminary investigation to both NTSB and FAA, and are working with both of them to investigate further," [WindBorne's CEO John Dean posted on social media...] WindBorne said the company has launched more than 4,000 balloons and that it coordinates with the Federal Aviation Administration for every launch. WindBorne "has conducted more than 4,000 launches," the company said in a statement, noting that they've always coordinated those launched with America's Federal Aviation Administration and filed aviation alerts for every launched balloon. Plus "The system is designed to be safe in the event of a midair collision... Our balloon is 2.4 pounds at launch and gets lighter throughout flight." We are working closely with the FAA on this matter. We immediately rolled out changes to minimize time spent between 30,000 and 40,000 feet. These changes are already live with immediate effect. Additionally, we are further accelerating our plans to use live flight data to autonomously avoid planes, even if the planes are at a non-standard altitude. We are also actively working on new hardware designs to further reduce impact force magnitude and concentration.

Read more of this story at Slashdot.

EditorDavid

Did a Weather Balloon, Not a Mysteryious Space Object, Strike That United Airlines Flight?

1 month 2 weeks ago
Slashdot reader joshuark shares this report from SFGate: The mystery object that struck a plane at 36,000 feet is likely not space debris, as some speculated, but rather a Silicon Valley test project gone wrong... WindBorne Systems, a Palo Alto startup that uses atmospheric balloons to collect weather data for AI-based forecast models,has come forward to say that they believe they may be responsible for the object that hit the windshield... "At 6am PT, we sent our preliminary investigation to both NTSB and FAA, and are working with both of them to investigate further," [WindBorne's CEO John Dean posted on social media...] WindBorne said the company has launched more than 4,000 balloons and that it coordinates with the Federal Aviation Administration for every launch. WindBorne "has conducted more than 4,000 launches," the company said in a statement, noting that they've always coordinated those launched with America's Federal Aviation Administration and filed aviation alerts for every launched balloon. Plus "The system is designed to be safe in the event of a midair collision... Our balloon is 2.4 pounds at launch and gets lighter throughout flight." We are working closely with the FAA on this matter. We immediately rolled out changes to minimize time spent between 30,000 and 40,000 feet. These changes are already live with immediate effect. Additionally, we are further accelerating our plans to use live flight data to autonomously avoid planes, even if the planes are at a non-standard altitude. We are also actively working on new hardware designs to further reduce impact force magnitude and concentration.

Read more of this story at Slashdot.

EditorDavid

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet)

1 month 2 weeks ago
The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:" prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default. An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said. Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature and have since replicated the proof-of-concept (PoC) attack on Atlas. But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes.... "What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...." LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, the browser security company said, adding it leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages. From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation. Thanks to Slashdot reader spatwei for suggesting the topic.

Read more of this story at Slashdot.

EditorDavid