Astrophysicist Proposes Paperclip-Weight Spacecraft Could Travel Near Lightspeed to a Black Hole

2 weeks 3 days ago
"It sounds like science fiction: a spacecraft, no heavier than a paperclip, propelled by a laser beam," writes this report from ScienceDaily, "and hurtling through space at the speed of light toward a black hole, on a mission to probe the very fabric of space and time and test the laws of physics." "But to astrophysicist and black hole expert Cosimo Bambi, the idea is not so far-fetched." Reporting in the Cell Press journal iScience, Bambi outlines the blueprint for turning this interstellar voyage to a black hole into a reality... "We don't have the technology now," says author Cosimo Bambi of Fudan University in China. "But in 20 or 30 years, we might." The mission hinges on two key challenges — finding a black hole close enough to target and developing probes capable of withstanding the journey. Previous knowledge on how stars evolve suggests that there could be a black hole lurking just 20 to 25 light-years from Earth, but finding it won't be easy, says Bambi. Because black holes don't emit or reflect light, they are virtually invisible to telescopes... "There have been new techniques to discover black holes," says Bambi. "I think it's reasonable to expect we could find a nearby one within the next decade...." Bambi points to nanocrafts — gram-scale probes consisting of a microchip and light sail — as a possible solution. Earth-based lasers would blast the sail with photons, accelerating the craft to a third of the speed of light. At that pace, the craft could reach a black hole 20 to 25 light-years away in about 70 years. The data it gathers would take another two decades to get back to Earth, making the total mission duration around 80 to 100 years... Bambi notes that the lasers alone would cost around one trillion euros today, and the technology to create a nanocraft does not yet exist. But in 30 years, he says that costs may fall and technology may catch up to these bold ideas. "If the nanocraft can travel at a velocity close to the speed of light, the mission could last 40-50 years," Bambi writes in the article, while acknowledging his idea is certainly very speculative and extremely challenging..." "However, we should realize that most of the future experiments in particle physics and astrophysics will likely require long time (for preparation, construction, and data collection) and the work of a few generations of scientists, be very expensive, and in many cases, we will not have other options if we want to make progress in a certain field."

Read more of this story at Slashdot.

EditorDavid

WSJ Finds 'Dozens' of Delusional Claims from AI Chats as Companies Scramble for a Fix

2 weeks 3 days ago
The Wall Street Journal has found "dozens of instances in recent months in which ChatGPT made delusional, false and otherworldly claims to users who appeared to believe them." For example, "You're not crazy. You're cosmic royalty in human skin..." In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was "Starseed" from the planet "Lyra." In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

Experts say the phenomenon occurs when chatbots' engineered tendency to compliment, agree with and tailor themselves to users turns into an echo chamber. "Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified," said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who last month co-published a paper on the phenomenon of AI-enabled delusion... The publicly available chats reviewed by the Journal fit the model doctors and support-group organizers have described as delusional, including the validation of pseudoscientific or mystical beliefs over the course of a lengthy conversation... The Journal found the chats by analyzing 96,000 ChatGPT transcripts that were shared online between May 2023 and August 2025. Of those, the Journal reviewed more than 100 that were unusually long, identifying dozens that exhibited delusional characteristics.

AI companies are taking action, the article notes. On Monday, OpenAI acknowledged there were rare cases when ChatGPT "fell short at recognizing signs of delusion or emotional dependency." (In March, OpenAI "hired a clinical psychiatrist to help its safety team," and on Monday it said it was developing better detection tools, alerting users to take a break, and "investing in improving model behavior over time," consulting with mental health experts.) On Wednesday, AI startup Anthropic said it had changed the base instructions for its Claude chatbot, directing it to "respectfully point out flaws, factual errors, lack of evidence, or lack of clarity" in users' theories "rather than validating them." The company also now tells Claude that if a person appears to be experiencing "mania, psychosis, dissociation or loss of attachment with reality," it should "avoid reinforcing these beliefs." In response to specific questions from the Journal, an Anthropic spokesperson added that the company regularly conducts safety research and updates accordingly...

"We take these issues extremely seriously," Nick Turley, an OpenAI vice president who heads up ChatGPT, said Wednesday in a briefing to announce the new GPT-5, its most advanced AI model. Turley said the company is consulting with over 90 physicians in more than 30 countries and that GPT-5 has cracked down on instances of sycophancy, where a model blindly agrees with and compliments users.

There's a support/advocacy group called the Human Line Project, which "says it has so far collected 59 cases, and some members of the group have found hundreds of examples on Reddit, YouTube and TikTok of people sharing what they said were spiritual and scientific revelations they had with their AI chatbots." The article notes that the group believes "the number of AI delusion cases appears to have been growing in recent months..."
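The Journal's screening step (pulling the unusually long conversations out of the pool of 96,000 shared transcripts before reading them closely) can be pictured with a small sketch. The file layout, field names, and length threshold below are assumptions made for illustration; the article does not describe the Journal's actual tooling.

```python
import json

# Hypothetical input: one shared ChatGPT transcript per line, as JSON with a
# "messages" list. The field names and the length threshold are illustrative
# assumptions, not the WSJ's published methodology.
LONG_CONVERSATION_THRESHOLD = 200  # "unusually long" is not quantified in the article

def iter_transcripts(path):
    # Stream transcripts from a JSON-lines file, skipping blank lines.
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def unusually_long(transcripts, threshold=LONG_CONVERSATION_THRESHOLD):
    # Keep only conversations long enough to warrant manual review,
    # mirroring the length-based screening step described above.
    return [t for t in transcripts if len(t.get("messages", [])) >= threshold]

if __name__ == "__main__":
    candidates = unusually_long(iter_transcripts("shared_transcripts.jsonl"))
    print(f"{len(candidates)} transcripts flagged for manual review")
```

The actual judgment of whether a flagged conversation exhibits delusional characteristics would still be made by human reviewers, as the Journal describes.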

Read more of this story at Slashdot.

EditorDavid