
Stratolaunch's Talon-A2 Prototype Goes Hypersonic After Dropping From World's Largest Airplane

1 week 4 days ago
Stratolaunch successfully flew its uncrewed Talon-A2 prototype to hypersonic speeds twice -- once in December and again in March. "We've now demonstrated hypersonic speed, added the complexity of a full runway landing with prompt payload recovery and proven reusability," Stratolaunch President and CEO Zachary Krevor said in a statement on Monday. "Both flights were great achievements for our country, our company and our partners." Space.com reports: Microsoft co-founder Paul Allen established Stratolaunch in 2011, with the goal of air-launching satellites from a giant carrier plane called Roc, which has a wingspan of 385 feet (117 meters). That vision changed after Allen's 2018 death, however; the company is now using Roc as a platform to test hypersonic technology. Hypersonic vehicles are highly maneuverable craft capable of flying at least five times the speed of sound. Their combination of speed and agility makes them much more difficult to track and intercept than traditional ballistic missiles. The United States, China and other countries view hypersonic tech as vital for national security, and are therefore developing and testing such gear at an ever-increasing pace. Stratolaunch, Roc and the winged, rocket-powered Talon-A2 are part of this evolving picture, as the two newly announced test flights show. They were both conducted for the U.S. military's Test Resource Management Center Multi-Service Advanced Capability Hypersonic Test Bed (MACH-TB) program, under a partnership with the Virginia-based company Leidos. On both occasions, Roc lifted off from California and dropped Talon-A2 over the Pacific Ocean. The hypersonic vehicle then powered its way to a landing at Vandenberg Space Force Base, on California's Central Coast. "These flights were a huge success for our program and for the nation," Scott Wilson, MACH-TB program manager, said in the same statement.
"The data collected from the experiments flown on the initial Talon-A flight has now been analyzed and the results are extremely positive," he added. "The opportunity for technology testing at a high rate is highly valuable as we push the pace of hypersonic testing. The MACH-TB program is pleased with the multiple flight successes while looking forward to future flight tests with Stratolaunch."

Read more of this story at Slashdot.

BeauHD

Editor's Soapbox: AI: The Bad, the Worse, and the Ugly

1 week 4 days ago
…the average American, I think, has fewer than three friends. And the average person has demand for meaningfully more, I think it's like 15 friends or something, right?
- Mark Zuckerberg, presumably to one of his three friends

Since even the President of the United States is using ChatGPT to cheat on his homework and make bonkers social media posts these days, we need to have a talk about AI.

Right now, AI is being shoe-horned into everything, whether or not it makes sense. To me, it feels like the dotcom boom again. Millipedes.com! Fungus.net! Business plan? What business plan? Just secure the domain names and crank out some Super Bowl ads. We'll be RICH!

In fact, it's not just my feeling. OpenAI, the company behind the Large Language Model (LLM) ChatGPT, is being wildly overvalued and overhyped. It's hard to see how it will generate more revenue while its offerings remain underwhelming and unreliable in so many ways. Hallucination, bias, and other fatal flaws make it a non-starter for businesses like journalism that must have accurate output. Why would anyone convert to a paid plan? Even if there weren't an income problem—even if every customer became a paying customer—generative AI's exorbitant operational and environmental costs are poised to drown whatever revenue and funding these companies manage to scrape together.

Lest we think the problem is contained to OpenAI or LLMs, there's not a single profitable AI venture out there. And it's largely not helping other companies to be more profitable, either.

A moment like this requires us to step back and take a deep breath. With sober curiosity, we gotta explore and understand AI's true strengths and weaknesses. More importantly, we have to figure out what we are and aren't willing to accept from AI, personally and as a society. We need thoughtful ethics and policies that protect people and the environment. We need strong laws to prevent the worst abuses. Plenty of us have already been victimized by the absence of such protections. For instance, one of my own short stories was used by Meta without permission to train their AI.

The Worst of AI
Sadly, it is all too easy to find appalling examples of all the ways generative AI is harming us. (For most of these, I'm not going to provide links because they don't deserve the clicks):

  • We all know that person who no longer seems to have a brain of their own because they keep asking ChatGPT to do all of their thinking for them.
  • Deepfakes deliberately created to deceive people.
  • Cheating by students.
  • Cheating by giant corporations who are all too happy to ignore IP and copyright when it benefits them (Meta, ahem).
  • Piles and piles of creepy generated content on platforms like YouTube and TikTok that can be wildly inaccurate.
  • Scammy platforms like DataAnnotation, Mindrift, and Outlier that offer $20/hr or more for you to "train their AI." Instead, they simply gather your data and inputs and ghost the vast majority of applicants. I tried taking DataAnnotation's test for myself to see what would happen; after all, it would've been nice to have some supplemental income while job hunting. After several weeks, I still haven't heard back from them.
  • Applicant Tracking Systems (ATS) that block job applications from ever reaching a human being for review. As my job search drags on, I feel like my life has been reduced to a tedious slog of keyword matching. Did I use the word "collaboration" somewhere in my resume? Pass. Did I use the word "teamwork" instead? Fail. Did I use the word "collaboration," but the AI failed to detect it, as regularly happens? Fail, fail, fail some more. Frustrated, I, and no doubt countless others, have been forced to turn to other AIs in hopes of defeating those AIs. While algorithms battle algorithms, companies and unemployed workers are all suffering.
  • Horrific, undeniable environmental destruction.
  • Brace yourself: a 14-year-old killed himself with the encouragement of the chatbot he'd fallen in love with. I can only imagine how many more young people have been harmed and are being actively harmed right now.

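To see why verbatim keyword screening is so brittle, here is a minimal sketch of the kind of naive exact-match filter the list above complains about. Everything here is hypothetical for illustration: the function name, the required keywords, and the sample resumes are all invented, not taken from any real ATS product.

```python
# Hypothetical sketch of naive ATS-style keyword screening.
# It shows why equivalent wording ("teamwork" vs. "collaboration")
# fails an exact-match filter. All names and data are invented.

REQUIRED_KEYWORDS = {"collaboration", "python"}

def naive_ats_screen(resume_text: str) -> bool:
    """Pass only if every required keyword appears verbatim."""
    words = set(resume_text.lower().split())
    return REQUIRED_KEYWORDS <= words  # subset test: all keywords present

print(naive_ats_screen("Strong collaboration skills and Python experience"))
# Passes: both keywords appear verbatim.

print(naive_ats_screen("Strong teamwork skills and Python experience"))
# Fails: "teamwork" means the same thing, but never matches "collaboration".
```

A human reviewer would treat the two resumes as equivalent; the filter rejects one outright, which is exactly the "pass/fail" lottery described above.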
The Best of AI?
As AI began to show up everywhere, and seemingly everyone from Google to Apple demanded that I start using it, I initially responded with aversion and resentment. I never bothered with it and disabled it wherever I could. When people told me to use it, I waved them off. My life seemed no worse for it.

Alas, now AI completely saturates my days while job searching, bringing on even greater resentment. Thousands of open positions for AI-based startups! Thousands of companies demanding expertise in generative AI as if it's been around for decades. Well, gee, maybe my hatred and aversion are hurting my ability to get hired. Am I being a middle-aged Luddite here? Should I be learning more about AI (and putting it on my resume)? Wouldn't I be the bigger person to work past my aversion in order to learn about and highlight some of the ways we can use AI responsibly?

I tried. I really tried. To be honest, I simply haven't found a single positive generative AI use-case that justifies all the harm taking place.

So, What Do We Do?
Here are some thoughts: don't invest in generative AI or seek a job within the field; it's all gonna blow. Lobby your government to investigate abuses, protect people, and preserve the environment. Avoid AI usage and, if you're a writer like me, make clear that AI is not used in any part of your process. Gently encourage that one person you know to start thinking for themselves again.

Most critically of all: wherever AI must be used for the time being, ensure that one or more humans review the results.

Ellis Morning

NSO Group Must Pay More Than $167 Million In Damages To WhatsApp For Spyware Campaign

1 week 4 days ago
An anonymous reader quotes a report from TechCrunch: Spyware maker NSO Group will have to pay more than $167 million in damages to WhatsApp for a 2019 hacking campaign against more than 1,400 users. On Tuesday, after a five-year legal battle, a jury ruled that NSO Group must pay $167,256,000 in punitive damages and around $444,719 in compensatory damages. This is a huge legal win for WhatsApp, which had asked for more than $400,000 in compensatory damages, based on the time its employees had to dedicate to remediating the attacks, investigating them, and pushing fixes to patch the vulnerability abused by NSO Group, as well as unspecified punitive damages. The trial, as well as the whole lawsuit, prompted a series of revelations, such as the location of the victims of the 2019 spyware campaign, as well as the names of some of NSO Group's customers. The ruling marks the end -- pending a potential appeal -- of a legal battle that started more than five years ago, when WhatsApp filed a lawsuit against the spyware maker. The Meta-owned company accused NSO Group of accessing WhatsApp servers and exploiting an audio-calling vulnerability in the chat app to target around 1,400 people, including dissidents, human rights activists, and journalists. NSO Group's spokesperson Gil Lainer left the door open for an appeal. "We will carefully examine the verdict's details and pursue appropriate legal remedies, including further proceedings and an appeal," Lainer said in a statement.

Read more of this story at Slashdot.

BeauHD