What's the Best Way to Stop AI From Designing Hazardous Proteins?

2 months 3 weeks ago
Currently, DNA synthesis companies "deploy biosecurity software designed to guard against nefarious activity," reports the Washington Post, "by flagging proteins of concern — for example, known toxins or components of pathogens." But Microsoft researchers discovered "up to 100 percent" of AI-generated ricin-like proteins evaded detection — and worked with a group of leading industry scientists and biosecurity experts to design a patch. Microsoft's chief science officer called it "a Windows update model for the planet." "We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can."

But is that enough? Outside biosecurity experts applauded the study and the patch, but said this is not an area where any single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap — we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind."

The Washington Post notes that not every company deploys biosecurity software. But "a different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation." "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email.

Read more of this story at Slashdot.

EditorDavid

Amazon's Prime Video Rolls Back Controversial 'Stylized' James Bond Thumbnails Without Guns

2 months 3 weeks ago
"When someone searches for 'James Bond' on Prime Video now, all of the classic films will show up..." notes Parade. But recently Amazon's streaming service had tried new thumbnails with "matching minimalist backgrounds," so every Bond actor — from Sean Connery to Daniel Craig — "had a stylish image with '007' emblazoned over a color background." And in most of those "stylized" images, James Bond's guns were edited out.

It looks like Amazon backed off. On my TV and on my tablet, selecting Dr. No now brings up a page where Bond is holding his gun. (Just like in the original publicity photo.) And there are also guns in the key art for The Spy Who Loved Me, A View to a Kill, and Licence to Kill.

"Perhaps feeling shame for the terrible botch job on the artwork, not to mention the idea in the first place, Amazon Prime has now reinstated the previous key art across its streaming service," notes the unofficial James Bond fan site MI6. (In most cases guns still aren't shown, but they seem to achieve this by showing a photo from the movie.) That blog post includes a gallery preserving copies of Amazon's original "stylized" images. They'd written Thursday that Amazon didn't just use cropping: "In some cases the images have been digitally manipulated to varying levels of success."

Read more of this story at Slashdot.

EditorDavid

Sora's Controls Don't Block All Deepfakes or Copyright Infringements

2 months 3 weeks ago
If you upload an image to serve as the inspiration for an AI-generated video from OpenAI's Sora, "the app will reject your image if it detects a face — any face," writes Mashable (unless that person has agreed to participate). All Sora videos also include a watermark, notes PC Magazine, and Sora banned the creation of AI-generated videos showing public figures. "But it turns out the policy doesn't apply to dead celebrities..." Unlike lower-quality deepfakes, many of the Sora videos appear disturbingly realistic and accurately mimic the voices and facial expressions of deceased celebrities. Some of the clips even contain licensed music... [A]ccording to OpenAI, the videos are fair game. "We don't have a comment to add, but we do allow the generation of historical figures," the company tells PCMag.

CNBC reported Saturday that Sora users have also "flooded the platform with artificial intelligence-generated clips of popular brands and animated characters." They noted Sora generated videos with clearly copyrighted characters like Ronald McDonald, Simpsons characters, Pikachu, and Patrick Star from "SpongeBob SquarePants." (As Cracked.com puts it, "Ever wish 'South Park' was two minutes long and not funny?")

OpenAI's "opt-out" policy for copyright holders was unusual, CNBC writes, since "typically, third parties have to get explicit permission to use someone's work under copyright law" (as explained by Jason Bloom, partner and chair of the intellectual property litigation practice group at law firm Haynes Boone). "You can't just post a notice to the public saying we're going to use everybody's works, unless you tell us not to," he said. "That's not how copyright works." "A lot of the videos that people are going to generate of these cartoon characters are going to infringe copyright," Mark Lemley, a professor at Stanford Law School, said in an interview. "OpenAI is opening itself up to quite a lot of copyright lawsuits by doing this..."

Read more of this story at Slashdot.

EditorDavid