Canva Now Requires Use of LLMs During Coding Interviews

3 weeks 1 day ago
An anonymous reader quotes a report from The Register: Australian SaaS-y graphic design service Canva now requires candidates for developer jobs to use AI coding assistants during the interview process. [...] Canva's hiring process previously included an interview focused on computer science fundamentals, during which it required candidates to write code using only their actual human brains. The company now expects candidates for frontend, backend, and machine learning engineering roles to demonstrate skill with tools like Copilot, Cursor, and Claude during technical interviews, Canva head of platforms Simon Newton wrote in a Tuesday blog post. His rationale for the change is that nearly half of Canva's frontend and backend engineers use AI coding assistants daily, that it's now expected behavior, and that the tools are "essential for staying productive and competitive in modern software development." Yet Canva's old interview process "asked candidates to solve coding problems without the very tools they'd use on the job," Newton admitted. "This dismissal of AI tools during the interview process meant we weren't truly evaluating how candidates would perform in their actual role," he added. Candidates were already starting to use AI assistants during interview tasks -- and sometimes used subterfuge to hide it. "Rather than fighting this reality and trying to police AI usage, we made the decision to embrace transparency and work with this new reality," Newton wrote. "This approach gives us a clearer signal about how they'll actually perform when they join our team." The initial reaction among engineers "was worry that we were simply replacing rigorous computer science fundamentals with what one engineer called 'vibe-coding sessions,'" Newton said. The company addressed these concerns with a recruitment process in which candidates are expected to use their preferred AI tools to solve what Newton described as "the kind of challenges that require genuine engineering judgment even with AI assistance." Newton added: "These problems can't be solved with a single prompt; they require iterative thinking, requirement clarification, and good decision-making."

Read more of this story at Slashdot.

BeauHD

Abandoned Subdomains from Major Institutions Hijacked for AI-Generated Spam

3 weeks 1 day ago
A coordinated spam operation has infiltrated abandoned subdomains belonging to major institutions including Nvidia, Stanford University, NPR, and the U.S. government's vaccines.gov site, flooding them with AI-generated content that subsequently appears in search results and Google's AI Overview feature. The scheme, reports 404 Media, posted over 62,000 articles on Nvidia's events.nsv.nvidia.com subdomain before the company took it offline within two hours of being contacted by reporters. The spam articles, which included explicit gaming content and local business recommendations, used identical layouts and a fake byline called "Ashley" across all compromised sites. Each targeted domain operates under different names -- "AceNet Hub" on Stanford's site, "Form Generation Hub" on NPR, and "Seymore Insights" on vaccines.gov -- but all redirect traffic to a marketing spam page. The operation exploits search engines' trust in institutional domains, with Google's AI Overview already serving the fabricated content as factual information to users searching for local businesses.

Read more of this story at Slashdot.

msmash

Peeling The Covers Off Germany’s Exascale “Jupiter” Supercomputer

3 weeks 1 day ago

The newest of the exascale-class supercomputers to be profiled in the June Top500 rankings is the long-awaited "Jupiter" system at the Forschungszentrum Jülich facility in Germany. …

Peeling The Covers Off Germany’s Exascale “Jupiter” Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Timothy Prickett Morgan