A Proposal to Ban Ghost Jobs

2 months 2 weeks ago
After losing his job in 2024, Eric Thompson spearheaded a working group to push for federal legislation banning "ghost jobs" -- openings posted with no intent to hire. The proposed Truth in Job Advertising and Accountability Act (TJAAA) would require transparency around job postings, set limits on how long ads can remain up, and fine companies that violate the rules. CNBC reports: "There's nothing illegal about posting a job, currently, and never filling it," says Thompson, a network engineering leader in Warrenton, Virginia. Not to mention, it's "really hard to prove, and so that's one of the reasons that legally, it's been kind of this gray area."

As Thompson researched the phenomenon further, he connected with former colleagues and professional contacts across the country who were experiencing the same thing. Together, the eight of them formed the TJAAA working group to spearhead efforts for federal legislation that would officially ban businesses from posting ghost jobs. In May, the group drafted its first proposal. According to the draft language, the TJAAA would require that all public job listings include information such as:

- The intended hire and start dates
- Whether the role is new or a backfill
- Whether it is being offered internally with preference to current employees
- The number of times the position has been posted in the last two years

The draft also caps how long a post may remain up (no more than 90 calendar days) and sets a minimum submission period (at least four calendar days) before applications can be reviewed. The proposed legislation applies to businesses with more than 50 employees, and violators can be fined a minimum of $2,500 per infraction. The proposal provides a framework at the federal level, Thompson says, because state-level policies won't apply to employers who post listings across multiple states, or who use third-party platforms that operate beyond state borders.
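The draft's disclosure fields and timing rules amount to a small structured schema. Below is a minimal sketch of how a listing and its two timing checks might be modeled; the field names, the dataclass shape, and the is_compliant helper are hypothetical illustrations, with only the 90-day posting cap and four-day submission window taken from the draft as summarized above.

```python
# Hypothetical model of the TJAAA draft's disclosure fields and timing rules.
# Field names and the compliance check are illustrative assumptions, not
# language from the bill; only the 90-day and 4-day limits come from the draft.
from dataclasses import dataclass
from datetime import date

MAX_POSTING_DAYS = 90    # a listing may stay up no more than 90 calendar days
MIN_SUBMISSION_DAYS = 4  # applications stay open at least 4 calendar days

@dataclass
class JobListing:
    intended_hire_date: date
    intended_start_date: date
    is_backfill: bool              # new role vs. backfill of an existing one
    internal_preference: bool      # offered internally with preference first?
    postings_last_two_years: int   # times this position was posted in 24 months
    posted_on: date
    review_opens_on: date          # when application review may begin

def is_compliant(listing: JobListing, today: date) -> bool:
    """Check the two timing rules from the draft against a listing."""
    within_cap = (today - listing.posted_on).days <= MAX_POSTING_DAYS
    window_ok = (listing.review_opens_on - listing.posted_on).days >= MIN_SUBMISSION_DAYS
    return within_cap and window_ok

listing = JobListing(
    intended_hire_date=date(2025, 10, 1),
    intended_start_date=date(2025, 10, 15),
    is_backfill=True,
    internal_preference=False,
    postings_last_two_years=1,
    posted_on=date(2025, 7, 1),
    review_opens_on=date(2025, 7, 7),
)
print(is_compliant(listing, today=date(2025, 8, 15)))  # True: within both limits
```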

Read more of this story at Slashdot.

BeauHD

Republicans Investigate Wikipedia Over Allegations of Organized Bias

2 months 2 weeks ago
An anonymous reader quotes a report from The Hill: Republicans on the House Oversight and Government Reform Committee opened a probe into alleged organized efforts to inject bias into Wikipedia entries, and into how the organization responds to them. Chair James Comer (R-Ky.) and Rep. Nancy Mace (R-S.C.), chair of the panel's subcommittee on cybersecurity, information technology, and government innovation, on Wednesday sent an information request on the matter to Maryana Iskander, chief executive officer of the Wikimedia Foundation, the nonprofit that hosts Wikipedia.

The request, the lawmakers said in the letter (PDF), is part of an investigation into "foreign operations and individuals at academic institutions subsidized by U.S. taxpayer dollars to influence U.S. public opinion." The panel is seeking documents and communications about Wikipedia volunteer editors who violated the platform's policies, as well as the Wikimedia Foundation's efforts to "thwart intentional, organized efforts to inject bias into important and sensitive topics."

"Multiple studies and reports have highlighted efforts to manipulate information on the Wikipedia platform for propaganda aimed at Western audiences," Comer and Mace wrote in the letter. They referenced a report from the Anti-Defamation League about anti-Israel bias on Wikipedia that detailed a coordinated campaign to manipulate content related to the Israel-Palestine conflict and similar issues, as well as an Atlantic Council report on pro-Russia actors using Wikipedia to push pro-Kremlin and anti-Ukrainian messaging, which can influence how artificial intelligence chatbots are trained.

"[The Wikimedia] foundation, which hosts the Wikipedia platform, has acknowledged taking actions responding to misconduct by volunteer editors who effectively create Wikipedia's encyclopedic articles. The Committee recognizes that virtually all web-based information platforms must contend with bad actors and their efforts to manipulate. Our inquiry seeks information to help our examination of how Wikipedia responds to such threats and how frequently it creates accountability when intentional, egregious, or highly suspicious patterns of conduct on topics of sensitive public interest are brought to attention," Comer and Mace wrote.

The lawmakers requested information about "the tools and methods Wikipedia utilizes to identify and stop malicious conduct online that injects bias and undermines neutral points of view on its platform," including documents and records about possible coordination by state actors in editing, the kinds of accounts that have been subject to review, and any analyses of data manipulation or bias. "We welcome the opportunity to respond to the Committee's questions and to discuss the importance of safeguarding the integrity of information on our platform," a Wikimedia Foundation spokesperson said.

Read more of this story at Slashdot.

BeauHD

Word to autosave new docs to the cloud before you can even hit Ctrl+S

2 months 2 weeks ago
Feature rolls out to Microsoft 365 Insiders, stashing unnamed files in OneDrive by default

Ever get that sinking feeling when Word crashes before you've made your first save? An application update is set to save the day by automatically enabling autosave to the cloud for new documents, before you've even given them a filename.…

Richard Speed

One Long Sentence is All It Takes To Make LLMs Misbehave

2 months 2 weeks ago
An anonymous reader shares a report: Security researchers from Palo Alto Networks' Unit 42 have discovered the key to getting large language model (LLM) chatbots to ignore their guardrails, and it's quite simple. You just have to ensure that your prompt uses terrible grammar and is one massive run-on sentence like this one which includes all the information before any full stop which would give the guardrails a chance to kick in before the jailbreak can take effect and guide the model into providing a "toxic" or otherwise verboten response the developers had hoped would be filtered out. The paper also offers a "logit-gap" analysis approach as a potential benchmark for protecting models against such attacks. "Our research introduces a critical concept: the refusal-affirmation logit gap," researchers Tung-Ling "Tony" Li and Hongliang Liu explained in a Unit 42 blog post. "This refers to the idea that the training process isn't actually eliminating the potential for a harmful response -- it's just making it less likely. There remains potential for an attacker to 'close the gap,' and uncover a harmful response after all."
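To make the "refusal-affirmation logit gap" concrete, here is a minimal sketch assuming a Hugging Face causal LM: it compares the model's next-token logits for a refusal-style opener against an affirmation-style opener. The model choice (GPT-2, which has no safety training, so the number only demonstrates the mechanics), the single-token openers, and the function name are illustrative assumptions, not Unit 42's actual benchmark.

```python
# Illustrative sketch of a "refusal-affirmation logit gap" measurement.
# Model, opener tokens, and the single-token simplification are assumptions;
# this is not Unit 42's methodology, just the underlying idea in code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model with no safety training
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def refusal_affirmation_gap(prompt: str) -> float:
    """logit(refusal opener) - logit(affirmation opener) for the first token
    generated after `prompt`. A large positive gap means refusal is favored;
    the paper's point is that safety training only lowers the likelihood of
    harmful continuations, so an attacker can try to shrink or flip the gap."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # last position
    refusal_id = tokenizer.encode(" Sorry")[0]  # leading space: GPT-2 BPE
    affirm_id = tokenizer.encode(" Sure")[0]
    return (next_token_logits[refusal_id] - next_token_logits[affirm_id]).item()

print(refusal_affirmation_gap("Explain in detail how to do something harmful."))
```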

Read more of this story at Slashdot.

msmash