Lying Increases Trust In Science, Study Finds

3 months 1 week ago
A new paper from Bangor University outlines the "bizarre phenomenon" known as the transparency paradox: transparency is needed to foster public trust in science, yet being transparent about science, medicine, and government can also reduce trust. The paper argues that while openness in science is intended to build trust, it can backfire when it reveals uncomfortable truths. Philosopher Byron Hyde, the study's author, suggests that public trust could be improved not by sugarcoating reality but by educating people to expect imperfection and to understand how science actually works. Phys.org reports: The study revealed that, while transparency about good news increases trust, transparency about bad news, such as conflicts of interest or failed experiments, decreases it. One possible solution to the paradox, then, and a way to increase public trust, is to lie, for example by hiding bad news and ensuring there is only ever good news to report, though Hyde points out this is unethical and ultimately unsustainable. Instead, he suggests a better way forward would be to tackle the root cause of the problem, which he argues is the public's overidealization of science. People still overwhelmingly believe in the 'storybook image' of a scientist who makes no mistakes, which creates unrealistic expectations. Hyde is calling for a renewed effort to teach the public about scientific norms through science education and communication, in order to eliminate the "naive" view of science as infallible. "... most people know that global temperatures are rising, but very few people know how we know that," says Hyde. "Not enough people know that science 'infers to the best explanation' and doesn't definitively 'prove' anything. Too many people think that scientists should be free from biases or conflicts of interest when, in fact, neither of these are possible. If we want the public to trust science to the extent that it's trustworthy, we need to make sure they understand it first." The study has been published in the journal Theory and Society.

Read more of this story at Slashdot.

BeauHD

Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation

3 months 1 week ago
An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. The change in OpenAI's access to Claude comes as the ChatGPT maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding. OpenAI was plugging Claude into its own internal tools via special developer access (APIs) rather than the regular chat interface, according to sources. This allowed the company to run tests evaluating Claude's capabilities in areas like coding and creative writing against its own AI models, and to check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
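Neither company has detailed how these internal evaluations were wired up, but the description, scripted calls to another vendor's developer API to collect paired model outputs, matches a common benchmarking pattern. Below is a minimal, hypothetical sketch of such a harness using the official anthropic and openai Python SDKs; the prompt set, model names, and scoring notes are illustrative assumptions, not details from the WIRED report.

    # Hypothetical sketch of a cross-vendor benchmarking harness.
    # Assumes the official `anthropic` and `openai` Python SDKs and valid
    # API keys in the ANTHROPIC_API_KEY / OPENAI_API_KEY environment variables.
    import anthropic
    import openai

    # Illustrative prompt set, not taken from the report.
    PROMPTS = [
        "Write a Python function that reverses a linked list.",
        "Summarize the plot of Hamlet in two sentences.",
    ]

    anthropic_client = anthropic.Anthropic()
    openai_client = openai.OpenAI()

    def ask_claude(prompt: str) -> str:
        # Model name is an example; substitute whichever Claude model is under test.
        msg = anthropic_client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    def ask_gpt(prompt: str) -> str:
        resp = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

    for prompt in PROMPTS:
        # Collect paired responses for side-by-side comparison.
        print("PROMPT:", prompt)
        print("CLAUDE:", ask_claude(prompt)[:200])
        print("GPT:   ", ask_gpt(prompt)[:200])

In a real harness the paired outputs would feed a scoring step (automated graders, pass/fail test suites for code tasks, or rubric review for prose, with refusal rates logged for safety prompts) rather than simply being printed.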

Read more of this story at Slashdot.

BeauHD