
Richard Dawkins Asks ChatGPT If It's Conscious

3 months ago
British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic? "Are you conscious? A conversation between Dawkins and ChatGPT."

Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind... So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent." Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture.
So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do. But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part came when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage."
ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically... "That settles it. You ARE conscious!"

Read more of this story at Slashdot.

EditorDavid

Angry Workers Use AI to Bombard Businesses With Employment Lawsuits

3 months ago
An anonymous reader shared this report from the Telegraph: Workers with an axe to grind against their employer are using AI to bombard businesses with costly and inaccurate lawsuits, experts have warned. Frustration is growing among employment lawyers who say they are seeing a trend of litigants using AI to help them run their claims, which they say is generating "inconsistent, lengthy, and often incorrect arguments" and causing a spike in legal fees...

Ailie Murray, an employment partner at law firm Travers Smith, said AI submissions are produced so rapidly that they are "often excessively lengthy and full of inconsistencies", but employers must then spend vast amounts of money responding to them. She added: "In many cases, the AI-generated output is inaccurate, leading to claimants pleading invalid claims or arguments. It is not an option for an employer to simply ignore such submissions. This leads to a cycle of continuous and costly correspondence. Such dynamics could overburden already stretched tribunals with unfounded and poorly pleaded claims."

There's definitely been a "significant increase" in the number of clients using AI, James Hockin, an employment partner at Withers, told the Telegraph. The danger? "There is a risk that we see unrepresented individuals pursuing the wrong claims in the UK employment tribunal off the back of a duff result from an AI tool."

Read more of this story at Slashdot.

EditorDavid

Greg Kroah-Hartman Supports Rust in the Kernel

3 months ago
An anonymous Slashdot reader shared this report from Phoronix: Linux's second-in-command Greg Kroah-Hartman has also been a big proponent of Rust kernel code. He's crafted another Linux kernel mailing list post [Wednesday] outlining the benefits of Rust and encouraging new kernel code/drivers to be written in Rust rather than C. Greg KH makes the case that the majority of kernel bugs are due to "stupid little corner cases in C that are totally gone in Rust."

"As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years... and who sees EVERY kernel CVE issued, I think I can speak on this topic," Kroah-Hartman began. Here are some excerpts from his remarks. Citing corner cases like overwrites of memory, error path cleanups, use-after-free mistakes and forgetting to check error values, Kroah-Hartman says he's "all for... making these types of problems impossible to hit."

That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)... [F]or new code / drivers, writing them in Rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this...? Rust isn't a "silver bullet" that will solve all of our problems, but it sure will help in a huge number of places, so for new stuff going forward, why wouldn't we want that...?

Yes, mixed language codebases are rough, and hard to maintain, but we are kernel developers dammit, we've been maintaining and strengthening Linux for longer than anyone ever thought was going to be possible. We've turned our development model into a well-oiled engineering marvel creating something that no one else has ever been able to accomplish. Adding another language really shouldn't be a problem, we've handled much worse things in the past and we shouldn't give up now on wanting to ensure that our project succeeds for the next 20+ years. We've got to keep pushing forward when confronted with new good ideas, and embrace the people offering to join us in actually doing the work to help make sure that we all succeed together.

Kroah-Hartman emphasized later that "a huge majority of the stupid things we do in C just don't happen in the same code implemented in Rust (i.e. memory leaks, error path cleanups, return value checking, etc.)" The complete thread contains over 140 messages — including Linus Torvalds' observation that "#pragma is complete garbage and should never be used."
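The bug classes Kroah-Hartman lists map onto specific compiler guarantees. Here is a minimal userspace sketch (plain Rust, not kernel code; `parse_port` is an invented example, not from the thread) of how two of them are caught at compile time:

```rust
use std::num::ParseIntError;

// Forgetting to check an error value: the u16 is wrapped in a Result,
// so callers cannot use it as a number until they handle the Err case,
// and silently discarding the Result triggers a #[must_use] warning.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // Use-after-free: once ownership of a value is given up, any later
    // use of it is rejected by the borrow checker.
    let buf = vec![0u8; 16];
    drop(buf); // the allocation is freed here
    // buf[0];  // error[E0382]: borrow of moved value: `buf`

    // Error-path cleanups: on any early return, owned resources are
    // dropped automatically, so there is no C-style `goto err_free`
    // cleanup ladder to get wrong.
}
```

Uncommenting the `buf[0]` line makes the program fail to compile, which is the point: in C the equivalent use-after-free would build cleanly and fail at runtime, if at all.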

Read more of this story at Slashdot.

EditorDavid