
Is the AI Boom Leading to More Natural Gas-Powered Utilities?

3 months ago
New power plants burning natural gas are being built all across America, reports the Washington Post, calling it a gas boom "driven in large part by AI." The paper points to tech companies like Microsoft and Meta — which "looked to gas amid a shortage of adequate new clean energy" — while noting that those companies "say they plan to offset their development of natural gas capacity with equal investments in clean energy like solar and wind."

[E]ven coal is making a comeback. But the biggest push is for gas, with more than 220 plants in various stages of development nationwide. They are often pitched as a bridge until more clean power is available, sometimes with promises the plants will eventually be equipped with nascent technology that traps greenhouse gas emissions. But the timeline for installing such "carbon capture" is vague.

"These companies are building these massive new gas plants that are going to be there for 30 to 50 years," said Bill Weihl, a former director of sustainability at Facebook and founder of the nonprofit ClimateVoice. "That's not a bridge. It is a giant bomb in our carbon budget...."

Public filings from some of the big tech companies driving this development show their greenhouse gas emissions are soaring... "The last few years have revealed that a global energy transition is more complex and less linear than anticipated," Microsoft's board wrote in urging rejection of a December shareholder resolution demanding the company confront the climate risks of AI. "While urgency builds for decarbonization, so does the demand for energy." Shareholders rejected the resolution.

Microsoft is battling with environmental groups over its plans to build a multibillion-dollar data center in Mount Pleasant, Wisconsin, powered with electricity from natural gas. The groups' petition warns Microsoft's project "will push our state's climate goals out of reach, locking us into 30 more years of fossil fuels."
The company said in a statement that it remains committed to erasing its emissions by adding substantial clean power to regional power grids. "By the end of 2025 we expect to meet our goal of adding new sources of carbon-free electricity to the grid equal to 100 percent of the electricity used by our datacenters," the statement said. Meta says it is doing the same in Louisiana [where it's building a new 4-million-square-foot data center] and is "committed to matching our electricity use with 100 percent clean and renewable energy."

The article includes two revealing quotes:

"It is like everyone just gave up," said Aaron Zubaty, CEO of Eolian, a large clean energy developer that works with data centers.

"The words that have replaced 'energy transition' are 'AI' and 'data centers,'" said American Petroleum Institute President Mike Sommers, who represents the oil and gas industry in Washington. "We're transitioning from the energy transition to the energy reality ... We're going to need a lot more oil and gas."

Read more of this story at Slashdot.

EditorDavid

Richard Dawkins Asks ChatGPT If It's Conscious

3 months ago
British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic? "Are you conscious? A conversation between Dawkins and ChatGPT"

Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind... So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent." Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture.
So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do. But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part is when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage."
ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically... "That settles it. You ARE conscious!"

Read more of this story at Slashdot.

EditorDavid