From “technocracy.news”
This was not a gaffe: it was an expression of a thoroughly mechanical worldview held by all Technocrats. Altman leads the most powerful AI company on earth while operating from a framework that cannot distinguish the irreducible dignity of a human being from a server rack’s thermal profile. One critic put it bluntly: these people are “deeply antisocial and antihuman.” Another called him a psychopath.

In a photo-op at the AI Impact Summit in India, Anthropic CEO Dario Amodei refused to hold hands with him.
⁃ Patrick Wood, Editor.
The use of water by artificial intelligence has long been an online talking point, but Sam Altman of OpenAI – the company behind the chatbot ChatGPT – has found himself in hot water over comments about the technology’s water and energy consumption.
In an interview with The Indian Express’s Anant Goenka at the Express Adda event this week, Altman poured cold water on the claim that… well… cold water is used in AI data centres to stop computers from overheating.
Sorry, we couldn’t resist.
He said: “Water is totally fake. It used to be true, we used to do evaporative cooling in data centers, but now that we don’t do that, you see these things on the internet where [it’s] ‘don’t use ChatGPT, it’s 17 gallons of water for each query, or whatever’.
“This is completely untrue, it’s totally insane, no connection to reality.
“What is fair, though, is the energy consumption – not per query, but in total – because the world is using so much AI is real and we need to move towards nuclear or wind and solar very quickly.”
Goenka then turned to energy consumption, and referenced a theory from Microsoft founder Bill Gates that “AIs will learn from human evolution to be more efficient in how much energy they consume”.
Responding to this, Altman said: “One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model, relative to how much it costs a human to do one inference query.
“But it also takes a lot of energy to train a human. It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.
“The fair comparison is if you ask ChatGPT a question, how much energy does it take once a model is trained to answer that question, versus a human, and probably AI has already caught up on an energy efficiency basis, measured that way.”
But Altman’s answer hasn’t gone down well on X/Twitter, with one user writing that “these people are deeply antisocial and antihuman”:
He’s not just defending AI energy use. He is smuggling in a whole anthropology where humans are basically inefficient meat computers that you have to pour food and years into before they become useful. And once you accept that, the next move is obvious. If people are just costly biological training runs, then burning mountains of electricity to build synthetic intelligence starts to feel not only equal, but superior, even if it negatively impacts actual humans.
That is the dystopian part. It makes human development sound like a bug in the system, and it makes sacrificing human and creational flourishing for more computational power sound logical. To him, the grid gets strained, prices go up, ecosystems get hit, but hey, humans eat too, so what’s the difference?
The difference is that humans aren’t an inefficient line item. They’re the point. If your worldview can look at a child growing into an adult and describe it as energy spent to train intelligence, you haven’t said something profound. You’ve revealed a horrifically rotten worldview.
⁃ L. David Fairchild via X
