Science

AI as the New Empire? Karen Hao Explains the Hidden Costs of OpenAI’s Ambitions

By VernoNews | December 13, 2025 | 22 Mins Read


Kendra Pierre-Louis: For Scientific American’s Science Quickly, I’m Kendra Pierre-Louis, in for Rachel Feltman.

In 2022 OpenAI unleashed ChatGPT onto the world. In the years since, generative AI has wormed its way into our inboxes, our classrooms and our medical records, raising questions about what role these technologies should have in our society.

A Pew survey released in September of this year found that 50 percent of Americans were more concerned than excited about the increased use of AI in their day-to-day lives; only 10 percent felt the opposite way. That’s up from the 37 percent of Americans whose dominant feeling was concern in 2021. And according to Karen Hao, the author of the recent book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, people have plenty of reasons to worry.




Karen recently chatted with Scientific American associate books editor Bri Kane. Here’s their conversation.

Bri Kane: I wanted to really jump right into this book because there’s so much to cover; it’s a dense book in my favorite kind of way. But I wanted to start with something that you bring up really early on, [which] is that you are able to be clear-eyed about AI in a way that a lot of reporters and even regulators are not able to be, whether because they aren’t as well-versed in the technology or because they get stars in their eyes when Sam Altman or whoever starts talking about AI’s future. So why are you able to be so clearheaded about such a complicated topic?

Karen Hao: I think I just got really lucky in that I started covering AI back in 2018, when it was just way less noisy as a space, and I was a reporter at MIT Technology Review, which really focuses on covering the cutting-edge research coming out of different disciplines. And so I spent most of my time speaking with academics, with AI researchers who had been in the field for a long time and whom I could ask lots of silly questions about the evolution of the field, the different philosophical ideas behind it, the latest techniques that were happening and also the limitations of the technologies as they stood.

And so I think, really, the one advantage that I have is context. Like, I’ve—I had years of context before Silicon Valley and the Sam Altmans of the world started clouding the discourse, and it allows me to more calmly analyze the flood of information that’s happening right now.

Kane: Yeah, you center the book around a central premise, which I think you make a very strong argument for, that we should be thinking about AI in terms of empires and colonialism throughout history. Can you explain here why you think that’s an accurate and helpful lens and what in your research and reporting brought you to this conclusion?

Hao: So the reason why I call companies like OpenAI “empires” is both because of the sheer magnitude at which they’re operating and the controlling influence they’ve developed in so many facets of society but also the tactics by which they’ve amassed an enormous amount of economic and political power. And that’s specifically that they amass that power through the dispossession of the vast majority of the rest of the world.

And I highlight many parallels in the book for how they do this, but one of them is that they extract an extraordinary amount of resources from different parts of the world, whether that’s physical resources or the data that they use to train their models from humans and artists and writers and creators or the way that they extract economic value from the workers who contribute to the development of their technologies and never really see a proportional share of it in return.

And there’s also this massive ideological component to the current AI industry. Sometimes people ask me, “Why didn’t you just make it a critique of capitalism? Why do you have to draw on colonialism?” And it’s because if you just look at the actions of these companies through a capital lens, it actually doesn’t make any sense. OpenAI doesn’t have a viable business model. It’s committing to spending $1.4 trillion in the next few years when it only has tens of billions in revenue. The profit motive is coupled with an ideological motive: this quest for an artificial general intelligence [AGI], which is a faith-based idea; it’s not a scientific idea. It’s this quasi-religious notion that if we continue down a particular path of AI development, somehow a kind of AI god is gonna emerge that will solve all of humanity’s problems, or damn us to hell. And colonialism is the fusion of capitalism and ideologies, so that—there’s, there’s just a multitude of parallels between the empires of old and the empires of AI.

The reason why I started thinking about this in the first place was because there were a lot of scholars who started articulating this argument. There were two pieces of scholarship that were particularly influential to me. One was a paper called “Decolonial AI” that was written by William Isaac, Shakir Mohamed and Marie-Therese Png out of DeepMind and the University of Oxford. The other one is the book The Costs of Connection, published in 2019 by Nick Couldry and Ulises Mejias, that also articulated this idea of a data colonialism that underpins the tech industry. I realized this was the frame to also understand OpenAI, ChatGPT and where we are in this particular moment with AI.

Kane: So I wanted to talk to you about the scale of what AI is capable of now and the continued growth that these companies are planning for in the very near future. Specifically, what I think your book touches on that a lot of conversations around AI are not really focusing on is the scale of environmental impact that we’re seeing with these data centers and what we’re planning to build more data centers on top of, which is viable land and potable water. So can you talk to me about the environmental impacts of AI that you’re seeing and that you’re most concerned with?

Hao: Yeah, there are just so many intersecting crises that the AI industry’s path of development is exacerbating.

One, of course, is the energy crisis. So Sam Altman just a couple weeks ago announced a new target for how much computational infrastructure he wants to build: he wants to see 250 gigawatts of data-center capacity built by 2033—just for his company. Who knows if it’s even possible to build that. Like, Altman has estimated that this could cost around $10 trillion. Where is he gonna get that money? Who, who knows? But if that were to come to pass, the primary energy source that we would be using to power this infrastructure is fossil fuels, because we’re not gonna get a massive breakthrough in nuclear fusion by 2033, and renewable energy just doesn’t cut it because these facilities require being run 24/7 and we—renewable energy just can’t be that supply.

And so Business Insider had this investigation earlier this year that found that utilities are, quote, “torpedo[ing]” their renewable-energy goals in order to service the data center demand. So we’re seeing natural gas plants having their lives extended, coal plants having their lives extended. And that’s not just pumping emissions into the atmosphere; it’s also pumping air pollution into communities. And part of Business Insider’s investigation found that there could be billions of dollars of health care costs that result from this astronomical increase in, in air pollution in communities that have already historically been denied their fundamental right to clean air. We’ve seen incredible reporting coming out of Memphis, Tennessee, for example, where Colossus, the supercomputer being used to train Grok, is being run on 35 [reportedly] unlicensed methane gas turbines that are pumping that, toxic pollutants into that community’s air.

Then you have the problem of the freshwater consumption of these facilities. Most of these facilities are cooled with water because it’s more energy-efficient, ironically. But then, when it’s cooled with water, it needs to be cooled with freshwater because any other kind of water leads to the corrosion of the equipment or to bacterial growth. And Bloomberg then had an investigation finding that two thirds of these new facilities are going into water-scarce areas. And so there are literally communities around the world that are competing with Silicon Valley infrastructure for life-sustaining resources.

There was this article from Truthdig that put it really well: the AI industry—we should be thinking of this as a heavy industry. Like, this is—it is extremely toxic to the environment and to public health around the world.

Kane: Well, some might say that the concerns around the environmental impact of AI will just be solved by AI: “AI will just tell us the solution to climate change. It’ll crunch the numbers in a way we haven’t done before.” Do you think that’s realistic?

Hao: What I would say is, like, that is clearly based on speculation, and the harms that I just described are actually happening right now. And so the question is, like, how long are we going to put up with the, the actual harms and hold out for a speculative possibility that maybe, at the end of the road, it’s all gonna be fine?

Like, of course, Silicon Valley tells us we can hold on for as long as, as they want us to because they’re going to be fine—like, the Sam Altmans of the world are gonna be fine. You know, they have their bunkers built, and they’re all set up to survive whatever environmental crisis comes after they’ve destroyed the planet. [Laughs.]

But the possibility of an AGI emerging and fixing everything is so astronomically small, and I have to emphasize, like, AI researchers themselves don’t even believe that this is going to come to pass. There was a survey earlier this year that found that [roughly] 75 percent of long-standing AI researchers who are not in the pocket of industry don’t think we’re on the path to an artificial general intelligence that’s gonna solve all of our problems.

And so just from that perspective, like, we shouldn’t be using a teeny, tiny possibility on the far-off horizon that isn’t even scientifically backed to justify an, an extraordinary and irreversible set of damages that are occurring right now.

Kane: So Sam Altman is a central figure of your book. He’s the central figure of OpenAI, which has become one of the biggest, most important AI companies in the world. But you also say in your book that, in your opinion, he’s a master manipulator who tells people what they want to hear, not what he really believes or an objective truth. So do you think Sam Altman is lying or has lied about OpenAI’s current abilities or their realistic future abilities? Or has he just fallen for his own marketing?

Hao: The thing that’s kind of confusing about OpenAI and the thing that surprised me the most when I was reporting the book is, at first, I came to some of their claims around AGI with the skepticism of: “This is all rhetoric and not actually rooted in any kind of sincerity.” And then I realized in the process of reporting that there are actual people who genuinely believe this within the organization and, and within the broader San Francisco community. And there are quasi-religious movements that have developed around what we then hear in public as narratives that AGI could solve all of humanity’s problems or AGI could kill everyone.

It’s really hard to determine exactly whether Altman himself is a believer in this regard or whether he has just found it politically savvy to leverage the genuine beliefs that are bubbling up within the broader AI community as, as part of the rhetoric that allows him to negotiate more and more and more resources and capital to come to OpenAI. But one of the things that I also wanna emphasize is I think it’s—sometimes we fixate too much on individuals and whether or not the individuals are good or bad people, like, whether, whether they have good moral character or whatever. I think, ultimately, the problem isn’t the individual; the problem is the system of power that has been built to allow any individual to affect billions of people’s lives with their decisions.

Sam Altman has his particular flaws, but no one is perfect. And, like, anyone who would sit in that seat of power would have their particular flaws that would then cascade and have massive ripple effects on people all around the world. And I just don’t think that, like, we should ever be allowing this to happen. That’s an inherently unsound structure. Like, even if Altman were, like, more charismatic or, or more truthful or whatever, that doesn’t mean that we should suddenly cede him all of that power. And even if Altman were swapped out for someone else, that doesn’t mean that the problem is solved.

I do think that Altman, specifically, is an incredible storyteller, able to be very persuasive to many different audiences and convince those audiences to cede him and his company extraordinary amounts of power. We should not allow that to happen, and we should also be focused on dismantling the power structure and holding the company accountable rather than fixating on, on, necessarily, the individual himself.

Kane: So one thing you just brought up is the global ramifications of some of these actions that are happening, and one thing that really struck me about the book is that you did a lot of international travel. You visited the data centers and spoke directly with AI data annotators. Can you tell me about that experience and who you met?

Hao: Yeah, so I traveled to Kenya to meet with workers that OpenAI had contracted, as well as workers who were just broadly being contracted by the rest of the AI industry that was following OpenAI’s lead. And with the workers that OpenAI contracted, what OpenAI wanted them to do was to help it build a content-moderation filter for the company’s GPT models. Because at the time they were trying to expand their commercialization efforts, and they realized that if you put text-generation models that can generate anything into the hands of millions of people, you’re gonna run into a problem where it’s been trained on the internet—and the internet also has really dark corners. It could end up spewing racist, toxic hate speech at users, and then it would become a massive PR crisis for the company and, and make the product very unsuccessful.

For the workers what that meant was they had to wade through some of the worst content on the internet, as well as AI-generated content where OpenAI was prompting its own AI models to imagine the worst content on the internet to provide a more diverse and comprehensive set of examples to these workers. And these workers suffered the same kinds of psychological traumas that content moderators of the social media era suffered. They were so relentlessly exposed to all of the awful tendencies in humanity that they broke down. They started having social anxiety. They started withdrawing. They started having depressive symptoms. And for some of the workers that also meant that their families and their communities unraveled because individuals are part of a tapestry of a particular place, and there are people who depend on them. It’s, like, a node in, in a broader network that breaks down.

I also spoke with, you know, the workers that, that were working for other kinds of companies, on a different part of the human labor-supply chain, not just content moderation but reinforcement learning from human feedback, which is this thing that many companies have adopted, where tens of thousands of workers have to teach the model what a good answer is when a user chats with the chatbot. And they use this technique not only to imbue certain kinds of values or encode certain values within the models but also to just generally get the model to work. Like, you have to teach an AI model what dialogue looks like: “Oh, Human A talks, and then Human B talks. Human A asks a question; Human B gives an answer.” And that’s now, like, the, the template for how the chatbot is supposed to interact with humans as well.

And there was this one woman I spoke to, Winnie, who—she worked for this platform called Remotasks, which is the back end for Scale AI, one of the main contractors of reinforcement learning from human feedback, both for OpenAI and other companies. And she—like, the content that she was working with was not necessarily traumatic in and of itself, but the conditions under which she was working were deeply exploitative, where she never knew who she was working for and she also never knew when the tasks would arrive on the Remotasks platform.

And so she would spend her days waiting by her computer for work opportunities to arrive, and when I spoke to her she had already been waiting for months for a task to arrive. And when those tasks arrived she was so worried about not capitalizing on the opportunity that she would work for 22 hours straight in a day to just try to earn as much money as possible to ultimately feed her kids. And it was only when her partner would tell her, like, “I’ll take over for you,” that Winnie would be willing to go take a nap. What she earned was, like, a couple dollars a day. Like, this is the lifeblood of the AI industry, and yet these workers see absolutely none of the economic value that they’re generating for these companies.

Kane: Do you see a future where the business of AI is conducted more ethically in terms of these workers that you spoke with?

Hao: I do see a future with, with this happening, but it—it’s not gonna come from the companies voluntarily doing that; it’s going to come from external pressure forcing them to do that. I, at one point, spoke with a woman who had been deeply involved in the Bangladesh [Accord], which is a global labor-standards agreement for the fashion industry that passed after there were some really devastating labor accidents in the fashion industry.

And what she said was, at the time, the way that she helped facilitate this agreement was by building up a significant amount of public pressure to force these companies to sign on to new standards for how they would audit their supply chains and guarantee labor rights to the workers who worked for them. And she saw a pathway within the AI industry to do the same exact thing. Like, if we get enough backlash from consumers, even from companies that are trying to use these models, it will force these companies to adopt higher standards, and hopefully, we can then codify that into some kind of regulation or legislation.

Kane: That makes me think of another question I wanted to ask you, which is: Are the regulators that we currently have, in—under this current administration, capable of regulating this AI growth? Are they caught up on the field, generally speaking, enough to know what needs regulation? Are they well-versed enough in this field to know the difference between Sam Altman’s marketing speak and [Elon] Musk’s marketing speak and [Peter] Thiel’s marketing speak, compared to the reality on the ground that you’ve seen with your own eyes?

Hao: We’re definitely suffering a crisis of leadership at the top in the U.S. and also in many countries around the world that would have been the ones to step up to regulate and legislate this industry. That said, I don’t think that means there’s nothing to be done in this moment. I actually think it means there’s a lot more work to be done in bottom-up governance.

We need the public to be active participants in calling out these companies. We—and we’ve seen this already happening, you know? Like, with the recent spate of mental health crises that have been caused by these AI models, we see an outpouring of public backlash, and families and victims suing these companies; like, that’s bottom-up governance at work.

And we see firms and brands and, nonprofits and civil society all calling on these companies to do better. And in fact, we recently saw a significant win, where Character.AI—as one of the companies that has a product that has been accused of killing a teen—recently announced that they’re going to ban kids from [using its chatbots]. And so there’s so much opportunity to continue holding these companies accountable, even in the absence of policymakers who are willing to do it themselves.

Kane: So we’ve talked about a lot of concerns around AI’s development, but you are also saying that there’s a lot of optimism to be had. Do you consider yourself an AI doomer or an AI boomer?

Hao: I’m neither a boomer nor a doomer by the actual definition that I use in the book, which is that both of these camps believe in an artificial general intelligence and believe that AI will ultimately develop some kind of agency of its own—maybe consciousness, sentience—and I just don’t think that it’s even worth engaging in a project that’s trying to develop agentic systems that take agency away from people.

What I see as a much more hopeful vision of an AI future is returning to creating AI models and AI systems that assist, rather than supplant, humans. And one of the things that I’m really bullish about is specialized AI models for solving particular challenges that are, that are problems that, like, we need to overcome as a society.

So I don’t believe in AGI on the horizon fixing climate change, but there’s this climate change nonprofit called Climate Change AI that has done the hard work of cataloging all of the different challenges—well-scoped challenges—within the climate-mitigation effort that, that can actually leverage AI technologies to help us tackle them.

And none of the technologies that they’re talking about are related any—in any way to large language models, general-purpose systems, a theoretical artificial general intelligence; they’re all these specialized machine-learning tools that are doing things like maximizing renewable energy production, minimizing the resource consumption of buildings and cities, optimizing supply chains, increasing the accuracy of extreme-weather forecasts.

One of the examples that I often give is also DeepMind’s AlphaFold, which is also a specialized deep-learning tool that has nothing to do with extremely large-scale language models or, or AGI but was a, a tool trained on a relatively modest number of computer chips to accurately predict protein-folding structures from a sequence of amino acids—crucial for understanding human disease, accelerating drug discovery. [Its developers] won the Nobel Prize [in] Chemistry last year.

And these are the kinds of AI systems that I think we should be putting our energy, time, talent into building. We need more AlphaFolds. We need more climate-change-mitigation AI tools. And one of the benefits of these specialized systems is that they can also be much more localized and therefore respect the culture, language, history of a particular community, rather than creating a one-size-fits-all solution for everyone in this world. Like, that is also inherently extremely imperial [Laughs], to assume that we can have a single model that encapsulates the rich diversity of, of our humanity.

And so yeah, so I guess I’m very optimistic that there’s a more beautiful AI future on the horizon, and I think step one to getting there is holding these companies, these empires, accountable and then imagining those new possibilities and building them.

Kane: Thank you so much, Karen, for joining, and thank you so much for this work of reporting that you’ve done in Empire of AI.

Hao: Thank you so much for having me, Bri.

Pierre-Louis: And thanks for listening. Don’t forget to tune in on Monday for our rundown of some of the most important news in science.

Science Quickly is produced by me, Kendra Pierre-Louis, along with Fonda Mwangi and Jeff DelViscio. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Kendra Pierre-Louis. See you next time!
