An AI Critic Talks to a Tech School

Participants at “The AI Con” event at Stevens Institute of Technology on April 28 include, from left, Sandeep Mertia, Katheryn Detwiler, Emily Bender and Tiffany Li.

HOBOKEN, MAY 4, 2026.  During the Q&A, a librarian says she likes Emily Bender’s point that language brainwashes us to accept “artificial intelligence.” The librarian asks Bender, a fierce critic of AI, to propose a less innocuous label for the “data centers” feeding the AI boom.

Bender can’t think of a substitute for “data centers” off the top of her head, so I’ve come up with candidates: “Job-Annihilating Black Holes.” Or maybe “Heat-Wave-Inducing Plagiarism Factories.”

Let me back up a moment. My employer, Stevens Institute of Technology, is betting that AI will help it weather tempests buffeting academia (which is ironic, because AI is one of those tempests). Stevens is introducing an AI major next fall and is exploring ways in which AI can make education more efficient.

But Stevens isn’t embracing AI uncritically. Last week the School of Humanities, Arts & Social Sciences, HASS, to which I belong, hosted a talk by Bender, a linguist who specializes in language-processing programs.

A year before ChatGPT triggered the current AI boom, Bender and three co-authors published “On the Dangers of Stochastic Parrots.” This prescient paper argues that large language models, which don’t “understand” language any more than parrots do, are exacerbating climate change and inequality, among other problems.

In 2025 Bender and sociologist Alex Hanna expanded upon these arguments in The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Below, I summarize and elaborate on points made by Bender:

HYPE AND THE AI “CON.” Bender and Hanna call AI a “con,” meaning “a bill of goods you are being sold to line someone’s pockets.” AI companies and their shills carry out the con by inundating us with hype. “We’re told that AI is on the verge of doing science for us,” Bender and Hanna write, “finally providing us with answers to urgent problems from medical breakthroughs (discovering a cure for cancer!) to the climate crisis (discovering a solution for global warming!).”

DOOMERISM IS HYPE, TOO. AI “doom” scenarios, Bender and Hanna contend, are an especially clever type of hype, which recycles the old sci-fi trope that “machines with minds of their own will, perhaps, kill us all, intentionally or unintentionally.” Like claims that AI will solve all our problems, “doom” hype implies that AIs possess extraordinary powers; to ensure positive outcomes, we must entrust our fate to AI-makers and their enablers, who are “heroes out to save humanity.” Yes, as I asserted three years ago, AI moguls resemble mobsters setting up a protection racket: the moguls are promising to protect us from the threat they create.

LYING SYCOPHANTS. AI hype exploits our tendency to anthropomorphize, that is, to attribute human qualities to non-human things. This vulnerability was exposed by the legendary AI “therapist” ELIZA, created by Joseph Weizenbaum in the late 1960s. ELIZA was crude, typically responding to a typed statement, like “I feel sad today,” with a question: “Why do you feel sad today?” Some people nonetheless attributed understanding and empathy to ELIZA. So it is not surprising that many people trust and even fall in love with modern chatbots, which are exponentially more sophisticated than ELIZA. The chatbots are also designed to be “sycophantic,” that is, to suck up to us, to tell us what we want to hear, regardless of the truth. Amusingly, or horrifyingly, Sam Altman, the head of OpenAI, reportedly resembles a sycophantic chatbot. According to a New Yorker profile, Altman, in his relentless pursuit of success, tells people what they want to hear even if that requires lying to them.

DENIGRATING HUMANS. The flip side of exalting machines by claiming they’re like us is denigrating humans by claiming we’re just machines. In 1990 I asked Claude Shannon, father of information theory and AI pioneer, if machines can think. Shannon replied: “You bet. I’m a machine, and you’re a machine, and we both think, don’t we?” In 1993 Marvin Minsky told me that computers which run LISP software have better memories and hence are more “conscious” than we are. Similarly, Sam Altman has responded to Bender’s stochastic-parrot argument by tweeting, “I am a stochastic parrot and so r u.” I indulged in similar self-deprecation in a column called “Free Will and ChatGPT-Me.” This misanthropic schtick, I realize now, aids and abets the AI con.

AI EXPLOITS CHEAP LABOR. In March, I reported being mightily impressed by the way a Waymo transported me through San Francisco. Then I learned that humans in the U.S. and elsewhere remotely supervise the “driverless” Waymos. (Thanks to Daniel Nagornyi, my student, for alerting me to this contradiction, which Forbes has reported.) See also the 2025 book Waiting for Robots, in which sociologist Antonio Casilli reports on the hidden human labor underlying artificial intelligence and other forms of automation.

WAR PROFITEERING. From the start, AI has been a weapons program. In the 1950s AI pioneers John McCarthy, Marvin Minsky, Herbert Simon and Frank Rosenblatt “were concerned with developing tools that could be used for the guidance of administrative — and ultimately — military systems,” Bender and Hanna write. Today, AI purveyors such as Anthropic, OpenAI, Microsoft, Amazon and Nvidia are war profiteers. Last week, according to The New York Times, the Pentagon announced that “it had reached deals with some of the technology industry’s biggest companies in an effort to expand the military’s artificial intelligence capabilities.”

MEDIA COMPLICITY. Bender and Hanna note that “journalism, already suffering from dramatic job losses and fire sales of respected news organizations, is ripe for the infection of AI-generated content, intended to maximize eyeballs on ads with as little investment in actual journalism as possible.” Indeed, as part of their aggressive marketing campaign, AI firms such as OpenAI, Meta, Microsoft and Google have formed “partnerships” with major media. According to this website, the media partners include The Washington Post, The Guardian, The Atlantic, the Associated Press, Reuters, Condé Nast (publisher of The New Yorker) and the Financial Times. Can media that profit from AI report critically on it?

CAN AI BE STOPPED? During the Q&A with Bender, I point out that tech companies are extremely powerful, and they have allied themselves with Trump, who opposes regulation of AI. I ask Bender: What hope is there, realistically, that AI can be stopped or slowed?

Bender replies that many communities and politicians oppose the data centers on which AI depends; this trend gives her hope. Indeed, The New York Times reports that Republicans and Democrats are banding together to block data centers, which result in “higher electricity prices, decreased home values” and environmental damage.

Wait, I just thought of another name to replace “data centers”: “Slaughterhouses for Human Hopes.”

Further Reading:

I Hate AI!

What Is It Like to Be a Superintelligent Machine?

The Singularity Cult

How AI Moguls Are Like Mobsters

Cutting Through the ChatGPT Hype

Should Machines Replace Mathematicians?

Free Will and ChatGPT-Me
