Should Killer Robots Have Rights?

Should the Terminator have the right to vote and own a condo?

HOBOKEN, MARCH 3, 2026. I love philosophers, but sometimes they annoy me--for example, when they focus on what you might call sci-fi problems rather than real-world ones.

Example: On January 26 a philosopher I admire, Eric Schwitzgebel, gave an online talk on whether artificial intelligences might be conscious and hence deserving of rights. You can see the talk (hosted by physicist Kenneth Augustyn of Michigan Tech) here. Here’s Schwitzgebel’s abstract:

According to some leading scientific theories of consciousness, we are on the verge of creating genuinely conscious AI systems.  According to other leading scientific theories of consciousness, AI could never be conscious in anything like its current form.  Both liberal and conservative scientific theories of AI consciousness have substantial plausibility.  We cannot dismiss either.  We will thus likely soon be surrounded by AI systems who are debatably conscious.  If they are conscious, their consciousness is unlikely to be simple and froglike, but instead sophisticated and personlike.  There is no good ethical answer to the question of what we should do with AI systems who are debatably persons, deserving equal moral rights with us.  If we deny them full rights, then we risk perpetrating slavery and murder, perhaps on an enormous scale.  If we grant them full rights, we risk sacrificing real human lives for entities without interests worth the sacrifice.  The solution is not to create such debatable AI persons.

And here’s my elaboration: There are many different theories of consciousness, all of which seem pretty implausible. Experts can’t agree on what consciousness is, why it evolved, how it’s produced by brains and possibly other objects, what sorts of entities biological or artificial might have it—you get the picture.

Linguist Emily Bender calls AIs “stochastic parrots” that merely mimic conscious cogitation, Schwitzgebel notes. Neuroscientist Anil Seth and philosopher Ned Block have argued that biology is a prerequisite for consciousness. Others reject this perspective as too biocentric.

In 2023 David Chalmers, another favorite philosopher, estimated a 25 percent chance that chatbots will be conscious within a decade. Geoffrey Hinton, a pioneer of machine learning, suspects AIs are already conscious.

The debate over machine sentience has moral as well as economic and political implications. If AIs are conscious, they might feel pleasure and pain; they might suffer. AIs might have “experiences as rich and meaningful as ours,” Schwitzgebel says, noting that people are already falling in love with chatbots.

At some point we might feel compelled to grant AIs rights even more expansive than those protecting nonhuman animals like dogs and chimps. But giving “millions or even billions of machines” the “right to own property” and to “vote,” Schwitzgebel says, might infringe on our rights and trigger a “social crisis.”

He offers “policy suggestions” that might help avert this crisis. We should “prioritize” research on consciousness to help resolve questions over the sentience of AIs. Companies and universities involved in AI research should create boards—like those that weigh the ethics of experiments on animals—that consider whether a given AI might deserve moral consideration.

I have mixed reactions to Schwitzgebel’s talk. Part of me admires the seriousness and lucidity with which he analyzes the prospect of machine sentience, which has long fascinated me. I dig sci-fi flicks that dramatize this possibility: 2001: A Space Odyssey, The Terminator, The Matrix, A.I. (the Spielberg film), Ex Machina, Her.

But another part of me, the hard-nosed, skeptical, science-journalist part, the part that lives in the real world and broods over real-world problems, views debates over AI sentience and rights as a waste of time.

One reason is what I call the solipsism problem: I can’t be sure other humans, let alone chatbots, are conscious. Barring the invention of a “consciousness-meter,” debates over machine sentience will remain irresolvable and hence purely academic.

Here in the real world, moreover, AIs are already hurting people. Executives are replacing employees with AI. Governments and corporations are monitoring and manipulating citizens and consumers with AI. Militaries are using AI to kill people in Ukraine, Gaza and elsewhere. Given these trends, fretting over AI sentience and rights strikes me as a distraction, to put it mildly.

I don’t say all this to Schwitzgebel. I merely suggest, during the Q&A, that neither the Trump administration nor AI companies care about AI “rights” or other ethical issues. So what is the “realistic possibility,” I ask, that “policy ideas coming from philosophers or AI specialists or anybody are going to have any effect?”

“I share your pessimistic point of view to a substantial extent,” Schwitzgebel replies. “But I don’t think it is completely impossible that there would be ethical constraints of the sort that I’m considering, maybe even influenced by academics like me.” He notes that one major AI firm, Anthropic, has hired an ethicist, Kyle Fish, to think about whether its AI “might deserve some moral consideration.”

AI firms seem sensitive to negative attention, such as reports of young people committing suicide after talking to ChatGPT. OpenAI subsequently announced it was consulting with mental-health professionals on how to reduce the likelihood of such incidents. OpenAI’s response, Schwitzgebel says, shows that AI firms “will make ethical corrections” in response to criticism.

Yeah, maybe, but I doubt these “corrections” will be substantive. Take the case of Anthropic. The company has refused to allow the U.S. Department of War to deploy its AI in “autonomous” weapons, which can kill without human oversight. That sounds like an admirably ethical stance, right?

But look closer. Anthropic’s CEO, Dario Amodei, doesn’t rule out contributing to autonomous weapons; he says more research is needed to make sure such weapons work “reliably.” According to The Wall Street Journal, the U.S. military has already deployed Anthropic software in its current war against Iran.

Killer robots, which have long populated science fiction, are already here in the form of drones that can destroy targets on their own. And philosophers are worrying about whether robots should vote?

I’m being grossly unfair to Schwitzgebel--and hypocritical. I waste lots of time on stuff far less consequential than robots’ rights. I just read and wrote about Proust! And a novel featuring superintelligent spaceships! Who am I to tell Schwitzgebel what to care about?

But this war against Iran has inflamed me; I’m getting desperate. I’ve long seen war as our gravest threat. My fears have been compounded by the recent surge of artificial intelligences, which predatory capitalists are eagerly marketing to militaries. Then there is the ascent of an unstable warmonger to the U.S. presidency.

Scenarios that would have seemed like sci-fi a decade ago are now all too real. Would it be futile for Schwitzgebel and other intellectuals to denounce war and propose ways to end it? Probably. But I’d love to see them try.

Further Reading:

The End of War

Is Peace a Pipe Dream?

Anthropologist Demolishes Claim That War Is in Our Genes

To Abolish Nukes, We Must Abolish War

Dear Student Protesters, Please Oppose All War

Judith Butler on Nonviolence: A Critique

Dear Feminists, Please Help End War!

Is Killing Children Ever Justified?

The Statistics of Lovers’ Quarrels

Frans de Waal (RIP) and the Origins of War

Jimmy Carter’s Thoughts on the End of War

You’re Not Free If You’re Dead: The Case Against Giving Ukraine F-16s

Confessions of a Woke, Antiwar, Hockey-Playing Demonic Male
