Why AI Pioneer Marvin Minsky Called Me “Racist”

Marvin Minsky in 2008. Photo from Wikipedia

HOBOKEN, JANUARY 24, 2026. The brouhaha over artificial intelligence keeps reminding me of AI pioneer Marvin Minsky (1927-2016). That’s why I’m posting this revised version of my write-up of Minsky in The End of Science (which was published 30 years ago). The profile is based on a day I spent hanging out with Minsky at MIT’s Artificial Intelligence Lab in 1993, when AI was… well, read on. – John Horgan

Before I visit Marvin Minsky at MIT, colleagues warn that he might be defensive, even hostile. If I don’t want the interview cut short, I shouldn’t ask him too bluntly about the falling fortunes of artificial intelligence or his theories about how minds work.

A former associate pleads with me not to take advantage of Minsky's penchant for outrageous utterances: “Ask him if he means it, and if he doesn't say it three times you shouldn't use it.”

When I meet Minsky in May 1993, he’s edgy, but his agitation seems congenital, not situational. He fidgets ceaselessly, blinking, waggling his foot, pushing things around his desk.

Unlike most scientific celebrities I’ve interviewed, Minsky gives the impression of conceiving responses from scratch rather than retrieving them from memory. He’s often but not always incisive. “I'm rambling here,” he mutters after a riff on verification of mind-models collapses in a heap of sentence fragments.

Even his physical appearance has an improvised air. His large, round head appears entirely bald but is actually fringed by hairs fine as optical fibers. He wears a braided belt that supports, in addition to pants, a belly pack and holstered pliers. With his paunch and vaguely Asian features, Minsky resembles Buddha--reincarnated as a hyperactive hacker.

Minsky seems unable--or unwilling--to inhabit any persona for long. Early on, he lives up to his reputation as a curmudgeon and arch-reductionist. He expresses contempt for Roger Penrose and others who doubt computers can be conscious. Consciousness is a “trivial” issue, Minsky says. “I've solved it, and I don't understand why people don't listen.”

Consciousness is merely a type of short-term memory, a “low-grade system for keeping records.” In fact, computer programs such as LISP, which allow their processing to be retraced, are “extremely conscious,” much more than we humans are, with our pitifully shallow memory banks.

The only theorist of mind--other than himself--who truly grasps the mind's complexity is dead. “Freud has the best theories so far, next to mine, of what it takes to make a mind,” Minsky says.

Minsky even snubs MIT's Artificial Intelligence Laboratory, which he founded and where we happen to be meeting. "I don't consider this to be a serious research institution at the moment," he sniffs.

But as we wander through the lab, a metamorphosis occurs. “Isn't the chess meeting supposed to be here?” Minsky asks a group of researchers chatting in a lounge. “That was yesterday,” someone replies.

Undeterred at having missed the talk, Minsky spins tales about the history of chess-playing programs. This minilecture evolves into a reminiscence of Minsky's late friend Isaac Asimov. The science-fiction writer declined to see robots at MIT, fearing his imagination “would be weighed down by this boring realism.”

One lounger, noticing that he and Minsky are wearing the same pliers, yanks his instrument from its holster and flicks its retractable jaws into place. “En garde,” he says. Grinning, Minsky draws his pliers, and he and his challenger jab at each other like punks in a knife fight.

Minsky expounds on the tool’s pros and cons; his pliers pinch him during certain maneuvers. “Can you take it apart with itself?” someone asks. Everyone laughs at this allusion to a fundamental problem in robotics.

Returning to Minsky’s office, we encounter an extremely pregnant woman, who is scheduled to defend her doctoral thesis the next day. “Are you nervous?” asks Minsky. “A little,” she confesses. “You shouldn't be,” he says. He gently presses his forehead against hers, as if seeking to infuse her with his strength.

I realize, watching this scene, that there are many Minskys.

But of course there would be. Multiplicity is central to Minsky's view of the mind. In his book The Society of Mind he contends that brains contain diverse, specialized structures that evolved to solve different problems.

“We have many layers of networks of learning machines,” he explains to me, “each of which has evolved to correct bugs or to adapt the other agencies to the problems of thinking.” Minds can’t be explained in terms of a single set of “axioms,” “because we're dealing with a real world instead of a mathematical one.”

AI has faltered because researchers have succumbed to “physics envy”: the desire to reduce the brain’s intricacies to simple formulas. “They are defining smaller and smaller subspecialties that they examine in more detail, but they're not open to doing things in a different way.”

Minsky insists that minds have many methods for coping with even a single, simple problem. If your television doesn’t work, you check to see whether it’s plugged in, and maybe you slap it. If that fails, your physical problem becomes a social problem—finding a repairman who can fix the TV quickly and cheaply.

“That’s one lesson I can't get across” to AI researchers, Minsky says. “It seems to me that the problem the brain has more or less solved is how to organize different methods into working when the individual methods fail pretty often.”

As Minsky continues speaking, his emphasis on multiplicity takes on a metaphysical and even moral cast. He blames the problems of his field--and of science in general--on what he calls “the investment principle,” the tendency of humans to keep doing something they’ve learned to do well rather than seeking new solutions.

Repetition, or, rather, single-mindedness, holds a kind of horror for Minsky. “If there's something you like very much,” he asserts, “then you should regard this not as you feeling good but as a kind of brain cancer, because it means that some small part of your mind has figured out how to turn off all the other things.”

Minsky has mastered many skills during his career--he is adept in mathematics, philosophy, physics, neuroscience, robotics and computer science and has even written several science-fiction novels--because he loves the “feeling of awkwardness” triggered by learning something hard.

“It's so thrilling not to be able to do something. It's such a rare experience to treasure.” Minsky was a musical prodigy, but he stopped playing after deciding that music was becoming a soporific. “I had to kill the musician at some point,” he says. “It comes back every now and then, and I hit it.”

Minsky has no patience for those who claim the mind can never be fully understood. “Look, before Pasteur people said, ‘Life is different. You can't explain it mechanically.’” Just as science has explained life, so it will explain the mind.

A final theory of the mind will be extremely complex, but its truth could be demonstrated in several ways. First, a machine based on the theory will mimic human development. The machine would “start as a baby and grow up by seeing movies and playing with things,” Minsky says.

Second, advances in brain-imaging technology will help scientists determine whether neural processes corroborate the theory. “Once you get a [brain] scanner that has one angstrom resolution, then you could see every neuron in someone's brain. You watch this for 1,000 years and you say, ‘Well, we know exactly what happens whenever this person says “blue.”’”

If scientists discover a final theory of mind, I ask, what frontiers will be left to explore?

“Why are you asking me this question?” Minsky growls. The concern that scientists will run out of things to do is “pitiful,” he says. “There's plenty to do.” We humans may well be approaching our cognitive limits, but we will soon create machines that can surpass us as scientists.

But that would be machine science, I say, not human science.

“You're a racist, in other words,” Minsky says, his great domed forehead purpling. I scan his face for signs of irony but find none.

“The important thing for us is to grow,” Minsky continues, “not to remain in our own present stupid state.” We humans are just “dressed up chimpanzees.” And we should dedicate ourselves to creating beings smarter than us.    

When I ask what machine scientists might be interested in, Minsky suggests, half-heartedly, that they might try to comprehend themselves as they keep evolving. He’s more enthusiastic discussing the conversion of human psyches into digital avatars.

This technological advance would allow Minsky to indulge in dangerous pursuits, such as taking LSD or converting to a religious faith. “I regard religious experience as a very risky thing to do because it can destroy your brain in a rapid way. But if I had a backup copy...”

Minsky would also love to know how Yo-Yo Ma feels when playing the cello. Then Minsky reveals, to my surprise, that he doubts such a mind meld is possible. To feel what Yo-Yo Ma feels, Minsky would have to possess all of Yo-Yo Ma's memories; he would have to become Yo-Yo Ma. But in becoming Yo-Yo Ma, Minsky would cease to be Minsky.

With this astonishing admission, Minsky implies that minds, finally, are irreducible and unknowable.

Minsky is often derided as a rabid reductionist. But he is an anti-reductionist. His revulsion toward single-mindedness, his fondness for Freud, his passion for learning and novelty--these are traits of a scientific romantic, for whom the quest matters more than mere knowledge.

Postscript: Teresa Nakra, who directs the Music and Technology program at Stevens, knew Minsky. She sent me this video of him charming the audience in 2012 at a workshop she organized in his honor: “Music, Mind, and Invention.”

Further Reading:

I riff on Minsky’s ideas in “The Investment Principle,” a chapter in My Quantum Experiment.

Quantum Mechanics, the Chinese Room and the Limits of Understanding.

The Singularity Cult

Will AI Save Us From AI?

Free Will and ChatGPT-Me

Can a Chatbot Be Aware That It’s Not Aware?

Cutting Through the ChatGPT Hype

How AI Moguls Are Like Mobsters
