How AI Moguls Are Like Mobsters

Mob boss Tony Soprano would have admired the way AI executives use fear to make more money. Photo from Wikipedia.

June 1, 2023. Am I worried about artificial intelligence? Yes, but not because my students might ask ChatGPT to write papers for them, or because machines might enslave us. I’m worried that AI will help rich, powerful tech overlords become even richer and more powerful. 

And that brings me to the latest spike in AI fearmongering. This week, a nonprofit called the Center for AI Safety released this statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Yes, that’s right, extinction.

The signers of the “extinction” proclamation include leaders of companies building artificial intelligence, among them Bill Gates of Microsoft and Sam Altman of OpenAI, the company that triggered the current AI hysteria by releasing ChatGPT.

What’s going on here? Why would execs who make AI warn that the technology might annihilate us, as in Westworld and countless other works of science fiction? Because fearmongering is a clever way to hype their product. If the execs just told us that AI is really cool and will make our lives better, we could dismiss that as old-fashioned advertising, the kind used to sell iPhones and Tesla SUVs. Ho hum.

If you say your product is so powerful that it might destroy civilization, you get our attention, and you convince us to trust you. You must be honest if you’re trashing your own product, right? So the tech overlords say, in effect: AI is so potent that it might destroy life as we know it! Dystopian science fiction might become reality! We’re talking The Matrix and Rise of the Machines, people!

Do the tech overlords actually want to keep AI from insinuating itself even more deeply into our lives? Of course not! The overlords’ goal is to frighten us and then assure us that they can protect us from all the bad things that AI might do. This gambit resembles an old-fashioned protection racket.

Remember The Sopranos, the show about gangsters in New Jersey? The gangsters in Tony Soprano’s crew tell owners of local pizzerias and butcher shops: There are bad guys out there who might hurt you. Give us a cut of your profits, and we’ll protect you. Wink wink. The gangsters wink because they are the bad guys.

Fearmongering can also help AI overlords manipulate government officials. The New York Times reports that Sam Altman and other AI executives who signed the “extinction” statement recently met with Joe Biden and Kamala Harris to talk about the risks of AI. Altman, testifying before Congress, called for government regulation of the technology. 

Why would AI execs seek government oversight of their industry? Because regulation can help AI frontrunners maintain their advantage in an extremely competitive market. Recall that Sam Bankman-Fried, before he was arrested for allegedly overseeing a fraudulent cryptocurrency scheme, was calling for more regulation of cryptocurrency. In The Sopranos, similarly, shrewd mob bosses get rid of competitors by ratting them out to the feds.

Mobsters in The Sopranos are also always trying to get a piece of lucrative government projects, such as plans to modernize urban waterfronts. AI firms stand to profit from a global AI arms race in ways that Tony Soprano could only dream of. The Pentagon, egged on by former Google CEO Eric Schmidt, is boosting its spending on AI to maintain an edge over China. Silicon Valley firms are using the war in Ukraine to showcase unmanned aircraft and other weapons that incorporate AI, according to The New York Times.

Elon Musk should get credit for warning back in 2018 that AI “is far more dangerous than nukes.” Musk called for government regulation, too. Musk’s fellow tech bros, such as Mark Zuckerberg of Facebook, pooh-poohed his concerns back then. Now, apparently, more AI executives see the crazy-like-a-fox wisdom of Musk’s fearmongering.

Not all AIers have jumped on the bandwagon of doom. Pushing back against the extinction scenario, AI entrepreneur Andrew Ng tweeted that AI can save us from pandemics, climate change and killer asteroids. “So if you want humanity to survive & thrive the next 1,000 years,” Ng says, “let’s make AI go faster, not slower.” 

In a recent column on colonoscopies, I fretted over the “expert service problem.” That’s when you rely on the same expert to diagnose a problem and to fix it. The expert has a financial incentive to exaggerate or even invent problems, because he will make money “fixing” them. 

AI, the ultimate black-box technology, poses an especially severe expert service problem. Is AI going to save us or destroy us? I find both scenarios implausible. But I have no doubt that the AI overlords will keep getting richer and more powerful.

Postscript:

Quantum-computing expert Scott Aaronson, whom I admire, objected to my column on Facebook, and we had this exchange:

AARONSON:

1. I signed the statement. Hundreds of academics signed the statement. Are we in on the protection racket as well?

2. Sam Altman explicitly asked Congress for regulation of the major players like OpenAI and Google that would leave smaller startups free to innovate — the opposite of what your cynical theory predicts.

3. Altman has no equity in OpenAI (he’s already rich from Y Combinator). He might be totally wrong, but he isn’t doing this for the money.

Consider the possibility that people are saying things that sound absurd to you because they didn’t imagine GPT could possibly work so well, they’re astounded and a little scared, and they’re trying to adapt their worldviews to possibly the most spectacular development of their lifetimes rather than inventing post hoc reasons why it doesn’t really count.

HORGAN:

No, you and other non-industry people who have signed this statement are not in on the racket, but you're enabling it. I see this "extinction" statement as a virtue-signaling publicity stunt by AI moguls, which leverages the extraordinary popularity/notoriety of ChatGPT. The stunt is aimed at amplifying the influence of Altman and other AI execs by assuring us that they are good guys who can protect us if we trust them. Now, maybe ChatGPT represents a genuine discontinuity in AI. Maybe it's the harbinger of a dystopian Singularity. I doubt it, because decades of AI hype have made me a knee-jerk skeptic, but who knows. And maybe Sam Altman is a genuine altruist who wants to save humanity from this potential threat. But I don't trust Altman or any other Silicon Valley zillionaire with a messiah complex to save us from what they themselves have wrought.

AARONSON:

(1) Have you actually spent some time interacting with GPT-4? I could spend hours talking about all the ways that it's still limited, or confidently wrong... but I'm also acutely aware that what it does would've been considered impossible science fiction as recently as (say) 2015. One can improve over the heuristics "everything is revolutionary" / "everything is hype" via actual engagement with the things in question. Videoconferencing, smartphones, etc. were also hyped for a long time... but eventually they DID become our reality.

(2) Among people following these things closely, I'm unusually *skeptical* of AI-doom scenarios. I'm unusually open to the conservative position, which at this point is that AI will "merely" transform civilization by about as much as personal computing and the Internet did, and otherwise leave us calling the shots. But even if one judges the doom scenarios to have only ~2% probability or whatever, I'd say the success of GPT has moved AI safety research from "addressing hypothetical threats that are so far in the future, we don't even know what to do about them" to "unequivocally yes, let's have more AI safety research." That's why I signed the statement, and it's also why I've taken two years off from quantum computing to work mostly on AI safety.

(3) I've had just a few conversations with Altman, so it's not as if I can vouch for his character on some deep level. But claiming, as a cynical con you don't believe, that your company's products could lead to the end of civilization, would if nothing else be genuinely new. I don't think any mobster ever had the chutzpah to try such a thing.

HORGAN:

I suppose I'm cynical about Altman's call for regulation because the genie is out of the bottle. These AIs are out there, mutating rapidly. Companies and governments and militaries and militant groups around the world surely have AI much fancier than ChatGPT. You're an expert on AI safety now, so how can we stop this global AI arms race?

AARONSON:

I *don’t* actually think there are AIs much more impressive than GPT-4 and (say) AlphaFold in existence right now — or if there are, their existence is a very well-kept secret.

But yes, it’s indeed probably impossible at this point to put the genie back in the bottle (was it *ever* possible?). So, right or wrong, Altman’s position is that we should instead focus on getting the genie to do what we want.

In any case, I don’t see how you simultaneously defend the positions that (1) AIs “mutating rapidly,” in the hands of militaries, etc., seems like it could be terrifying, and (2) the people trying to sound the alarm about (1) are all selling snake oil.

HORGAN:

There's no contradiction between saying that AI is scary and that Altman et al are exaggerating its scariness in self-serving ways. Bad people are already using AI to monitor and manipulate us, and to spread disinformation and foment conflict. New AIs may exacerbate these problems. When the AI moguls promote the sci-fi "extinction" scenario and call for regulation, they distract us from the more mundane but all-too-real harms caused by AI. They also present themselves as our saviors instead of the people who got us into this mess.

Further Reading:

The day after I posted this column, The Atlantic published “AI Doomerism Is a Decoy,” which corroborates my cynical take on fearmongering by AI moguls.

Will Artificial Intelligence Ever Live Up to its Hype?

You’re Not Free If You’re Dead: The Case Against Giving Ukraine F-16s

See also Chapter Twelve of my new book My Quantum Experiment, which delves into the race to make quantum computers.
