Cutting Through the ChatGPT Hype

The problem with ChatGPT “is not that it occasionally makes mistakes,” says Erik J. Larson. “It's that it occasionally becomes a completely insane thing.”

October 13, 2023

ChatGPT changes everything! This and other smooth-talking artificial intelligences will soon be sentient! If they’re not already! They’re going to destroy us! We must shut down research! No, they’re going to save us! We must pour more funding into AI research, to beat China!

You’ve heard the hype. Eager for a no-bullshit take on ChatGPT, I asked Erik J. Larson to talk to my school, Stevens Institute of Technology. Larson is the author of the 2021 book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, which explores the limits of AI. Larson also tracks technology on his excellent Substack newsletter Colligo.

Larson isn’t an armchair critic. Trained in computer science and philosophy, he has founded two AI companies supported by the Defense Advanced Research Projects Agency (DARPA). In the early 2000s he worked as an “ontological engineer” (Larson smiles at the title now) at Cycorp, one of the first firms dedicated to building AI with general, common-sense knowledge. Below I summarize points Larson made during his October 4 talk (which, by the way, was instantly transcribed by Zoom’s audio-to-text software).

CHATGPT IS AMAZING!

ChatGPT took Larson--and, he suspects, its designers at OpenAI--by surprise. It is a so-called natural-language program, which can converse on a wide range of topics. It can easily pass the Turing test unless you know how to trip it up.

“It's a very powerful technology,” Larson says. “We just couldn't have reproduced that kind of conversational capability even a decade ago.” ChatGPT overcomes language-processing hurdles that Larson, in The Myth of Artificial Intelligence, predicted wouldn’t be solved soon.

BUT WHAT ABOUT COMMON SENSE?

When Larson began his career, he believed truly intelligent natural-language programs would require a form of reasoning called abduction. You start with an observation (the street is wet) and infer a cause (it’s been raining).

But there are other possible causes besides rain. Maybe a water tanker accidentally dumped its load. Maybe kids were playing with water guns. Maybe a car ran over a fire hydrant. Abductive reasoning, by which you choose the most probable cause, requires broad, generalized knowledge, aka common sense.

“You need a model of how the world works in order to understand what's plausible and not plausible,” Larson explains. You need to know what water is, where it comes from, how it behaves and so on. Programming this sort of generalized knowledge into a computer is arduous but unavoidable, if you want to build a machine with the equivalent of human common sense. So Larson assumed.
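To make abduction concrete, here is a toy Python sketch of inference to the best explanation for the wet-street example. The priors and likelihoods are invented numbers standing in for the kind of world knowledge Larson describes; nothing here comes from any real AI system.

```python
# Toy abduction: pick the explanation that best accounts for an observation.
# The numbers below are invented stand-ins for common-sense world knowledge.

priors = {                 # P(cause): how common each cause is in general
    "rain": 0.30,
    "burst fire hydrant": 0.01,
    "water-gun fight": 0.05,
    "tanker spill": 0.005,
}
likelihoods = {            # P(street is wet | cause)
    "rain": 0.95,
    "burst fire hydrant": 0.90,
    "water-gun fight": 0.20,
    "tanker spill": 0.85,
}

# Abduction as inference to the best explanation:
# weight each cause by P(observation | cause) * P(cause) and take the max.
scores = {cause: priors[cause] * likelihoods[cause] for cause in priors}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))   # -> rain 0.285
```

Encoding that background knowledge, for every observation a machine might encounter, is the arduous part Larson is pointing to.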

MACHINE LEARNING IS AMAZING, BUT…

There has always been another broad approach to AI: induction. The basis of machine learning, induction draws conclusions about the present based on what has happened in the past. “Induction basically says, what I see in the past gives me probabilities about what I should infer in the future,” Larson says.

In the early 2000s, AI engineers realized that they could train inductive natural-language-processing programs by feeding them masses of text culled from the internet. “You're not actually coding in rules that say, ‘This is how language works, this is how grammar works,’ and so on,” Larson explains. “You're actually just saying, ‘Look at these web pages, and then from that build a model of how language works.’”

ChatGPT employs an inductive method called sequence-prediction. If you feed ChatGPT a sequence of words, it composes a response to that sequence based on human responses to similar sequences in its enormous database. With this method, ChatGPT can write haikus, engage in complex moral reasoning, pass bar exams and opine on quantum mechanics.
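As a rough analogy for sequence prediction, here is a toy bigram model in Python: it predicts the next word purely from which words followed which in a tiny, made-up training text. ChatGPT’s Transformer is vastly more sophisticated, but the underlying principle, predicting what comes next from patterns in past text, is the same.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- real models are trained on a large slice of the web.
corpus = "the street is wet because the rain fell on the street".split()

# Count which word follows which.
next_words = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_words[word][following] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = next_words.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))    # -> "street" (follows "the" twice, vs. "rain" once)
print(predict("rain"))   # -> "fell"
```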

WHY CHATGPT FAILS

ChatGPT’s successes, Larson says, raise the question: “Do we actually need all this fancy-schmancy world knowledge, and so on? Or is it the case that if we just have enough data and enough computing power, and this Transformer architecture, we can just simulate all this intelligence?” (Transformer architecture is a key component of ChatGPT’s design.)
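For readers wondering what the Transformer architecture actually does, here is a bare-bones NumPy sketch of its core operation, scaled dot-product attention, applied to random toy matrices. It is meant only to show the mechanism, not anything about ChatGPT’s actual weights or scale.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # 4 tokens, 8-dimensional vectors

Q = rng.normal(size=(seq_len, d_model))    # queries: what each token is looking for
K = rng.normal(size=(seq_len, d_model))    # keys: what each token offers
V = rng.normal(size=(seq_len, d_model))    # values: the information to be mixed

scores = Q @ K.T / np.sqrt(d_model)        # how strongly each token attends to every other token
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
output = weights @ V                       # each token becomes a weighted blend of values

print(weights.round(2))                    # the attention pattern over the toy sequence
```

Stacks of such attention layers, with learned rather than random weights, are what let the model relate every word in a prompt to every other word.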

You still need the fancy-schmancy knowledge, Larson contends. If the goal of AI research is to create an actual “superintelligence” with broad, common-sense knowledge, ChatGPT probably represents a dead end. “It's not going to be a path forward in the way that [OpenAI CEO] Sam Altman and other people are talking about.”

Fed certain questions, ChatGPT “comes up with a completely ridiculous answer.” If you ask it to explain its answer, ChatGPT “confabulates, it literally lies to you.” This flaw is “fatal,” Larson says. You cannot entrust ChatGPT with crucial tasks, because you cannot predict in advance when it will fail. “There is no principled way to say beforehand, ‘Hey, we're going to have a problem if we ask it these types of questions.’”

ChatGPT has trouble, for example, with “event-causality identification tasks,” which involve inferring whether two events are causally related. Take, for example, this statement: “Minutes after a woman was suspended and escorted from her job, she returned with a gun and opened fire, killing two and critically injuring a third coworker before being taken into custody.”

ChatGPT, like most humans, will correctly infer that the woman shot her coworkers because she had just been suspended. But ChatGPT has a tendency to invent causal connections between unrelated events. “It actually will confabulate, or make up a story, about how they're connected,” Larson says. “This is yet another sign that there's some sort of fundamental problem lurking in this as a path to general intelligence.”
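To make the shape of such a probe concrete, here is a toy sketch in Python. The credulous_model function is a hypothetical stand-in that mimics the failure mode Larson describes, asserting a causal link between any two events; none of this comes from an actual evaluation or from OpenAI’s API.

```python
# Toy event-causality identification probe. `credulous_model` is a
# hypothetical stand-in for a system that invents a causal story
# connecting any two events it is given.

examples = [
    # (event_1, event_2, causally_related)
    ("A woman was suspended and escorted from her job.",
     "Minutes later she returned with a gun and opened fire on coworkers.",
     True),
    ("A woman was suspended and escorted from her job.",
     "A thunderstorm knocked out power across the city that evening.",
     False),
]

def credulous_model(event_1, event_2) -> bool:
    """Stand-in for a chatbot that asserts a causal link for any pair of events."""
    return True

def accuracy(model):
    hits = sum(model(e1, e2) == related for e1, e2, related in examples)
    return hits / len(examples)

print(accuracy(credulous_model))   # 0.5 -- right on the real link, wrong on the spurious one
```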

We humans make mistakes, too, of course. The problem with ChatGPT “is not that it occasionally makes mistakes,” Larson says. “It's that it occasionally becomes a completely insane thing.”

CHATGPT IS BEHIND THE TIMES

Sam Altman, the OpenAI CEO, claims ChatGPT can help solve urgent social problems, such as poverty. But ChatGPT is far too unreliable to fulfill this grand purpose, Larson asserts. ChatGPT is unreliable because it lacks a world model, background knowledge, common sense. “There's nothing that it can fall back on and say, ‘Wait a minute, that sounds a little wacky.’”

Another problem: An artificial general intelligence should be able to answer questions about current events, right? But ChatGPT’s database is not up to date. It only includes information up to January 2022, Larson says, and it is “basically blind” to anything beyond that point.

“If you need to know stuff about the past, it’s really brilliant,” Larson says. “But if you need to reason about things that are going on right now, today, it's actually of almost zero value.” [See Postscript below.]

DO BUSINESSES TRUST CHATGPT?

ChatGPT’s designers can devise software “patches” to fix problems, but patches make the software more kludgy. Larson compares ChatGPT to a boat constantly springing leaks. “You can't just keep plugging up holes in your boat, right? You have to build it so that it doesn't have leaks.” ChatGPT’s flaws stem from its “fundamental inability to understand how things work.”

Retraining ChatGPT to fix a flaw or make its knowledge more up to date requires “3,125 servers running continually for over 3 months,” Larson says, at an estimated cost of “a couple of billion dollars.” Computation-intense AIs like ChatGPT are only achievable by “the hyper-rich, super-funded.”
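A quick back-of-the-envelope calculation shows the scale those figures imply; the day count below is an assumption for “over 3 months,” and the result is server-hours only, since per-hour hardware costs vary widely.

```python
# Back-of-the-envelope on the retraining figures Larson cites:
# 3,125 servers running continually for "over 3 months".
servers = 3_125
days = 95                          # assumed: a bit over three months
server_hours = servers * days * 24
print(f"{server_hours:,} server-hours")   # 7,125,000 -- multiply by any per-hour
                                          # server cost to see why only the
                                          # "hyper-rich, super-funded" can afford it
```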

All these problems, Larson suspects, explain why businesses have been slow to embrace ChatGPT. Yes, it can converse with humans, like Amazon’s “Alexa on steroids,” Larson says, and it can “serve as a kind of adjunct to search.” But so far ChatGPT is “not causing a major splash in the business world.”

BEWARE WEAPONIZED AI

Larson emphasizes that ChatGPT, while an “engineering marvel,” is just a “tool,” not a sentient being. He gets frustrated when discussions of AI veer toward science-fiction scenarios, in which machines become sentient and nasty, like HAL in the film 2001: A Space Odyssey. These sci-fi scenarios distract us from the “very real dangers” posed by AI, Larson says.

AI can be used to generate misinformation, such as phony speeches and videos, in ways that can disrupt elections and undermine democracy. Larson is also concerned about military applications of AI, such as autonomous weapons systems, which can navigate and kill independently of human operators. AI has “obvious advantages in warfare,” because it allows you to kill your enemy without putting yourself “into harm's way.” Drones are already becoming increasingly capable of autonomous decision-making.

Larson knows a company designing “autonomous submarines,” which can carry out missions even if a communication blackout disconnects them from humans. Larson fears what will happen “if we automate warfare too much.” We are “placing humanity's future into the hands of these systems.”

Larson fears nations may not be able to regulate AI to prevent harmful applications. “Institutions always lag behind tech, right? We don't have answers to these questions because the capabilities, the tech, is way up in front of our ability to talk about it.”

You can see my entire chat with Erik Larson here. For more detailed assessments of ChatGPT and other technologies, check out Larson’s Substack newsletter.

Postscript: After his talk Larson told me that in September OpenAI’s partner Microsoft hooked ChatGPT up to Bing, Microsoft’s search engine. “So you can make the argument that [ChatGPT] does know about current events, although it’s basically writing summaries of webpages.”

Further Reading:

How AI Moguls Are Like Mobsters

Should Machines Replace Mathematicians?
