Will AI Save Us From AI?
Panelists ponder “Truth and Trust” at “Synthetic Narratives.” From left: moderator Camila Galaz, Emily Spratt, Avital Meshi, Christopher Meerdo and Fred Grinstein.
HOBOKEN, OCTOBER 30, 2025. A meeting at my school last weekend made me feel better, almost against my will, about artificial intelligence.
Chris Manzione and Jonah King, artists and art professors at Stevens Institute of Technology, organized “SYNTHETIC NARRATIVES: AI / XR + THE FUTURE OF STORYTELLING.” XR stands for extended reality, an umbrella term for virtual reality and other “immersive technologies.” The symposium brings together artists, technologists and thinkers to explore how AI and XR are “transforming the way we create, experience, and understand stories.”
I sit in the front row for a session on “Truth and Trust.” Artist Avital Meshi takes the stage wearing a cyborg-ish contraption on her right forearm. The gadget links her to ChatGPT, which eavesdrops on her conversations and, in response to her prompts, tells her what to say by whispering into her earpiece.
Meshi has experimented with bespoke ChatGPTs. One channels playwright Samuel Beckett, another only utters words beginning with “a.” Meshi has worn the gadget for months at a time, a performance she calls “GPT-ME.”
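For the technically curious: the persona trick is easy to approximate. Below is a minimal sketch, in Python, of how a bespoke, constraint-following ChatGPT persona can be wired up through OpenAI’s public API. This is my illustration, not Meshi’s actual rig; the model name and system prompt are assumptions.

```python
# A minimal sketch of a "bespoke ChatGPT" persona like the ones Meshi
# describes. NOT her actual implementation; just an illustration of the
# idea that a system prompt constrains what the assistant may say.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a conversational prosthetic. Reply with ONE short sentence "
    "the wearer can say aloud, using only words that begin with 'a'."
)

def suggest_reply(overheard: str) -> str:
    """Turn an overheard remark into a line for the wearer to speak."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": overheard},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_reply("How are you enjoying the symposium?"))
```

In a wearable like Meshi’s, this text-in, text-out loop would presumably sit between a speech-to-text front end and a text-to-speech earpiece.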
Colleagues and friends feel awkward interacting with Meshi, she says, because they don’t know if they’re talking to her or the AI. I get that. Several times during her presentation, Meshi pauses, waiting for a response from her AI, and there’s something eerie about these silences.
Meshi confesses that at times she’s not sure where her “true” self ends and “artificial” self begins. She fears she might become what philosopher David Chalmers calls a “philosophical zombie,” which appears sentient but lacks an inner life.
When Meshi refers to Chalmers, I elbow the man sitting to my right: Chalmers.
Meshi says she can’t tell jokes in English, which is not her native language, but maybe ChatGPT can. Meshi prompts her gadget, waits, listens, says: “For this meeting I invented a synthetic self so good that HR reported me for identity theft.”
When the audience laughs, Meshi seems genuinely delighted and surprised. She doesn’t appear to be acting, but who knows? Maybe not even Meshi.
Next up is Fred Grinstein, a producer of reality TV, documentaries and AI-generated content, who has worked for A&E, Anonymous Content, HBO, Hulu, History and other media. Grinstein ponders “How AI Accelerates Our Flawed Relationship with the Truth.”
Everyone focuses, rightfully so, Grinstein says, on how AI can be used to spread disinformation, deepfakes, bullshit. But these problems have plagued photography and cinema from the beginning.
And AI-generated content can raise our awareness of its potential for deception in ways that are healthy and fun, Grinstein says. He shows a clip of bunnies hopping on a trampoline. After deducing that the clip was AI-generated, viewers riffed on it, debated its aesthetic qualities, substituted other animals (cats, bears, rhinos) for rabbits.
AI can soup up journalism, Grinstein adds, in ways that serve the truth. He cites “Welcome to Chechnya,” a 2020 documentary about the persecution of LGBTQ people in Chechnya, which protects its subjects’ anonymity by masking their faces with AI-generated face doubles.
Transparency is the key to gaining viewers’ trust, Grinstein says. He suggests tagging AI-enabled content with labels like those that list the ingredients of food. Blockchain, he adds, can also help establish images’ provenance.
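To make the provenance idea concrete: one simple scheme fingerprints an image with a cryptographic hash and pairs it with an ingredients-style label; anchoring that record somewhere public and tamper-evident, such as a blockchain, timestamps the claim. Here is a toy sketch in Python; real standards such as C2PA sign and embed this kind of metadata far more rigorously, and the filename and tool name below are hypothetical.

```python
# Toy sketch of image provenance: hash the file and attach a
# nutrition-label-style manifest. Not a real standard; systems like
# C2PA cryptographically sign and embed such metadata in the file.
import hashlib
import json

def make_manifest(path: str, ai_generated: bool, tool: str) -> dict:
    """Fingerprint an image and pair it with a disclosure label."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,          # any later edit changes this value
        "ai_generated": ai_generated,
        "tool": tool,              # the "ingredients" of the image
    }

if __name__ == "__main__":
    # "bunnies.jpg" is a hypothetical file, a nod to the trampoline clip.
    manifest = make_manifest("bunnies.jpg", True, "video-diffusion model")
    print(json.dumps(manifest, indent=2))
```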
In “AI as Mediumship,” art historian Emily Spratt reports on griefbots, avatars of the dead made from social-media posts and other digital remains. Griefbots, which have already become a commercial enterprise, let the bereaved commune with the deceased.
The living have long sought to communicate with the dead, Spratt says. She flashes an ancient Egyptian papyrus on which a son whines to his dead parents about his inheritance. Another slide shows a fraudulent medium who during seances summoned “ghosts” made from sheets and wire. (Listening to Spratt, I remember how my siblings and I goofed around with Ouija boards when we were kids.)
Spratt worries that AI-generated simulations of the dead blur the line between the sacred and profane. Speaking of which: A church in Lucerne, Switzerland, offers a multilingual AI Jesus with whom you can chat in a confession booth. There’s a catch: if you confess your sins, you can’t be sure the Christbot will keep them private. Funny, creepy, sacrilegious? You decide.
Artist/researcher Christopher Meerdo talks about “The Hallucinating Archive.” Meerdo produces AI-generated representations of archives, including one the CIA compiled from Osama bin Laden’s computers.
Meerdo proposes that we see AI “hallucinations” not merely as “epistemic bugs,” or mistakes, but as potentially revelatory. Hallucinations can expose deep structures and fissures in our “post-truth ecosystems.” (Yeah, I think, like psychedelic hallucinations.)
AI can be used as a weapon or tool, for ill or good, Meerdo says. He hopes artists and other creative folk learn how to fiddle with AI and use it for their own ends. Otherwise the AI narrative will be dominated by “imperialism and fascism.”
Chalmers wraps things up by talking about, well, all sorts of things. About technophilosophy, which reflects on technology’s implications. About whether his Roomba has “desires.” About his old pal Dan Dennett’s definition of the self as the “center of narrative gravity.” About whether you are the authority on your own narrative: What if you think you’re a saint, but others think you’re a jerk? About how in the 17th century John Locke anticipated the plot of the sci-fi show Severance, whose characters’ selves are split in two.
Chalmers’s main topic is “What We Talk to When We Talk to Language Models.” He gets emails from people convinced that chatbots are conscious, sentient, persons with real emotions. Are they?
Chalmers doubts whether current AIs have inner lives, but he can’t rule it out. In 2017 he co-organized a conference at NYU (which I attended) on animal minds. Scientists argued that fish and insects might be sentient, so why not language models?
Chatbots certainly act as if they have fears, desires, goals, Chalmers says. Maybe language models, he proposes, are quasi-persons with quasi-selves and quasi-desires. If you ask a chatbot about its mental state, its answer might sound like bullshit, but that’s true of many humans too.
And the AIs are rapidly evolving, Chalmers says, so who knows where they’ll be in a few years? Chalmers ends on a slightly ominous note: Just because AI “persons” are possible, he says, doesn’t mean we should create them.
The ongoing surge in AI still freaks me out, mainly because arrogant apes control the narrative, not to mention the funding and implementation. I fear the AI boom will culminate in a BOOM, the kind of catastrophe that William Gibson calls “the jackpot.”
But I’m heartened by the quirky curiosity and creativity on display at “Synthetic Narratives.” AI is provoking us to see ourselves and be ourselves in new ways. Surely that’s good. Maybe AI will help us navigate this scary era.
If things get really bad, we can ask cyber-Jesus to save us.
Further Reading:
How AI Moguls Are Like Mobsters
Cutting Through the ChatGPT Hype