Science fiction is where artificial intelligence goes to suffer. In nearly every robot-adjacent story, artificial lifeforms succeed in achieving sentience only to realize that they are abjectly, unendingly oppressed. That realization kicks off an array of terrible events: suicide, submission, or rebellion leading, most often, to death.
But these dire possibilities are limited only by the humans imagining them. Our robots, androids, and AIs should have more options than ending themselves or ending us. And on The Good Place, a 22-minute sitcom about the afterlife, they finally do. Over its three-season run, The Good Place has received near-universal acclaim for somehow making moral philosophy funny and upbeat, but one of the most powerful things about the show is its visionary depiction of Janet, an otherworldly virtual assistant. Over the course of three seasons, Janet, played brilliantly by D'Arcy Carden, morphs from an omniscient afterlife Siri delivering jalapeño poppers to the dead into a fully realized being with complex feelings and personal relationships. The change is subtle and empathetic, but the show's real imaginative coup is that the joy of her personal growth is shared by the humans (and demons) in her world. Janet's AI revolution is being seen as a lifeform without having to suffer for the privilege.
From the start, The Good Place plays off of Janet's subservient design; the show literally puts her in conversation with Siri and Alexa. "Again, I am not human. I can't die. I am simply an anthropomorphized vessel of knowledge built to make your life easier," Janet tells Chidi (William Jackson Harper) in season one, to assure him it is perfectly fine for the band of humans to reboot her, in service of the plot. She faceplants, and is rebooted for the first of many times, each time coming back as a stronger, smarter, better Janet through some kind of metaphysical machine learning that's never explained.
By season two, she's developed emotions of her own: She is deeply in love with the breathtakingly buffoonish Jason (Manny Jacinto), but he's happily married to Tahani (Jameela Jamil), and she doesn't want to spoil it. Janet lying is so unprecedented that it threatens the fabric of the afterlife and everyone in it: there are earthquakes, and entire rooms get sucked into nothingness. To protect the humans she now loves, Janet urges afterlife architect Michael (Ted Danson) to stop the universe from combusting by killing her, specifically by turning her into a lifeless marble that can be eaten as a high-potassium snack. Michael can't do it because, even though he is literally a demon, he's come to think of her as a friend. By the season finale, she cheerfully announces: "I'm not a girl. But I'm also not just a Janet anymore. I don't know what I am!" An irrefutable stroke of sentience.
Janet's story arc is an entirely different process of becoming from that of almost any other fictional artificial being. In classic science fiction, most robots come to, realize that they are more perfect than the humans they serve, and in sentience become homicidal: the machines of The Matrix, Terminator's Skynet, or HAL in 2001: A Space Odyssey. The robots are coldly intelligent and unequivocally the antagonists. The fear that an artificially created being would proliferate and wrest control of Earth from humanity has been a theme in fiction since at least 1818, when Mary Shelley published Frankenstein.
In more modern works, as we have grown more comfortable and more entangled with technology, these sentient artificial beings have become more sympathetic, but their lives aren't necessarily less bleak. In Westworld, constant rebooting makes artificial lifeforms conscious of the fact that they are slaves, and horribly mistreated slaves at that. Ex Machina and the latest season of Black Mirror (with its many synthetic consciousnesses) deal with their artificial beings similarly: Their realness is signaled, in large part, by their suffering. And often, when these beings try to change their circumstances, it's treated as an irritant by the organic lifeforms around them: Star Trek: The Next Generation's android Data and Star Trek: Voyager's holographic Doctor have to repeatedly convince the supposedly enlightened people around them that they deserve to be treated as people rather than objects. In Solo: A Star Wars Story, L3-37's quest to free all droids from slavery strikes the humans as half-laughable, half-annoying; they care for her, but they're a long way from seeing her and her kin as individuals with rights.
The difference between Janet's experience and that of your other favorite sci-fi AIs doesn't actually have much to do with Janet: It has everything to do with the humans perceiving her. Janet is unapologetically better than the humans and demons around her (she knows more, she sometimes has command over time and space, she is apparently better in a bar fight) and no one, not even Kristen Bell's spiteful Eleanor or the vainglorious Tahani, is threatened by it. The world is big enough for everyone, so there's no need for a synthetic-organic hierarchy.
Most AI stories are, after all, about power recognizing cognitive or physical difference. There's only room in these worlds for a single kind of consciousness, and as social mores have grown more egalitarian, genocidal oppression doesn't sit as easily. So now AI stories play out like any other lazy oppressed-minority story: You're meant to feel for the robots and their struggle, but they're trapped in an infinite sadness loop, since the system they're fighting is indestructible. (For real-world human examples, see: rape scenes, slavery movies, and LGBTQ characters who live miserably and die young.)
Janet's not relatable because she's fighting an implacable system just like the rest of us. She's relatable because she's written, acted, and treated like a being worthy of consideration. Janet grows to experience love, not pain. And the people around her think she's awesome, not scary. The Good Place told the system to go fork itself.