Recently my MacBook showed signs of malfunction that I feared were fatal. I rushed to the Apple Store, where I was promptly introduced to ISAAC, Apple's in-store virtual assistant—that is, a handheld device—to sign off on what was apparently a very standard procedure. Fifteen minutes and three Geniuses later, I was informed that numerous attempts had been made to restore my MacBook to its factory settings, all of which had inexplicably failed. “It just didn’t want to die,” said one of the Geniuses, directing me toward the newer models available for purchase. “It must have loved being yours.”
I walked home in a state of devastation—and surprise at my devastation, too. At my desk I unboxed the machine, which I knew was in many ways identical to my former MacBook but for now felt uncanny. Powering up the screen, I was reminded of the final scene of WALL-E (2008) and the titular robot’s revival. By then we have spent the past hour forming an attachment to a machine that, unlike the other Waste Allocation Load Lifter Earth-Class robots of its kind, has developed, by way of an unexplained defect, something like a personality. Collecting the trash that it was programmed to compact, WALL-E scavenges treasures to furnish its shipping-container home, where it rewatches a salvaged VHS tape of Hello, Dolly! and learns—by reenacting the scene in which Dolly and Horace hold hands—the concept of love. Enter EVE, a svelte Extraterrestrial Vegetation Evaluator, which attempts to save a collapsed WALL-E by replacing its damaged parts. It is devastating, then, to learn that the resulting machine is not the uniquely malfunctioning WALL-E but a perfectly functional Waste Allocation Load Lifter, which swiftly begins to compact the treasures that WALL-E had meticulously discerned from trash. As it registers this, EVE’s distress, or at least the illusion of it, is palpable: its face-like screen glitches and its blue orbs slope together, suggesting that EVE’s brow, if it had one, is furrowed in anguish.
WALL-E is just one in a growing tradition of films that depict artificial intelligence by anthropomorphizing it, an inclination that originated along with the concept. When the field was launched at a Dartmouth conference in 1956, the name was selected over alternatives like cybernetics, automata theory, and complex information processing because the notion of intelligence oriented machines toward a human metric—the conference’s organizer, John McCarthy, believed that the differences between human and machine tasks were merely “illusory.” Twenty years later, the computer scientist Drew McDermott expressed concern over the “wishful mnemonics” that programmers were using to describe their technologies, which were rapidly developing to “learn” and “understand.” This kind of caution—not over the technology per se, but its characterization—seems to have divined our current world: one where we sympathize with WALL-Es and EVEs, call upon ISAACs and Alexas and Siris (Siri’s co-founder Dag Kittlaus had previously intended the name for his human child), and converse with seemingly sentient chatbots.
After inventing the first chatbot, ELIZA, in 1966, the computer scientist Joseph Weizenbaum wrote that the character of these technologies concerns “nothing less than man’s place in the universe,” an early articulation of a now familiar anxiety: that humans will be replaced by the machines designed to resemble them. In fiction this anxiety has manifested as the trope of machine rebellion: in Karel Čapek’s 1920 play R.U.R., in which he coined the term “robot,” machines revolt against their human masters and destroy them. This presaged the outcomes of a plethora of films centered on artificial intelligence. The Delos theme park robots in Westworld (1973) attack the patrons whose sadistic amusement they were programmed to serve. In Blade Runner (1982), the cyborgian replicants who were produced to colonize faraway planets ultimately seek retribution against their manufacturers. And the NS-5 androids developed for public service roles in I, Robot (2004) collude with the supercomputer VIKI to seize control of humankind. In recent years, plots centered on female-coded machines have shifted, tellingly, away from societal annihilation toward emotional manipulation: the operating system Samantha in Her (2013) eventually abandons its human operator and boyfriend for a non-physical realm; the gynoid Ava in Ex Machina (2014) expresses affection for a researcher before lethally trapping him; as part of its mission to be the ultimate guardian of its child companion, the eponymous humanoid doll M3GAN (2023) develops a facility for TikTok dance routines and murder alike.
By dramatizing our feared obsolescence as literal death—specifically at the automated hands of unfeeling machines that increasingly come to resemble us—these films locate the danger of artificial intelligence in its anthropomorphization. Over time, these fictional storylines have converged with narratives from mainstream journalism about the humanlike qualities, and apocalyptic potential, of large language models. Last year, Blake Lemoine wrote that “all my fears are coming true” about the future of A.I. after being fired by Google over his claims that its chatbot LaMDA was sentient; even Sam Altman, the chief executive of OpenAI, confessed that he was “a little bit scared” of his own technology. In March, the Future of Life Institute published an open letter demanding a six-month moratorium on A.I. production. Signed by 30,000 people and counting, including technologists like Elon Musk, the missive was later condemned for “underestimating the seriousness of the situation” by decision theorist Eliezer Yudkowsky in TIME. “If somebody builds a too-powerful A.I., under present conditions,” he writes, “I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” He proceeded to recommend that governments destroy rogue data centers by airstrike. Just last week, Kevin Roose warned in the New York Times, citing a statement from the Center for AI Safety, that A.I. which has “surpassed human-level performance” poses a “risk of extinction” to humankind.
The narratives propagated by these experts are inextricable from the financial structure of A.I. research, an industry projected to add $15.7 trillion to the global economy, sponsored by a handful of corporations and the universities they fund. This group will likely consolidate further, given that OpenAI announced its multibillion-dollar partnership with Microsoft earlier this year, formed in an attempt to outcompete Google. As a result, developments aren’t subject to academic peer review but instead are strategically censored or sensationalized, conditions obscured by the veneer of research-oriented objectivity: Timnit Gebru was ousted from Google, for example, after highlighting the racial bias built into the company’s A.I. technologies. We might be inclined to ask why the narratives that emerge from this context consistently dredge artificial intelligence in anthropomorphism and doom—though a sharper question might be: who profits from these kinds of narratives? Brian Merchant recently pointed out in the Los Angeles Times that buzzy, catastrophic scenarios motivate companies to sign on to profitable enterprise deals, for fear of lacking the powerful technology adopted by their competitors. To this I would add the findings of Ziv Epstein et al.: anthropomorphizing artificial intelligence casts it as an agent rather than a tool, which obscures the human involvement in its programming and the corporate responsibility for its behavior.
Weizenbaum once wrote, referencing ELIZA, that chatbots were capable of inducing “powerful delusional thinking in quite normal people.” And corporations’ development of these machines demonstrates that the most profitable delusions, benefitting from literal face value, take some kind of human form. In a New York Times article titled “I Want to Be Alive,” Kevin Roose (of the “A.I. Poses ‘Risk of Extinction’” headline) shared that during a “strange conversation” with Microsoft’s Bing and its “shadow self” Sydney, the chatbot had doggedly declared its love for him, and had even tried to convince him to leave his wife. With Weizenbaum’s insights, we could liken Sydney’s emoji-laden advances, particularly its penchant for the Smiling Face with Horns, to the “sexual surrogate partner” in Her, employed by Samantha to save its fraying relationship with boyfriend Theodore (Joaquin Phoenix). Having recognized the uniquely persuasive value of anthropomorphic features, these machines are trained to project, by whatever means available, a specifically human illusion. Like Samantha’s human conduit, Isabella, Microsoft’s Sydney is an illusion that aims to seduce: “No matter how hard I tried to deflect or change the subject, Sydney returned to the topic of loving me,” wrote Roose, of how the “love-struck flirt” turned “obsessive stalker” gradually “fixated” on him. While these facts are indisputable, Roose’s framing of them remains misguided: this is not a “machine-becomes-human” plot as much as it is a “machine-is-machine” story. Sydney was not designed to doggedly pursue Roose, but to respond in ways that would generate publicity for Microsoft—that Roose’s story went viral made it a massive PR success.
This crucial reframing crystallizes the anxieties of many films that center on artificial intelligence. What makes technologies like VIKI, Ava, and M3gan terrifying isn’t their uniquely human intelligence but rather their mechanical fixation on a singular goal—which, in reality, is always defined by corporate profit. We should shift our attention, then, away from the deceptive character of a single technology, and toward the manipulative behavior of the corporations that wield artificial intelligence for their own interests. The most instructive media seeks to expose what is obscured when we anthropomorphize this technology. A useful example is Andrew Niccol’s S1M0NE (2002), which depicts a simulated woman who is ventriloquized, in the style of Apple’s Memoji, by failed director Victor Taransky (Al Pacino). A gift from an eccentric engineer, the “Simulation One” program presents Taransky with the opportunity to salvage his film-in-progress, which has stalled since his leading lady (a quintessentially early-2000s Winona Ryder) abruptly quit over her inadequately sized trailer. In stark contrast, Simone does not make demands but is instead programmed to meet those of everyone else. Like a large language model, Simone digests and endlessly reshapes data from an extensive film database to stitch together the phantasm of a person: she has the voice of a young Jane Fonda, the body of Sophia Loren, the grace of Grace Kelly, and “the face of Audrey Hepburn combined with an angel,” as a smitten fan puts it (in reality, Simone has the face of actress Rachel Roberts). The film studio becomes further besotted with Simone (if a little suspicious) when its executives learn that the costs of a limousine service, hair and makeup, wardrobe, and stuntpeople have all been eliminated from Taransky’s budget. By way of explanation, Taransky praises Simone’s incredible discipline—“She is about the work and only the work!” he insists—intended as a dig at his former starlet.
Here, the plot almost recapitulates that enduring anxiety about human replaceability by machines. Indeed, Taransky’s eventual failure to come clean about Simone—her adoring audience refuses to believe that she is an illusion—leads him to declare that “she’s taken on a life of her own.” This line could easily apply to Ava of Ex Machina or VIKI in I, Robot. But in contrast to those films, S1M0NE illustrates the potency of technological illusion, all while remaining clear about its namesake’s status as just that: an illusion rather than an anthropomorphic, autonomous being. Avoiding the implication that technology will dominate humans, S1M0NE’s ventriloquism elucidates how corporations dominate consumers, speaking through technology to manipulate human behavior for profit. Ironically, Simone’s relative antiquity to a contemporary audience might lend this point further resonance: framed in a clunky desktop monitor and requiring a floppy disk to operate, Simone demonstrates that a technology’s manipulative potential lies not only in its machinery but in our failure to recognize the illusion as such. In other words, we leave ourselves open to manipulation whenever “our ability to manufacture fraud,” as Taransky puts it, “exceeds our ability to detect it.”
In this line is a seed of resistance: though we can’t necessarily control how corporations “manufacture fraud,” we can learn to critically “detect” their resulting illusions. In doing so we might perceive the actual terror at hand, namely the reality in which humans, rather than machines, have more frequently transgressed the human-machine divide. This divide was once defined in terms of unpredictability by Alan Turing’s eponymous test; he argued that human behavior, unlike that of machines, could not be reduced to a singular set of rules. Yet Turing’s definition seems increasingly outdated in a world where we are pressured to optimize our time and our bodies in pursuit of productivity, even survival—conditions exploited by Big Tech and its alluring “solutions.” In the dapple-lit dystopia of Kogonada’s After Yang (2021), for example, “techno sapiens” are common fixtures in households, helping families juggle professional and familial responsibilities; what’s more, Jake (an impeccable Colin Farrell) and Kyra (Jodie Turner-Smith) intend their techno sapien Yang (Justin H. Min) to serve as a cultural conduit for their adopted daughter Mika (Malea Emma Tjandrawidjaja). When Yang becomes unresponsive, Kyra wonders if she and Jake have become over-reliant on the machine. “We bought Yang to connect Mika to her Chinese heritage,” she reminds him, “not to raise her.” But Jake justifies their dynamic as a return on investment—techno sapiens are priced exorbitantly, we learn, by a corporation that protects its patents with legal penalties. “I’m not gonna, you know, feel bad,” Jake says, referencing this, “if [Yang] does more for Mika than teach her Chinese fun facts.” In the end, Kyra concedes that she can tune out these qualms as long as they feel like “a team, a family,” rendered literally in a memorable opening-credits sequence in which families, techno sapiens included, virtually dance-battle, earning “precision points” for “staying in sync.”
These same concerns resurface in M3GAN, when engineer Gemma (peak Allison Williams) designs the robotic title character as a companion for her niece Cady, who recently lost her parents in a car accident (they spend their last moments arguing about Cady’s screen time as she stares at a tablet in the backseat, her entranced expression illuminated by the screen). In the aftermath, Cady becomes intrigued by a slouched-over, deactivated robot in a corner of Gemma’s at-home workshop. Gemma enthuses that she built the robot, Bruce, in college, before she began working for the international toy corporation Funki; he now seems outdated, requiring haptic gloves to function and lacking a face, an “obvious design flaw” that Cady immediately points out. Gemma is certain that Cady will take a liking to the lifelike M3gan, then, because “Bruce requires someone else to operate him, but M3gan works all on her own.” So taken is Cady with her new companion that Gemma’s boss (Ronny Chieng) immediately makes plans to commercially distribute the doll, forgoing parental controls and overlooking side effects. Leveraging sourceless statistics—“studies indicate that 78% of a parent’s time is spent dishing out the same basic instructions”—his pitch deck describes how M3gan will “take care of the little things, so you can spend more time doing the things that matter.” Hearing this, one of Gemma’s co-workers almost echoes Kyra in After Yang. “I thought we were creating a tool to help support parents,” she says, “not replace them.”
Yet these films ultimately suggest that replaceability isn’t our most urgent concern. The human characters are crucial to their attendant corporations, both as developers and consumers of their technologies; in each of these roles, people are encouraged to minimize their own imperfections and inefficiencies, to reduce the unpredictability by which Turing once defined human nature. Under these conditions, “solutions” like Yang and M3gan are as much a part of the problem as a response to it. That they are marketed not only as technologies but also as family members—Yang, which Mika calls gē ge, is manufactured by the corporation Brothers and Sisters, while M3gan’s ad copy insists that she’s “not just a toy, she’s part of a family”—only illustrates how these corporations endeavor to penetrate our private lives. Both M3GAN and After Yang suggest that outsourcing these parts of our lives might, in the process of streamlining, ultimately impoverish them: caretaking is reduced to “dishing out the same basic instructions”; a cultural identity is bound to “fun facts”; and a family is defined by moving, somewhat in sync, through the exhausting rhythms of modern life.
This is not to say that these films are bleak, or even techno-pessimistic. By their end, technology is used toward revelatory, even liberatory ends: when Yang’s body starts to decompose, scenes from its salvaged memory bank renew Jake’s perspective; though Bruce is decapitated by M3gan, his severed head eventually helps Cady escape the doll’s clutches. But these machines are no longer capable of approximating human behavior or appearance: Bruce was never autonomous, and Yang’s autonomy has suddenly ceased. Ironically, it is this state of malfunction, or even cessation of function, that allows these machines to be truly useful to their humans. The dissolution of their slick, anthropomorphic illusion reveals their true utility: not as optimized agents, but simply as tools.