My title is a line from “The Sorcerer’s Apprentice” (Der Zauberlehrling), a poem by Johann Wolfgang von Goethe written in 1797 and set to music in 1897 by French composer Paul Dukas. In 1940, only five years after Dukas’s death, Walt Disney and Leopold Stokowski collaborated on a short animated film starring Mickey Mouse as the mischievous apprentice who, weary of toting water buckets through his master’s underground workshop, waits until the master has retired for the night, then dons his magical cap and enchants a humble broom to do the job.

If you don’t know what happens next, you are in for a treat. Just Google the title and watch all nine minutes and 18 seconds of this little gem. Then ask yourself whether the combined powers of the currently top-ranked artificial intelligence systems—ChatGPT for text, DALL-E for images, Mubert for music—could produce anything as good.

If you’re not sure, you could always ask ChatGPT. And the reply, delivered in seconds, will read something like this: “I am sorry, but as an A.I. language model, I do not have the ability to provide opinions on the value of any particular work of art. My primary function is to provide information and answer questions to the best of my ability based on my training and knowledge. If you have any other questions, I would be happy to try and answer them for you.”

In a human being, such a reply might suggest a prudent withholding of opinion based on extenuating circumstances. But in a chatbot, this reply suggests no such thing. It is simply a warning label, indicating that, though this machine can perform quantitative calculations exponentially faster than any previous computer (not to mention any human brain), it cannot make judgments of the kind that arise from the consciousness, and conscience, of a living person.

Such topics are usually not pertinent to labor relations. But at the time of this writing, the U.S. entertainment industry is paralyzed by a long and bitter strike by the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) against the Alliance of Motion Picture and Television Producers (AMPTP). And though it is hard to sympathize with certain celebrity “workers” whose salaries resemble those of their “bosses,” the majority of strikers are deeply concerned about the present and potential impacts of A.I. on their livelihoods. They have a compelling case.

Bread and Butter

According to the Motion Picture Association, the U.S. film and TV industry employs 2.4 million people, not just in Hollywood but “in every state, and across a diversity of skills and trades…from special effects technicians to makeup artists to writers to set builders to ticket takers and more,” paying “over $186 billion in wages annually,” and another “$21 billion per year to more than 260,000 businesses in cities and small towns across the country.” This industry, with its history of powerful trade unions, has been subject to a slow-motion takeover by a very different industry: Big Tech. Indeed, the broad swath of West Los Angeles that has historically been home to Hollywood’s “creative community” is now called “Silicon Beach” because it hosts over 500 tech companies.

And so, a key bread-and-butter issue in the current strike is the disappearance of long-term, well-paid jobs for professional writers and actors, and their replacement by short-term gigs for “precarians” (proletarians in a precarious economy). The demands of both unions rest partly on feature film work, but because that work has not been steady since the days of the powerful studios, they rest mainly on network TV, where a typical series runs from October to May and provides its employees with a decent salary, plus benefits and “residuals,” or payments for subsequent broadcasts, either as reruns on the same network or in syndication.

Not every network show does well enough to pay residuals, but the employees on a given show can easily measure its success by checking the Nielsen ratings. Founded 100 years ago, the Nielsen company has long been an essential part of commercial broadcasting (both radio and TV). Its public face is the weekly ratings, but its most valued product is the detailed demographic data it collects by monitoring thousands of U.S. households (with their permission). The data is then sold to advertisers and networks as a basis for positioning and pricing commercials, and the process comes full circle when the networks use the revenue to support programming.

Referred to as “linear TV,” this time-tested system still exists. But its linear aspect has long been complicated by changes in technology and viewing habits. Very briefly, these include the home videocassette recorder (VCR) in the 1970s, which allowed people to “time-shift” shows from their scheduled slots, and video on demand (VOD) and digital video recording (DVR) as developed by cable systems during the 1980s and 1990s.

Then came a bigger disruption: in 2007 an upstart company called Netflix, which had been renting DVDs through the mail, began to “stream” content directly to subscribers, who could receive it either on their computers or (with the proper connection) on their TVs. Because Netflix makes most of its money from subscriptions, not conventional advertising, it has no need of Nielsen to provide third-party data on how many people are watching its shows. Attracted by this business model, a slew of companies—Amazon, Disney, HBO, Apple, and others—have started their own subscriber-based streaming platforms.

One of the loudest complaints on the WGA and SAG-AFTRA picket lines is that writers and actors cannot bargain with the streamers, because Netflix and its rivals designate their in-house audience data as proprietary and refuse to make it public. To professionals accustomed to leveraging high ratings into residuals, it is galling to know that the upfront payment for a hit series like Netflix’s Stranger Things is all they will get.

This complaint is compelling but a bit out of date, because the older business model of advertising based on third-party data is making a comeback. In 2019 the American Marketing Association reported that Stranger Things was featuring “brand placements” worth $15 million, with 45 products displayed onscreen and 14 brands “called out” in the dialogue. (Special kudos were given to Smirnoff, “which was mentioned four times.”) To be sure, this “virtual product placement” differs from conventional ads. But as noted by another marketing analyst, the public prefers it because “it doesn’t require interrupting what consumers are watching.” And the streamers love it because it “generates tons of revenue” and “can’t be skipped.”

Then there’s Nielsen, which in 2020 introduced a version of its classic weekly ratings for shows on the streaming platforms. This was followed by a new system of metrics called The Gauge, which does things like compare the percentage of Americans watching streaming programs on their TV sets with the percentage watching broadcast and cable. (To no one’s surprise, the former is growing rapidly.) The Gauge also ranks the market shares of the major streamers, with Netflix usually on top—which may have moved its then-CEO Reed Hastings to characterize the Nielsen researchers as “thoughtful people” who have “been doing this for a long time” and have an “incentive to be accurate.”

Ad revenue, ratings, percentages, market shares—these fungible matters are once again public knowledge, thanks to Nielsen and new companies like Parrot Analytics. So to optimistic observers of the strike, the two sides may soon be ready to reach an agreement. In this spirit, the Hollywood Reporter recently hinted that terms and conditions are already being floated in the upscale back-channels of Silicon Beach.

The Automation of Culture

But like the smoke from a California wildfire, the rapid progress of A.I. casts a pall over this hope. Already, certain routine production tasks are being transferred to A.I. systems, and the same prospect looms for mid-level jobs and above. Just to cite one example, SAG-AFTRA is worried about the ability of A.I. to train itself on the existing visual record of an actor’s face, body, and voice, and then generate any number of deepfake performances without the actor’s knowledge or consent.

Such fears give rise to a dystopian vision of cascading layoffs and closures resulting in Los Angeles becoming as decrepit as Rust Belt cities like Gary, Indiana. Arresting or controlling this process is perhaps the most urgent item on the strikers’ agenda—and the response of their employers is not encouraging.

Yves Bergquist is not an employer, but as the director of the A.I. & Neuroscience in Media Project at the University of Southern California’s Entertainment Technology Center, he gives every sign of sharing the same gung-ho attitude toward A.I. as the major studios that fund the center and place their tech executives on its board. For example, in a recent interview with the insider website TheWrap, he gushed with enthusiasm about a new A.I. tool that can scan “rushes,” or unedited footage, for details such as “shot types,” “objects,” and “colors.”

Bergquist also claims that his tool can identify and assess more nebulous elements, such as “emotional arcs of the characters,” “scenes,” and “talent.” How valuable those assessments are, he did not say. But he did make one thing clear: “There will be impact in a lot of jobs that are very menial, that don’t involve super-high technical knowledge or super-high creative ability.” When asked what will happen to the clueless, talentless grunts currently doing this work, he replied airily, “People are just going to need to educate themselves and ramp up on how A.I. can help them.”

Hollywood is a pretty cutthroat place, so in normal times this callous statement might not be alarming to the “super-high” individuals who write, act, direct, design sets and costumes, compose music, and perform other skilled tasks crucial to film and TV production. But a constant theme on the picket line and in the press is that these jobs, too, are endangered, because cutting-edge A.I. is now creating artistic products that pass the Hollywood equivalent of the Turing Test.

The list of such products is short, and oft-repeated in the media. I decided to check out a couple. First, I read a 126-word A.I.-generated treatment for a movie featuring the comedy duo Cheech & Chong and Freddy Krueger, the maniacal killer from the 1984 horror film A Nightmare on Elm Street. It did a fair job of including clichés from both. Then, I listened to a two-minute, 19-second song posted by an anonymous TikTok user that combined the cloned voices of two Canadian musicians, Drake and The Weeknd [sic], with a simple beat, a snippet of melody, and a nondescript lyric. It went viral on TikTok, Spotify, and YouTube before being tagged as a fake by Universal Music Group.

It is true that these A.I. products resemble much of what passes for popular culture these days. But should that be the standard? Hollywood has been going through a rough patch lately, for reasons that include the pandemic’s near-fatal impact on already sagging theater attendance, the younger generation’s rejection of traditional narrative in favor of hypnotic scrolling through social media, and China’s politically motivated protectionism. In response, companies like Disney and the major networks, which have vaults full of proprietary content, have been rummaging through them for forgotten shows, worn-out formulas, and whatever detritus can be dusted off, repackaged, and fed into the bottomless maw of demand for streaming content. In this situation, having access to an unlimited supply of A.I.-generated mashups must seem a godsend.

An Almost Religious Mythology

There is yet another pall hanging over this landscape: the fear that A.I. is already out of control, like the hundreds of enchanted brooms that spring up from the splinters of the first when it is hacked to pieces by the panicky sorcerer’s apprentice. In the Disney version of the tale, the sorcerer returns just as the waters are flooding his workshop, and with a few grand gestures makes them disappear. The apprentice looks ever so sorry, but the hint of mischief never leaves his face. He is Mickey Mouse, after all.

This may be a stretch, but a similar hint of mischief can also be found in Sam Altman, the 38-year-old CEO of OpenAI, the company that created ChatGPT. In a recent profile in The Atlantic, he seems coolly detached, toggling between dark forebodings of global extinction and bright visions of a “new kind of society” awaiting us “on the other side.” Despite claiming to think really, really hard about these prospects, he also echoes Robert Oppenheimer’s quip about the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

When founded in 2015, OpenAI was a low-profile research outfit dedicated to advancing “digital intelligence in the way that is most likely to benefit humanity,” writes Ross Andersen, the author of the Atlantic profile. Today it is a high-profile corporation whose “for-profit arm…comprises more than 99 percent of [its] head count.” Among other investments, it has attracted $13 billion from Microsoft—and is planning to cap the returns on that investment at 100 times the original stake. If Goethe’s sorcerer had had that much skin in the game, he might have let Mickey’s brooms destroy the world.

It is worth noting that none of the existing A.I. systems has the capacity to destroy the world. They are all “narrow,” meaning trained on one class of data—text, images, music, video—to generate one type of product. But because they are trained on oceans of material posted online, they also generate oceans of lies, make oceans of mistakes, and spew oceans of obscenity and abuse. Like their predecessors in social media, A.I. companies do not have algorithms capable of filtering the filth out of this vast output, so they hire human “content moderators” in faraway countries like Nigeria and Kenya to do so. In both industries, this work involves so much exposure to raw human depravity that it poses risks to the sanity of the workers.

In short, A.I. has no conscience. And though its capacities keep expanding, none of its advocates ever suggests that it will one day develop one. To the contrary, the amorality of A.I. has long been recognized by its creators, albeit indirectly in the form of vague promises to build “guardrails” into the systems to keep them “aligned with human values”—whatever that means.

The amorality of A.I. is also recognized in the existential fears expressed by a growing number of experts. In March, a petition called “Pause Giant A.I. Experiments: An Open Letter” was published, urging the major developers to halt their “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” This petition has gathered over 50,000 signatures, and many prominent voices have echoed its warning.

But if A.I. of this narrow kind already exists, what exactly is a pause meant to forestall? Here we encounter the specter of “General A.I.,” a bland-sounding term for what futurists and science fiction writers have been predicting since the birth of the computer. A catchier term is “The Singularity” (the “The” is a must), defined here by Ray Kurzweil, the man who put the term, and the concept, on the map:

Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.

If this scenario sounds familiar, it is because The Singularity has captured the imagination of Hollywood. Jaron Lanier, the rotund, sandy-dreadlocked Socrates of Silicon Valley, points to the powerful influence of movie franchises like The Terminator, The Matrix, and Star Trek (especially the character of Commander Data) in making The Singularity “an almost religious mythology in tech culture.” Speaking from personal experience, Lanier adds that “it’s only natural that computer scientists” would dream of being present at the birth of The Singularity.

That dream has deep roots, and so do the nightmares that accompany it, because we human beings have always wondered what distinguishes our species from the rest of nature. Before The Terminator there was the 1921 play R.U.R.: Rossum’s Universal Robots, by the Czech writer Karel Čapek, a devout Catholic who introduced the term “robot” to describe the manufactured slaves who become conscious enough to rebel against their masters but, lacking conscience, do not stop until they have wiped out nearly all of humanity.

“Robot” comes from “robota,” which is Czech for serf labor, a practice that was abolished in 1848 but lived on in the memory of Čapek’s Prague audience. Dig deeper and you will find another figure, the “golem” of Jewish folklore: a man-shaped being made of mud or clay who can never be human because he is not created by God. Mentioned once in the Hebrew Bible, the golem was passed down through the millennia and received his most famous embodiment in a 19th-century tale about a 16th-century rabbi in Prague, who fashioned a golem from the sediment of the Vltava River to defend the city’s Jews against their enemies.

The Prague golem was well known to Čapek and may have influenced the ending of his play, which follows an old man, the sole survivor of the robot rampage, who is plunged in despair until he happens upon a pair of robots who have miraculously become human. How can he tell? By observing in them certain attributes that mark “the divine significance” of humanity: empathy, curiosity, wonder, laughter, self-sacrifice, and love. These attributes depend on consciousness but go beyond it, to the realm of moral inwardness we call conscience.

For Čapek, these attributes are divine. But they can also be viewed in a non-religious way that does not limit itself to what the philosopher Thomas Nagel calls the “reductive materialism” of science. For Nagel, who is not a religious man, the abiding mystery of consciousness and conscience is their radical separateness from the physical realm. “It is too easy,” he writes, “to forget how radical is the difference between the subjective and the objective, and to fall into the error of thinking about the mental in terms taken from our ideas of physical events and processes.”

That error is fundamental to the tech industry’s all-out effort to conjure The Singularity by building the electronic equivalent of a human brain. If consciousness can arise from one physical object, it is argued, then why not make it possible for it to arise from another? In this view, all it will take to achieve The Singularity is a sufficient number of electronic “neurons” joined with a sufficient degree of connectivity, and—Eureka!—consciousness will appear.

The obvious problem with this reasoning is that it begs the question, so well explored by Nagel and other non-reductionist thinkers, of the relationship between the physical brain and the mind through which we experience the world. But putting that aside, let us consider the challenges facing the would-be sorcerers of The Singularity.

First is the stubborn fact that the human brain is far and away the most complex object in the known universe. It contains a hundred billion neurons, give or take, each one linked by threadlike extensions, its axons and dendrites, to roughly ten thousand other neurons. These connections are called synapses, and their number is estimated to be in the neighborhood of a quadrillion (ten to the 15th power). The stars in the Milky Way galaxy number a comparatively modest ten to the 11th power, four orders of magnitude fewer.
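For readers who like to see the arithmetic spelled out, here is a rough back-of-the-envelope sketch in Python, using only the approximate figures above (orders of magnitude, not precise neuroscience):

```python
# Back-of-the-envelope sketch using the approximate figures quoted above;
# these are rough orders of magnitude, not precise neuroscience.
neurons = 1e11           # roughly a hundred billion neurons in a human brain
connections_per = 1e4    # each connected to roughly ten thousand other neurons

synapses = neurons * connections_per    # about 1e15, i.e. a quadrillion
milky_way_stars = 1e11                  # rough star count for the Milky Way

print(f"Estimated synapses: {synapses:.0e}")                            # ~1e+15
print(f"Synapses per Milky Way star: {synapses / milky_way_stars:.0e}") # ~1e+04
```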

Second, even if a computer with this massive connectivity could be built, it is unlikely it could be made to run continuously, day and night, on the twelve watts of power it takes to run my brain (and yours). A much simpler machine, my desktop computer, requires 175 watts. Not only that, but our brains are fueled by the ultimate green energy: food. “The marvel of animal physiology” is how my friend Kris Brewer, the director of technology for MIT’s Center for Brains, Minds, and Machines, describes it.

But let us imagine that the super-high technical knowledge crowd succeeds in building an ultra-efficient quantum computer capable of sparking an ultra-high level of intelligence that expands outward into the universe at the speed of light. If, as predicted, this 21st-century golem is conscious but lacking a conscience, then the advice of my quadrillion synapses is not to plug it in.