This Article is From Oct 28, 2022

No, Artists and Designers Aren’t About to Lose Their Jobs to AI

As companies such as Facebook, Google, and OpenAI claim to offer groundbreaking advances in artificial intelligence, don’t forget to ask if it works.

Illustration: Sam Lyon for Bloomberg Businessweek

Silicon Valley has a new obsession. It's called “generative artificial intelligence,” and it refers to the idea of having computers take over creative tasks such as writing, filmmaking, and graphic design.

This may sound a little strange if you've been paying attention to the industry's forecasts about AI over the past decade. But, having wrongly predicted the demise of truck and taxi driving and human customer service, venture capitalists and big tech companies are pouring billions into software tools designed to replace certain kinds of creative work. In doing so, they're attracting awed media coverage and, because these things tend to go together, prompting a full-blown moral panic.

If you're still wasting your time trying to wrap your head around the metaverse or crypto, you're already behind the curve. Sequoia Capital, one of Sand Hill Road's most respected investors, has predicted that by 2025—that is, in only three years—computers will write better than the average human. By 2030, the company says, professional writers, artists, and video game makers will essentially be replaced by computers, generating trillions of dollars. Such excitement has to do with apparent breakthroughs at OpenAI, Google, Facebook, and other companies that have recently unveiled software that can take a text prompt and use it to create a more or less grammatical short story or an interesting-looking image or video. The most prominent of these is OpenAI's Dall-E, which has produced weird and memorable images of, for instance, an astronaut riding a horse in outer space.

I hesitate to say the AI-written stories are convincing, or that the art is good, because neither is true. Any AI text longer than a sentence tends to be incoherent. And when my colleagues in the art department attempted to use Dall-E to generate illustrations for an article about the prospect of professional illustrators using the technology to augment their work, the results, which included contorted faces and misshapen arms and rendered a “business executive” as a mime, suggested that widespread professional use is probably a good ways off.

Even so, media accounts have tended to focus on the prospect that these obviously limited tools might soon get so good that they'll threaten to put most creative professionals out of business. When an AI artist won a blue ribbon at a Colorado state fair that had a category for digitally created or manipulated art, one newspaper treated the achievement as a watershed moment and included panicked quotes suggesting the modest achievement constituted “the death of artistry.” Another publication offered similar warnings about the text-to-art tool Stable Diffusion while playing down some obvious shortcomings. Describing AI-generated porn created by the app, the publication noted “naked models sporting extra limbs and placed in physically impossible poses” but assured readers this wouldn't limit its impact, because “the quality of this output will certainly improve in the near future, bringing with it new questions about the ethics of AI-generated porn.”

As far as assumptions go, the idea that AI will solve the mystery of human sexual desire in the future is a rather big one. But leaving that aside, the belief that advances in generative AI are creating an ethical crisis is more or less conventional wisdom in tech circles at the moment. “We need to talk about how good AI is getting,” wrote columnist Kevin Roose. The impulse to debate the ethics of a poorly understood, brand-new technology developed by some of the world's largest and, at times, most ethically compromised companies is understandable. Silicon Valley has a track record of shrugging off the consequences of its software algorithms and addressing ethical failings only when it's way too late. And there are obvious problems with the current crop of AI apps, which tend to magnify existing biases and may be using copyrighted works to train their algorithms without compensating artists.

But in addition to the question Roose suggests—“Is AI dangerous?”—we should be debating his premise: Just how good are these AI systems, really? And what are they good at? On the day OpenAI unveiled its latest version of Dall-E (the one that produced the astronaut on the horse), the company's chief executive officer, Sam Altman, suggested that it represented a preview of his ultimate goal, artificial general intelligence. The term refers to computers that can essentially think for themselves and can thus replace most human tasks. “AGI is gonna be wild,” Altman tweeted. The implication was that Dall-E 2 represented an improvement in understanding—and thus a huge step toward building a computer that can think.

But some AI experts have questioned that concept and have suggested that Dall-E may be creating more of an illusion of progress rather than the real thing. “People think that it understands human language,” says Gary Marcus, an entrepreneur and retired NYU professor who writes a Substack newsletter on the limitations of AI. “In reality it's doing something on the language side that's more primitive.” Marcus, who recently co-authored a research paper about Dall-E's struggles to understand basic grammar, says that although Dall-E and similar systems will sometimes draw a picture more or less correctly, they don't seem to understand relationships between words. This inability can lead to all sorts of confusion: A request for a cat and a bunny produces two cat-bunny hybrids, or one for salmon in a river might yield salmon filets rather than living fish. “It's like you're talking to a second language learner, and they understand some of the words but don't understand how they fit together,” Marcus says.

This is not to say that generative AI services aren't cool—especially when they're used by humans who know how to craft interesting prompts for the AI to respond to. But these new services don't necessarily say much about the prospects for AI to take over more types of work. Perhaps Dall-E represents a novel form of intelligence that will pave the way for robot assistants and a world beyond human labor; on the other hand, maybe Dall-E is more akin to a next-generation version of Photoshop, the popular image-editing software. “The image synthesis is spectacular,” Marcus says. “If your task is to make stock images for a PowerPoint presentation, it's the bomb. But if you're trying to put this technology in a robot, then it's just a demo.”

A decade ago, futurists seemed convinced that self-driving cars were already as good as human drivers in most circumstances. They spun themselves (and the press) into fantasies about the end of traffic fatalities and the creation of new utopian landscapes that would allow people to catch up on sleep as they commuted from the exurbs. These predictions had enormous consequences, pushing companies to invest about $100 billion into the field and prompting policymakers and futurists to spend untold hours debating the nuances of self-driving car ethics and the potential mass unemployment that might follow from robots capable of replacing humans. The industry tended to encourage these ethical debates; after all, they were predicated on the assumption that the technology worked.

That idea seems less clear today. Automotive automation has made lots of progress. Advanced driver assistance systems are widely available and may increase safety in certain situations. But the long-promised robo-taxis are still, for the most part, nonexistent. Part of the problem, as I argued in a cover story earlier this month, is that the industry essentially fell for its own demos. You see a self-driving car navigating a single route and you assume, falsely, that it can navigate any route. Or, to take another example, you train an AI chatbot to act like a human and then mistake its answer to the question “Are you sentient?” for actual sentience.

It seems possible that the current backers of generative AI are making a similar mistake. This isn't to say that these new AI services won't be valuable; just that, like the current generation of “self-driving” cars, they'll probably still need human artists to drive them.

More stories like this are available on bloomberg.com

©2022 Bloomberg L.P.
