How AI went from weird to boring

In 2018, a viral joke started going around the internet: scripts based on “making a bot watch 1,000 hours” of just about anything. The premise (concocted by comedian Keaton Patti) was that you could train an artificial intelligence model on vast quantities of Saw films, Hallmark specials, or Olive Garden commercials and get back a bizarre funhouse-mirror version with lines like “lasagna wings with extra Italy” or “her mouth is full of secret soup.” The scripts almost certainly weren’t actually written by a bot, but the joke conveyed a common cultural understanding: AI was weird.

Strange AI was everywhere a few years ago. AI Dungeon, a text adventure game genuinely powered by OpenAI’s GPT-2 and GPT-3, touted its ability to produce deeply imagined stories about the inner life of a chair. The first well-known AI art tools, like Google’s computer vision program Deep Dream, produced unabashedly bizarre Giger-esque nightmares. Perhaps the archetypal example was Janelle Shane’s blog AI Weirdness, where Shane trained models to create physically impossible nuclear waste warnings or sublimely inedible recipes. “Made by a bot” was shorthand for a kind of free-associative, nonsensical surrealism — both because of the models’ technical limitations and because they were more curiosities than commercial products. Lots of people had seen what “a bot” (actually or supposedly) produced. Fewer had used one. Even fewer had to worry about them in day-to-day life.

But generative AI tools have since exploded in popularity. And as they have, the cultural shorthand of “chatbot” has changed dramatically — because AI is getting boring.

“If you want to really hurt someone’s feelings in the year 2023, just call them an AI,” suggested Caroline Mimbs Nyce in The Atlantic last May. Nyce charted the rise of “AI” as a term of derision — referring to material that was “dull or uninspired, riddled with clichés and recycled ideas.” The insult would reach new heights at the start of the Republican primary cycle in August, when former New Jersey governor Chris Christie dissed rival Vivek Ramaswamy as “a guy who sounds like ChatGPT.”

And with that, “AI” — as an aesthetic or as a cultural descriptor — stopped signifying weird and became pretty much just shorthand for mediocre.

Part of the shift stems from AI tools getting dramatically better. The surrealism of early generative work was partially a byproduct of its deep limitations. Early text models, for instance, had limited memory that made it tough to maintain narrative or even grammatical continuity. That produced the trademark dream logic of systems like early AI Dungeon, where stories drifted between settings, genres, and protagonists over the span of sentences.

When director Oscar Sharp and researcher Ross Goodwin created the 2016 AI-written short film Sunspring, for instance, the bot they trained to make it couldn’t even “learn” the patterns behind proper names — resulting in characters dubbed H, H2, and C. Its dialogue is technically correct but almost Borgesian in its oddity. “You should see the boys and shut up,” H2 snaps during the film’s opening scene, in which no boys have been mentioned. “I was the one who was going to be a hundred years old.” Less than a decade later, a program like Sudowrite (built on OpenAI’s GPT-3.5 and GPT-4 models) can spit out paragraphs of text that closely imitate clichéd genre prose.

But AI has also been pushed deliberately away from intriguing strangeness and toward banal interactions that often end up wasting humans’ time and money. As companies fumble toward a profitable vision of generative artificial intelligence, AI tools are becoming big business by blossoming into the least interesting version of themselves.

AI is everywhere right now, including many places it fits poorly. Google and Microsoft are pitching it as a search engine — a tool whose core purpose is pointing users to facts and information — despite a deep-seated propensity to completely make things up. Media outlets have made some interesting attempts at leveraging AI’s strengths, but it’s most visible in low-quality spam that’s neither informative nor (intentionally) entertaining, designed purely to lure visitors into loading a few ads. AI image generators, once seen as bespoke artistic experiments, have alienated huge swathes of the creative community; they’re now overwhelmingly associated with badly executed stock art and invasive pornographic deepfakes, dubbed the digital equivalent of “a fake Chanel bag.”

And as the stakes around AI tools’ safety have risen, guardrails and training seem to be making them less receptive to creatively unorthodox uses. In early 2023, Shane posted transcripts of ChatGPT refusing to play along with scenarios like being a squirrel or creating a dystopian sci-fi technology, delivering its now-trademark “I’m sorry, but as an AI language model” short-circuit. Shane had to resort to stage-setting with what she dubbed the “AI Weirdness hack,” telling ChatGPT to imitate older versions of AI models producing funny responses for a blog about weird AI. The AI Weirdness hack has proven surprisingly adept at getting AI tools like BLOOM to shift from dull or human-replicating results to word-salad surrealism, an outcome Shane herself has found a little bit unsettling. “It is creepy to me,” she mused in one post, “that the only reason this method gets BLOOM to generate weird designs is because I spent years seeding internet training data with lists of weird AI-generated text.”

AI tools are still plenty capable of being funny, but it’s most often due to their over-the-top performance of commercialized inanity. Witness, for instance, the “I apologize but I cannot fulfill this request” table-and-chair set on Amazon, whose selling points include being “crafted with materials” and “saving you valuable and effort.” (You can pay a spammer nearly $2,000 for it, which is less amusing.) Or a sports-writing bot’s detail-free recaps of matches, complete with odd phrases like “close encounter of the athletic kind.” ChatGPT’s absurdity is situational — reliant on real people doing painfully serious work with a tool they overestimate or fundamentally misunderstand.

It’s possible we’re simply in an awkward in-between phase for creative AI use. AI models are hitting the uncanny valley between “so bad it’s good” and “good enough to be bad,” and perhaps with time we’ll see them become genuinely good, adept at remixing information in a way that feels fresh and unexpected. Maybe the schism between artists and AI developers will resolve, and we’ll see more tools that amplify human idiosyncrasy instead of offering a lowest-common-denominator replacement for it. At the very least, it’s still possible to guide AI tools into clever juxtaposition — like a biblical verse about removing a sandwich from a VCR or a hilariously overconfident evaluation of ChatGPT’s art skills. But you probably won’t want to read anything that sounds “like a bot” any time soon.
