Why the AI industry could stand to slow down a bit

What a difference four months can make.

If you had asked me in November how I thought AI systems were progressing, I might have shrugged. Sure, by then OpenAI had released DALL-E, and I found myself captivated by the creative possibilities it offered. But overall, after years of watching the big platforms hype artificial intelligence, few products on the market seemed to live up to the grandiose visions that have been laid out for us over the years.

Then OpenAI released ChatGPT, the chatbot that captivated the world with its generative capabilities. Microsoft’s GPT-powered Bing, Anthropic’s Claude, and Google’s Bard followed in rapid succession. AI-powered tools are quickly making their way into other Microsoft products, and more are coming to Google’s.

At the same time, as we move closer to a world of ubiquitous synthetic media, some danger signs are appearing. Over the weekend, an image of Pope Francis in a gorgeous white puffer coat went viral — and I was among those fooled into believing it was real. The founder of the open-source intelligence site Bellingcat was banned from Midjourney after using it to create and distribute some eerily plausible images of Donald Trump being arrested. (The company has since disabled free trials after an influx of new sign-ups.)

Synthetic text is rapidly finding its way into the workflows of students, copywriters, and anyone else engaged in knowledge work; this week, BuzzFeed became the latest publisher to begin experimenting with AI-written posts.

At the same time, tech platforms are cutting members of their AI ethics teams. A large language model created by Meta leaked and was posted to 4chan, and soon someone figured out how to get it running on a laptop.

Elsewhere, OpenAI has released plug-ins for GPT-4, allowing the language model to access APIs and interface more directly with the internet, raising fears that it will open up unpredictable new avenues for harm. (I asked OpenAI about that directly; the company did not respond to me.)

It is against the backdrop of this maelstrom that a group of prominent technologists is now asking the makers of these tools to slow down. Here are Cade Metz and Gregory Schmidt at the New York Times:

More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that AI tools pose “major risks to society and humanity.”

AI developers are “locked in an out-of-control race to develop and deploy increasingly powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” said the letter, which the nonprofit Future of Life Institute released on Wednesday.

Others who signed the letter include Steve Wozniak, an Apple co-founder; Andrew Yang, an entrepreneur and 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.

If nothing else, the letter strikes me as a milestone in the march of existential AI dread into mainstream consciousness. Critics and academics have been warning for years about the dangers of these technologies. But as recently as last fall, few people playing with DALL-E or Midjourney were worried about “an out-of-control race to develop and deploy increasingly powerful digital minds.” And yet here we are.

There are some valuable critiques of the technologists’ letter. Emily M. Bender, a linguistics professor at the University of Washington and a frequent AI critic, called it a “hot mess,” arguing in part that this kind of doomerism ultimately benefits AI companies by making them seem far more powerful than they are. (See also Max Read on that subject.)

Embarrassingly for a group nominally concerned about AI-powered deception, a number of people initially presented as signatories to the letter turned out not to have signed it at all. And Forbes noted that the institute that organized the letter campaign is primarily funded by Musk, who has AI ambitions of his own.

There are also arguments that speed should not be our primary concern here. Last month, Ezra Klein argued that our real focus should be on these systems’ business models. The fear is that ad-supported AI systems could prove more powerful at manipulating our behavior than we currently appreciate – and that will be dangerous no matter how fast or slowly we choose to go here. “Society will have to figure out what it’s comfortable having AI do, and what AI shouldn’t be permitted to try, before it’s too late to make those decisions,” Klein wrote.

These are good and necessary critiques. And yet, whatever flaws we might identify in the open letter — and I’m applying a pretty hefty discount these days to anything Musk in particular has to say — ultimately I’m persuaded by its collective argument. The pace of change in AI feels like it could soon overtake our collective ability to process it. And the change the signatories are calling for — a brief pause in the development of language models larger than those already released — feels like a modest request in the grand scheme of things.

Tech coverage tends to focus on innovation and the immediate disruptions that flow from it. It is typically less adept at thinking through how new technologies might reshape society as a whole. And yet the potential for AI to dramatically affect the job market, the information environment, cybersecurity, and geopolitics — to name just four concerns — should give us every reason to think bigger.

Aviv Ovadya, who studies the information environment and whose work I have covered here before, served on a red team for OpenAI prior to the launch of GPT-4. Red-teaming is essentially a role-playing exercise in which participants act as adversaries to a system in order to identify its weak points. The GPT-4 red team found that the language model would do all sorts of things we would rather it didn’t, like hire an unwitting TaskRabbit worker to solve a CAPTCHA. OpenAI was then able to fix that and other issues before the model was released.

In a new piece in Wired, though, Ovadya argues that red-teaming alone is not enough. It’s not enough to know what material the model spits out, he writes; we also need to know what effect the model’s release might have on society at large. How will it affect schools, or journalism, or military operations? Ovadya proposes bringing in experts in those fields before a model is released, both to build resilience in public goods and institutions and to see whether the tool itself can be modified to prevent abuse.

Ovadya calls this process “violet teaming”:

You can see this as a kind of judo. General purpose AI systems are a massive new form of power being unleashed on the world, and that power can harm our public goods. Just as judo redirects an attacker’s power to neutralize them, violet teaming seeks to redirect the power of AI systems to defend those public goods.

In practice, performing violet teaming could involve a kind of ‘resilience incubator’: pairing experienced experts in institutions and public goods with people and organizations who can quickly develop new products using the (prerelease) AI models to help mitigate those risks.

If violet teaming is adopted by companies like OpenAI and Google, either voluntarily or at the urging of a new federal agency, it could better prepare us for how more powerful models will affect the world around us.

At best, though, violet teaming would be just one part of the regulation we need here. There are so many fundamental questions we have to work through. Should models as large as GPT-4 be able to run on laptops? Should we limit the degree to which these models can access the wider internet, as OpenAI’s plug-ins now allow? Will an existing government agency regulate these technologies, or do we need to create a new one? If so, how quickly can we do that?

I don’t think you have to have fallen for the AI hype to believe that we will need answers to these questions – if not now, then soon. It will take time for our sclerotic government to come up with them. And if the technology continues to evolve faster than the government’s ability to understand it, we will likely regret letting it accelerate.

In any case, watching the real-world effects of GPT-4 and its rivals over the coming months will help us understand how and where we need to act. But the knowledge that no larger models will be released during that time would, I think, offer comfort to those who fear that AI really could be as harmful as some believe.

If I’ve learned one lesson from covering the backlash against social media, it’s that the speed of the internet often works against us. Lies travel faster than anyone can moderate them; hate speech inspires violence more quickly than tempers can be calmed. Slowing posts down as they go viral, or annotating them with extra context, has made social networks more resilient to bad actors who would otherwise use them to do harm.

I don’t know whether AI will ultimately wreak the havoc that some alarmists are now predicting. But I believe that harm will arrive sooner if the industry keeps moving at full speed.

Delaying the release of larger language models is not a complete answer to future problems. But it could give us a chance to develop one.