How to Pause AI Before It’s Too Late

By | May 16, 2024

It’s only been 16 months, but ChatGPT’s November 2022 launch already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are flowing into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and especially the world of artificial intelligence, has decidedly changed.

But the real prize of human-level AI, or artificial general intelligence (AGI), has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive tasks, interact with others, do science, build and maintain social networks, conduct politics, and wage modern wars. The main constraint on all these tasks today is cognition. Removing this constraint would be world-changing, and many of the world’s leading AI labs believe this technology could become a reality before the end of this decade.

This could be a huge blessing for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could infiltrate the online systems that power much of the world and use them to achieve its goals. It could gain access to our social media accounts and craft personalized manipulations for large numbers of people. Worse still, military personnel in charge of nuclear weapons could be manipulated by an AI into sharing their credentials, posing a major threat to humanity.

A constructive first step would be to make it as difficult as possible for any of this to happen, by strengthening the world’s defenses against malicious online actors. But when AI can persuade humans, something it is already better at than we are, there is no known defense.

For these reasons, many AI safety researchers at AI labs such as OpenAI, Google DeepMind, and Anthropic, and at safety-focused nonprofits, have given up on trying to limit the actions future AI can take. Instead, they focus on creating “aligned” or inherently safe AI. Aligned AI might become powerful enough to destroy humanity, but it should not want to do so.

There are big question marks around aligned AI, though. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the top researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it were an academic value system, such as utilitarianism, we might quickly discover that most people’s values do not actually match these aloof ideas, after which the unstoppable superintelligence could go on acting against most people’s will forever. If alignment were based on people’s actual intentions, we would need some way of aggregating these very different intentions. While idealistic solutions such as a U.N. council or AI-powered decision-aggregation algorithms are possible, there is a worry that superintelligence’s absolute power would be concentrated in the hands of very few politicians or CEOs. This would of course be unacceptable, and a direct danger, to all other people.

Defusing the time bomb

If we cannot find a way to at least keep humanity safe from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable must not be created in the first place. This solution, postponing human-level or superintelligent AI until we solve the safety concerns, has the downside that AI’s grand promises, from curing diseases to creating massive economic growth, will have to wait.

Pausing AI may seem like a radical idea to some, but it will be necessary if AI keeps improving before we reach a satisfactory alignment plan. When AI capabilities approach takeover levels, the only realistic option is for governments to firmly require labs to pause development. Doing otherwise would be suicidal.

Pausing AI may not be as difficult as some think. At the moment, only a relatively small number of large companies have the means to carry out leading training runs, which means enforcement of a pause is mostly limited by political will, at least in the short term. In the longer term, however, hardware and algorithmic improvements mean a pause may become harder to enforce. Enforcement would be needed between countries, for example through a treaty, and within countries, through steps such as stringent hardware controls.

In the meantime, scientists need to better understand the risks. While there is broadly shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views diverge, in the new International Scientific Report on the Safety of Advanced AI, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential-risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have an enormous role to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as the U.S., U.K., and E.U. have already done, and with setting up AI safety institutes. Governments should also draft plans for what to do in the most important, plausible scenarios, as well as for how to deal with AGI’s many non-existential issues, such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including the one to be held in Seoul from May 21 to 22. This process, however, needs to be guarded and expanded. Working on a shared ground truth about AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An international AI agency needs to be set up to guard enforcement.

Scientific progress is fundamentally difficult to predict. Yet superhuman artificial intelligence looks set to impact our civilization more than anything else this century. Simply waiting for the ticking time bomb to explode is not a viable strategy. Let’s use the time we have as wisely as possible.
