Carving AI controls into silicon could keep doomsday at bay

Even the smartest, most cunning artificial intelligence algorithms will likely have to obey the rules of silicon: their capabilities will be constrained by the hardware they run on.

Some researchers are exploring ways to exploit that connection to limit the ability of AI systems to cause harm. The idea is to encode the rules governing the training and deployment of advanced algorithms directly into the computer chips needed to run them.

In theory – at a time of intense debate about how dangerous powerful AI could become – this could provide a potent new way to prevent rogue countries or irresponsible companies from secretly developing dangerous AI, and one that is harder to evade than conventional laws or treaties. A report published earlier this month by the Center for a New American Security (CNAS), an influential US foreign policy think tank, outlines how carefully restricted silicon could be harnessed to enforce a range of AI controls.

Some chips already contain trusted components designed to keep sensitive data secure or protected from misuse. For example, the latest iPhones keep a person's biometric information in a “secure enclave.” Google uses a custom chip in its cloud servers to ensure that nothing is tampered with.

The paper suggests using similar features built into GPUs, or adding new ones to future chips, to prevent AI projects from accessing more than a certain amount of computing power without a license. Because the most powerful AI algorithms, like those behind ChatGPT, require enormous computing power to train, this would limit who can build the most powerful systems.

CNAS says licenses could be issued by a government or an international regulator and refreshed periodically, making it possible to cut off access to AI training by refusing a new one. “You could design protocols such that you can only deploy a model if you've run a particular assessment and gotten a score above a certain threshold, say for safety,” says Tim Feist, a CNAS fellow and one of the paper's three authors.
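The report describes the scheme at the level of policy rather than code, but the gist is simple to illustrate. Below is a minimal, hypothetical sketch in Python, assuming an on-chip component (here just a plain class) that refuses a training run unless it holds a regulator-signed license whose compute budget, expiry date, and required safety-evaluation score are all satisfied. The field names, the HMAC-based signature, and the numbers are invented for illustration; they are not drawn from the CNAS paper or any real chip.

```python
# Hypothetical sketch of license-gated compute, not a real hardware interface.
import hashlib
import hmac
import json
import time

REGULATOR_KEY = b"demo-shared-secret"  # stand-in for a real signing key


def sign_license(payload: dict) -> dict:
    """Regulator side: attach an HMAC 'signature' to the license terms."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()}


def verify_license(license_: dict) -> bool:
    """Chip side: check that the license terms were signed by the regulator."""
    blob = json.dumps(license_["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, license_["sig"])


class GatedAccelerator:
    """Toy model of a chip that meters compute against a licensed budget."""

    def __init__(self, license_: dict):
        self.license = license_
        self.flops_used = 0.0

    def authorize_training(self, requested_flops: float, safety_score: float) -> bool:
        if not verify_license(self.license):
            return False  # signature does not check out
        terms = self.license["payload"]
        if time.time() > terms["expires"]:
            return False  # license not refreshed
        if safety_score < terms["min_safety_score"]:
            return False  # evaluation score below the required threshold
        if self.flops_used + requested_flops > terms["max_flops"]:
            return False  # would exceed the licensed compute budget
        self.flops_used += requested_flops
        return True


license_ = sign_license({"max_flops": 1e25,
                         "min_safety_score": 0.9,
                         "expires": time.time() + 90 * 24 * 3600})
chip = GatedAccelerator(license_)
print(chip.authorize_training(requested_flops=5e24, safety_score=0.95))  # True
print(chip.authorize_training(requested_flops=9e24, safety_score=0.95))  # False: over budget
```

In a real system the signature check and the metering would have to live in tamper-resistant hardware and use public-key cryptography rather than a shared secret, which is the kind of role the secure modules already shipping in modern chips are meant to play.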

Some AI veterans worry that AI is now becoming so smart that one day it may prove uncontrollable and dangerous. More immediately, some experts and governments are concerned that existing AI models could make it easier to develop chemical or biological weapons or to automate cybercrime. Washington has already imposed a series of AI chip export controls to limit China's access to the most advanced AI over fears it could be used for military purposes – although smuggling and clever engineering have provided some ways around them. Nvidia declined to comment; the company has previously said it has lost billions of dollars in orders from China because of earlier US export controls.

CNAS's Feist says that although hard-coding restrictions into computer hardware may seem excessive, there is precedent for establishing infrastructure to monitor or control critical technology and enforce international treaties. “If you think about security and nonproliferation in the nuclear field, verification technologies were absolutely critical to guaranteeing the treaties,” he says. “The network of seismometers that we now have to detect underground nuclear tests is the basis of treaties that say we will not test weapons underground above a certain kiloton limit.”

The ideas presented by CNAS are not entirely theoretical. Nvidia's AI training chips – crucial for building the most powerful AI models – already come with secure cryptographic modules. And in November 2023, researchers from the Future of Life Institute, a nonprofit dedicated to protecting humanity from existential threats, and Mithril Security, a security startup, created a demo showing how the cryptographic security module of an Intel CPU could be used in a scheme that restricts unauthorized use of an AI model.
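The details of that demo aren't reproduced here, but the underlying pattern is easy to sketch: a trusted module will only load model weights whose fingerprint appears on a cryptographically signed allowlist. The Python below is a loose, hypothetical analogue of that idea, with invented class names and an HMAC in place of the hardware-backed keys an Intel security module would actually use; it is not the Mithril Security implementation.

```python
# Hypothetical sketch of allowlist-gated model loading, not real enclave code.
import hashlib
import hmac
import json

MODULE_KEY = b"demo-attestation-key"  # stand-in for a key fused into the chip


def fingerprint(weights: bytes) -> str:
    """Identify a set of model weights by its SHA-256 hash."""
    return hashlib.sha256(weights).hexdigest()


def sign_allowlist(hashes: list) -> dict:
    """Authority side: sign the list of approved weight fingerprints."""
    blob = json.dumps(sorted(hashes)).encode()
    return {"hashes": hashes,
            "sig": hmac.new(MODULE_KEY, blob, hashlib.sha256).hexdigest()}


class SecureModule:
    """Toy enclave: serves a model only if its hash is on a signed allowlist."""

    def __init__(self, allowlist: dict):
        blob = json.dumps(sorted(allowlist["hashes"])).encode()
        expected = hmac.new(MODULE_KEY, blob, hashlib.sha256).hexdigest()
        valid = hmac.compare_digest(expected, allowlist["sig"])
        self.allowed = set(allowlist["hashes"]) if valid else set()

    def load_model(self, weights: bytes) -> bool:
        return fingerprint(weights) in self.allowed


approved_weights = b"...approved model weights..."
rogue_weights = b"...unapproved fine-tune..."

module = SecureModule(sign_allowlist([fingerprint(approved_weights)]))
print(module.load_model(approved_weights))  # True: fingerprint is on the allowlist
print(module.load_model(rogue_weights))     # False: the module refuses to load it
```

The real demo relies on the hardware's own attestation and key storage so the check cannot simply be patched out in software; the sketch only conveys the shape of the control.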