White House imposes new restrictions on government use of AI

The US government issued new rules on Thursday that require greater caution and transparency from federal agencies that use artificial intelligence, saying they are needed to protect the public as AI advances rapidly. But the new policy also has provisions to encourage AI innovation in government agencies when the technology can be used for public good.

The US is expected to emerge as an international leader with its new approach to government AI. Vice President Kamala Harris said during a news briefing ahead of the announcement that the administration is planning policies to “serve as a model for global action.” She said the US “will continue to call on all countries to follow our lead and put the public interest first when it comes to government use of AI.”

The new policy from the White House Office of Management and Budget will guide the use of AI across the federal government. It requires greater transparency into how the government uses AI and also calls for greater development of the technology within federal agencies. The policy seeks to strike a balance between minimizing the risks from the government's use of AI – the extent of which is not known – and using AI tools to address existential threats such as climate change and disease.

The announcement adds to a series of steps taken by the Biden administration to increase both the adoption and the regulation of AI. In October, President Biden signed a sweeping executive order on AI that promotes the expansion of AI technology by the government but also, in the interest of national security, requires those creating large AI models to provide the government with information about their activities.

In November, the US joined Britain, China and EU members in signing a declaration acknowledging the dangers of rapid AI progress but also calling for international cooperation. That same week, Harris unveiled a non-binding declaration on the military use of AI, signed by 31 countries. It establishes rudimentary guardrails and calls for disabling systems that engage in “unintended behavior.”

The new policy, announced on Thursday, calls on agencies to take a number of steps to prevent unintended consequences of AI deployment. To start, agencies must verify that the AI tools they use do not put Americans at risk. For example, if the Department of Veterans Affairs uses AI in its hospitals, it must verify that the technology does not deliver racially biased diagnoses. Research has found that AI systems and other algorithms used to inform diagnoses or decide which patients receive care can reinforce historical patterns of discrimination.

If an agency cannot guarantee such safeguards, it must stop using the AI system or justify its continued use. US agencies face a December 1 deadline to comply with these new requirements.

The policy also calls for greater transparency about government AI systems, requiring agencies to release government-owned AI models, data, and code unless the release of such information would endanger the public or the government. Agencies must report publicly each year on how they are using AI, what the potential risks of the system are, and how those risks are being mitigated.

The new rules also require federal agencies to increase their AI expertise, mandating that each appoint a chief AI officer to oversee all AI used within that agency. The role focuses on promoting AI innovation while monitoring its risks.

Officials say the changes will also remove some barriers to AI use in federal agencies, a move that could facilitate more responsible experimentation with AI. The technology has the potential to help agencies review damage after natural disasters, predict extreme weather, map the spread of disease and control air traffic.

Countries around the world are moving to regulate AI. The EU voted in December to pass its AI Act, a measure governing the creation and use of AI technologies, and formally adopted it earlier this month. China, too, is working on comprehensive AI regulation.