The Biden administration has announced new artificial intelligence (AI) regulations for federal agencies, following through on the president's executive order from last year. The rules include mandatory risk reporting and transparency requirements intended to ensure the safe, secure, and responsible use of AI across government. Vice President Kamala Harris emphasized the importance of verifying that AI tools do not endanger the rights and safety of the American people, citing as an example the need to ensure that AI diagnostic tools used in VA hospitals are free of racial bias. Federal agencies will also be required to appoint a chief AI officer to oversee the technology and to publish an online inventory of their AI systems and the risks they may pose.

The new AI regulations were developed in collaboration with figures from both the public and private sectors, including computer scientists and civil rights leaders. The White House fact sheet on the policy highlights the goals of advancing equity and civil rights and standing up for consumers and workers. The regulations aim to promote responsible use of AI, requiring agencies to independently evaluate and monitor their AI systems to guard against discrimination and mistakes. The Biden administration views AI as presenting both risks and opportunities, with the potential to improve public services and address societal challenges when used responsibly.

The Biden administration has already taken steps to address the potential dangers of AI: President Biden signed a landmark executive order in October aimed at protecting Americans from the risks of AI systems. Among its provisions is a requirement that AI developers share the results of their safety tests, known as red-team testing, with the federal government. However, a coalition of state attorneys general has expressed concern that the federal government could centralize control over AI and use it for political purposes, such as censoring what it deems disinformation, warning that the order could inject partisan aims into decisions about AI development and control.

OMB Director Shalanda Young emphasized the need for agencies to independently evaluate their use of AI, monitor for mistakes and failures, and guard against the risk of discrimination. The new policy aims to harness AI to improve public services, address challenges such as climate change and public health, and advance equitable economic opportunity. Because each agency may rely on different AI systems, agencies will need independent auditors to assess the risks those systems pose. The administration says it is committed to the responsible use of AI by government agencies and to transparency and accountability in their AI practices.

Overall, the new AI regulations seek to ensure that federal agencies use AI responsibly, with a focus on protecting the rights and safety of the American people. They combine mandatory risk reporting, transparency rules, and the appointment of chief AI officers within agencies, and they were shaped by collaboration with public- and private-sector leaders, with an emphasis on advancing equity and civil rights while standing up for consumers and workers. The administration views AI as presenting both risks and opportunities, with the potential to improve public services and address societal challenges when used and overseen responsibly.
