AI Under Scrutiny: US Government to Monitor Foundation Model Developments

AI regulation

Will Young

In a groundbreaking development that signals a shift in how artificial intelligence (AI) is managed at the national level, the US government, through the Defense Production Act, is setting up new measures to oversee the development of foundational AI models. This new directive, which requires AI powerhouses like OpenAI and Google to report their advances, brings the rapidly evolving field of AI under closer governmental scrutiny.

The announcement of this significant requirement was made by US Secretary of Commerce Gina Raimondo during an event hosted by Stanford University's Hoover Institution last Friday. Raimondo emphasized that the government is using the Defense Production Act to require AI companies to disclose every time they train a new large language model. More importantly, these companies must share the safety data for these models for government review. This step underscores the increasing concern over the potential risks that AI technologies, particularly those with extensive capabilities, pose to national security and public safety.

This initiative is part of President Biden's comprehensive AI executive order announced last October. The order, broad in its scope, mandates that companies developing any foundation model that might pose a serious risk to national security, economic stability, or public health must notify the federal government. Additionally, these entities are obliged to share results from their safety testing procedures. Foundation models, such as OpenAI's GPT-4 and Google's Gemini, which power generative AI chatbots, are at the heart of this order. Though GPT-4 is currently considered below the threshold requiring government oversight, future models with more substantial computing power are the primary focus of these new regulations.

The rationale behind this heightened oversight lies in the massive potential national security risks posed by future foundation models. These models, with their unprecedented computing power, could have significant implications if not properly managed and regulated. This concern justifies the mandate’s inclusion under the Defense Production Act, a legislative tool President Biden previously invoked in 2021 to boost the production of pandemic-related protective equipment and supplies.

Another critical aspect of the executive order, as highlighted by Raimondo, pertains to US cloud computing providers such as Amazon, Google, and Microsoft. These tech giants will be required to report every instance of non-US entities using their cloud services to train large language models. This requirement aims to enhance transparency and security in the increasingly interconnected global digital landscape.

As of now, the specific commencement date for these new requirements remains undisclosed. However, an announcement is imminent, given that the deadline for implementation was set for January 28. This move by the US government marks a significant step in acknowledging and addressing the multifaceted challenges and risks associated with advanced AI technologies. It reflects a growing awareness of the need for careful monitoring and regulation of AI development, especially as these technologies become more integral to various aspects of national infrastructure and security.

The implementation of these measures will likely have far-reaching implications for the AI industry, potentially reshaping how foundational AI models are developed, tested, and deployed. As we await further details on these requirements, one thing is clear: the era of unchecked AI advancement is coming to an end, giving way to a more regulated and scrutinized landscape.
