U.S. Senators Propose Creation of an AI Regulator, Stirred by ChatGPT’s Impact

Senators scramble to keep up with advancements in tech, spurred by the release of ChatGPT last fall.

Morgann

The ongoing romance between the tech industry and machine learning has sparked conversations among U.S. lawmakers about the need for regulation to control this powerful technology.

However, despite that concern, no proposal to regulate corporate AI projects has made it into law. The mood shifted when OpenAI launched ChatGPT last November, stirring a sense of urgency among senators to establish safeguards against the potential harm AI technology could inflict.

At a Senate Judiciary subcommittee hearing, senators from both sides of the aisle voiced support for creating a new arm of the U.S. government tasked explicitly with regulating AI. Even Sam Altman, the CEO of OpenAI, backed the idea. Altman said his worst fear is that the technology could cause significant harm to the world. He endorsed the notion of AI companies submitting their models for external testing and proposed that a U.S. AI regulator have the authority to issue or revoke licenses for building AI above a certain level of capability.

Various U.S. federal agencies, such as the Federal Trade Commission and the Food and Drug Administration, already oversee how companies utilize AI.

However, Senator Peter Welch, a Democrat from Vermont, believes Congress is failing to keep pace with technological change. In his view, without a dedicated agency handling issues around social media and AI, there is little protection against the potential adverse effects of these technologies.

Senator Richard Blumenthal of Connecticut, a fellow Democrat who led the hearing, agreed but cautioned that a new federal AI agency could struggle to match the tech industry's speed and influence. The witnesses at the hearing included Altman; Christina Montgomery, IBM's chief privacy and trust officer; and Gary Marcus, a psychology professor turned AI commentator. Marcus suggested creating an international body to monitor AI progress and promote its safe development.

Blumenthal kicked off the hearing with an AI voice clone of himself reciting a script written by ChatGPT, underscoring the compelling results AI can produce.


Senators didn’t propose a name for the potential agency or outline its functions in detail. Still, they did discuss less radical regulatory responses to recent AI advances, such as requiring public documentation of an AI system’s limitations and of the datasets used to create it. That idea, akin to an AI nutrition label, had previously been put forward by researchers such as former Google ethical AI team lead Timnit Gebru.

Lawmakers and industry witnesses alike advocated for disclosure requirements that would inform people when they are interacting with an AI model rather than a human, or when AI technology is making critical, life-changing decisions. One example would be a requirement to disclose when an arrest or criminal accusation is based on a facial recognition match.

The Senate hearing comes on the heels of growing interest among U.S. and European governments, and some tech insiders, in establishing new rules for AI to prevent potential harm to people. This month, the White House announced its support for a public hacking contest to probe generative AI systems, following a group letter signed by major figures in tech and AI calling for a six-month pause on AI development.

IBM’s Montgomery encouraged Congress to draw inspiration from the AI Act, a proposed European Union law categorizing AI systems by the risks they pose and setting rules for—or even banning—them accordingly.

The Center for Data Innovation, a tech think tank, issued a letter after the hearing stating that the U.S. doesn’t need a new AI regulator. Hodan Omaar, a senior analyst at the center, suggested updating existing laws and allowing federal agencies to incorporate AI oversight into their existing regulatory work.

Alex Engler, a fellow at the Brookings Institution, expressed concern that the U.S. could repeat the stumble that derailed federal privacy regulation last autumn. That historic bill was sidelined by California lawmakers who abstained from voting because the law would have overridden the state’s own privacy legislation. Engler acknowledged that such a concern is legitimate but questioned whether it should be a reason to forgo civil society protections for AI.

The hearing tackled various potential perils of AI, from election misinformation to not-yet-realized threats like self-aware AI. However, generative AI systems like ChatGPT, the catalyst for the hearing, received the most scrutiny. Several senators highlighted the potential for these systems to exacerbate inequality and monopolization.

One staunch advocate for regulatory measures was Senator Cory Booker, a Democrat from New Jersey, who has previously co-sponsored AI regulation and supported a federal ban on facial recognition. He asserted that the only way to mitigate these risks is through rules established by Congress.

Despite the varied viewpoints presented during the hearing, one thing was clear: the growing influence of AI on our lives and society can no longer be ignored. Whether through a new regulatory body or by strengthening existing structures, lawmakers agree it is time to navigate the uncharted territory of AI regulation. The arrival of ChatGPT has marked a turning point in that debate, and a broader dialogue about AI’s implications and how to guard against its potential harms is finally taking center stage, a testament to the technology’s impact on our everyday lives and our future.

The unveiling of ChatGPT has served as a wake-up call, shaking lawmakers out of the status quo. The push for an AI regulatory body, though still at the conceptual stage, is a clear signal that the tide is turning. As the debate continues, one thing is certain: the world is watching, and the decisions made now will shape the future of AI and its role in society. As Altman put it at the hearing, it is high time to reckon with the potential harms this powerful technology could bring if left unchecked.
