Questionable Ethics Come to Light in the Race to AI


Morgann

In the world of artificial intelligence (AI), tech giants are racing to keep pace with the latest advancements. But Google’s rush to win the race to AI has led to ethical lapses, according to its employees.

The November 2022 debut of OpenAI’s popular chatbot, ChatGPT, sent Google scrambling to weave generative AI into all of its most important products within a matter of months. In the process, the AI ethics group that Google had pledged to fortify was disempowered and demoralized: the staff responsible for the safety and ethical implications of new products were told not to get in the way of, or try to kill, any of the generative AI tools in development.

Google is aiming to revitalize its maturing search business around this cutting-edge technology, which could put generative AI into millions of phones and homes around the world, ideally before the Microsoft-backed OpenAI beats the company to it.

As AI ethics takes a back seat to the race to innovate, responsible AI becomes an ever more pressing concern. Meredith Whittaker, president of the Signal Foundation and a former Google manager, said: “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

Google had pledged in 2021 to double its team studying the ethics of AI and pour more resources into assessing the technology’s potential harms. However, this pledge has not been fulfilled. The team working on responsible AI shed at least three members in a January round of layoffs at the company, including the head of governance and programs.

Google had long been cautious about the power of AI and about the ethical considerations that come with embedding it into search and its other marquee products. But in December, senior leadership declared a competitive “code red” and changed the company’s appetite for risk. Google’s leaders decided that as long as they called new products “experiments,” the public might forgive their shortcomings. Still, they needed to get the ethics teams on board.


The company assigns scores to its products in several important categories, meant to measure their readiness for release to the public. In some, like child safety, engineers still need to clear the 100% threshold. But in other areas, Google may no longer be willing to wait for perfection: Jen Gennai, the AI governance lead, convened a meeting of the responsible innovation group in December and suggested that some compromises might be necessary to pick up the pace of product releases.

The pressure to keep up with the competition has led to the launch of products that do not meet Google’s own ethical standards. For example, shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool. One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.”

Despite these shortcomings, Google launched Bard anyway, prioritizing the race to keep up with the competition over its ethical commitments and serving up low-quality information in the process. One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”

In February, one employee raised concerns in an internal message group: “Bard is worse than useless: please do not launch.” The note was viewed by nearly 7,000 people, many of whom agreed that the AI tool’s answers were contradictory or even egregiously wrong on simple factual queries. However, Jen Gennai overruled a risk evaluation submitted by members of her team, which stated that Bard was not ready because it could cause harm, according to people familiar with the matter. Shortly after, Bard was opened to the public, with the company calling it an “experiment.”

 

All eyes are on Google as employees speak out about questionable ethical practices at the tech giant.
