Geoffrey Hinton Leaves Google: Why You Should Know



Geoffrey Hinton, a prominent computer scientist and AI researcher, has left Google so that he can speak out about the potential risks that AI systems pose to humanity. Hinton, who played a significant role in the development of deep learning algorithms, has expressed concern that AI technology could be exploited by malevolent actors, such as authoritarian leaders, to cause harm. He emphasized that the kind of intelligence being developed is very different from human intelligence: digital systems can learn separately yet share their knowledge with one another instantly. Hinton also warned that advances in AI text generation could enable the creation of “very effective spambots” that authoritarian leaders could use to manipulate their electorates.

In a separate interview with the New York Times, Hinton said that the possibility of AI surpassing human intelligence was becoming a reality faster than he had previously anticipated. He called for top scientists to work together to control the technology and mitigate its risks. However, he acknowledged that, unlike with nuclear weapons, there are no international regulations to prevent or penalize the secret use of AI by governments or companies.

Hinton’s resignation from Google, after 10 years at the company, allows him to speak freely about the dangers of AI without having to consider the impact on Google. In response to Hinton’s resignation and comments, Google’s chief scientist, Jeff Dean, said that the company remains committed to a responsible approach to AI and is continually learning to understand emerging risks while also innovating boldly.

Hinton, who received his PhD in Artificial Intelligence from the University of Edinburgh in 1978, began working part-time for Google in 2013. In 2018, he was one of three computer scientists to win the prestigious Turing Award for their work in artificial intelligence. The launch of OpenAI’s ChatGPT in late 2022 forced tech giants such as Meta and Google to move more quickly to release their own AI products. In March, around 1,000 business leaders, academics, and tech workers signed an open letter calling on companies such as OpenAI, Google, and Microsoft to “pause” work on AI systems until their risks can be determined. No senior executives from OpenAI or companies such as Google added their names to the letter, however.



