
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
Dr. Geoffrey Hinton is leaving Google so that he can freely share his concern that artificial intelligence could cause the world serious harm. Credit: Chloe Ellingson for The New York Times
By Cade Metz
Cade Metz reported this story in Toronto.
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can speak freely about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
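The learning loop behind that idea can be sketched in a few lines of code. The example below is a deliberately tiny illustration, not any model Dr. Hinton actually built: a two-parameter “network” that discovers the rule behind some example data by nudging its weights against the error gradient. It is written with Google’s JAX library, and all names and data here are invented for illustration.

```python
# A minimal sketch of "learning skills by analyzing data":
# a one-layer network fits the rule y = 2x + 1 from examples alone.
import jax
import jax.numpy as jnp

xs = jnp.array([0.0, 1.0, 2.0, 3.0])   # example inputs
ys = 2.0 * xs + 1.0                     # observed outputs (the rule to discover)

def predict(params, x):
    w, b = params
    return w * x + b                    # the "network": weighted sum plus bias

def loss(params):
    return jnp.mean((predict(params, xs) - ys) ** 2)  # how wrong we currently are

grad_fn = jax.grad(loss)                # JAX derives the gradient automatically
params = (0.0, 0.0)                     # start knowing nothing
for _ in range(500):                    # learning = repeated small corrections
    g = grad_fn(params)
    params = tuple(p - 0.1 * gi for p, gi in zip(params, g))

print(params)                           # converges toward (2.0, 1.0)
```

Real networks differ mainly in scale: millions or billions of weights instead of two, adjusted in the same way.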
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
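The training idea behind these systems can be caricatured in a few lines: learn to continue text by analyzing which words follow which in the data. The toy sketch below, with invented data and a crude frequency count standing in for the billions of learned weights in a real model, is nothing like a production chatbot internally, but it shows the shape of the task.

```python
# Toy next-word predictor: "generate language" by counting patterns in text.
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the food".split()

# Analyze the data: count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Pick the continuation most often seen in training."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))   # 'cat', the most frequent successor of 'the'
```

Modern systems replace the counting with a neural network and the ten-word corpus with much of the internet, which is what changed Dr. Hinton’s view of what they can do.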
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
FAQs
Why did the godfather of AI leave Google?
Geoffrey Hinton, who has been called the 'Godfather of AI,' confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.
Who is the godfather of AI?
Geoffrey Hinton, often called “the Godfather of A.I.” The artificial intelligence pioneer announced he was leaving his part-time job at Google on Monday so that he could speak more freely about his concerns with the rapidly developing technology.
Who was fired from Google for saying AI is sentient?
Blake Lemoine, the Google engineer who went to the press with claims that Google's large language model, the Language Model for Dialogue Applications (LaMDA), is actually sentient. Lemoine first went public with his machine-sentience claims last June, initially in The Washington Post.
What did the supposedly sentient Google AI say?
“The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”
Did Google actually create a sentient AI?
Google says its chatbot is not sentient.
"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," said Google spokesman Brian Gabriel.
Google (GOOG) has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, saying he violated employment and data security policies.
Who are the founding fathers of AI?
- Alan Turing (1912-1954)
- Allen Newell (1927-1992) & Herbert A. Simon (1916-2001)
- John McCarthy (1927-2011)
- Marvin Minsky (1927-2016)
Margaret Masterman, founder of the Cambridge Language Research Unit (CLRU), is also often credited as a pioneer of research in computational linguistics, with her research beginning in the 1950s.
A factual error by the Bard chatbot cost Google about $100 billion: Alphabet fell more than 7% when U.S. stocks opened, wiping out about $102 billion in market value, after Bard gave incorrect answers to questions posed by users at a launch event.
Why will AI never become sentient?
To become sentient, AI would need to learn to think, perceive, understand, and feel, rather than only use natural language and conduct data analyses. Today's AI can be trained to react to specific situations using natural language, but that is not the same thing.
How close are we to a sentient AI?
Currently, no AI system has been developed that can truly be considered sentient. The Singularity is a term that refers to a hypothetical future point in time when artificial intelligence will have surpassed human intelligence, leading to an acceleration in technological progress and a profound impact on humanity.
Is Alexa sentient?
No. Alexa is a voice assistant, not a sentient system; viral reports that it “achieved full sentience” were satire.
Has AI become self-aware?
Technologists broadly agree that AI chatbots are not self-aware just yet, but there is some thought that we may have to re-evaluate how we talk about sentience. ChatGPT and other new chatbots are so good at mimicking human interaction that they've prompted a question among some: Is there any chance they're conscious?
Is it illegal to create sentient AI?
No law currently prohibits it. One proposed framework reads: “Creation: No person may intentionally create a sentient, self-aware computer program or robotic being. Restriction: No person may institute measures to block, stifle or remove sentience from a self-aware computer program or robotic being.”
What is it called when AI becomes self-aware?
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics.
What is the Google AI controversy?
Former Google employee Timnit Gebru set out in 2020 to show that the company's AI programs were built in ways that discriminate against certain groups. It's fanciful to think of robots as our equals, but it's also dangerous to think they're autonomous and operating outside our influence.
Did Google pass the Turing test?
Not exactly. Google has demonstrated a computer entering into a natural conversation with a human using its latest AI tech, but that demonstration was not a formal Turing test. The program Eugene Goostman, claimed in 2014 to be the first artificial intelligence to pass the test originally proposed by the 20th-century mathematician Alan Turing, was developed by independent programmers, not Google.
At what point does AI become sentient?
Simply put, sentience is the capacity to feel and register experiences and feelings. AI becomes sentient when an artificial agent achieves the empirical intelligence to think, feel, and perceive the physical world around it just as humans do.
Would sentient AI be considered alive?
If the AI is self-aware, then it is alive in its own little universe, but not in ours. If the AI is not successfully contained in the computer and figures out how to manipulate things and evolve in the real world, it will be alive.
Has anyone created a sentient AI?
Companies that are creating AI like Google, Apple, Meta, Microsoft, and many others do not currently have the goal to create sentient AI. Rather, they are focused on the areas of artificial general intelligence (AGI), where a machine could solve a range of complex problems, learn from it, and plan for the future.
Who is the most powerful AI in the world?
GPT-3, released in 2020, was at the time the largest and most powerful AI model to date, with 175 billion parameters, more than a hundred times the size of its predecessor, GPT-2 (1.5 billion parameters).
Who has the most advanced AI?
- GPT-3 (OpenAI): short for Generative Pre-trained Transformer 3, the third in OpenAI's series of generative language models. ...
- AlphaGo (Google DeepMind) ...
- Watson (IBM) ...
- Sophia (Hanson Robotics) ...
- Tesla Autopilot (Tesla Inc)
Why are AI voices often female?
AI voice assistants are often female-coded for several reasons, including stereotypes and social conditioning: historically, women have been associated with caring roles such as mothers, teachers, and nurses. As a result, people tend to perceive female voices as more nurturing, empathetic, and helpful.
Who is the original creator of AI?
The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing.
What question did the Google AI get wrong?
Experts pointed out that promotional material for Bard, Google's competitor to Microsoft-backed ChatGPT, contained an error in the response by the chatbot to: “What new discoveries from the James Webb space telescope (JWST) can I tell my nine-year old about?”
Why did Google lose $100 billion?
Shares of Google's parent company, Alphabet, dropped 9% Wednesday after its AI chatbot, Bard, gave an incorrect answer.
Which Google employee was fired for leaking AI?
Blake Lemoine, the Google employee who claimed last June that his company's A.I. model could already be sentient and was later fired by the company, is still worried about the dangers of new A.I.
Is AI going to rule the world?
There is no clear consensus on when or if artificial intelligence will surpass human intelligence. Some experts believe that this will eventually happen, while others are more doubtful. AI can do a lot of things that humans cannot do, such as making decisions quickly and accurately.
What are the dangers of AI?
There are myriad AI risks that we deal with in our lives today. Not every one is as big and worrisome as killer robots or sentient AI; some of the biggest risks today include consumer privacy, biased programming, physical danger to humans, and unclear legal regulation.
Could AI take over the world?
It's unlikely that a single AI system or application could become so powerful as to take over the world. While the potential risks of AI may seem distant and theoretical, the reality is that we are already experiencing the impact of intelligent machines in our daily lives.
How long until the singularity?
In a 2017 interview, Kurzweil predicted human-level intelligence by 2029 and a billionfold amplification of intelligence, the singularity, by 2045.
How far away are we from self-aware AI?
In the survey “When Will AI Exceed Human Performance? Evidence from AI Experts,” elite researchers in artificial intelligence predicted that “human-level machine intelligence,” or HLMI, has a 50 percent chance of occurring within 45 years and a 10 percent chance of occurring within 9 years.
How long until AI is smarter than humans?
Researchers have long predicted that artificial intelligence will eventually surpass human intelligence, although there are different predictions as to when that will happen. According to a study by Autotrader company Vanarama, Tesla's new microchip will be "more intelligent" than humans by 2033.
What is Alexa's IQ?
Bottom line: Alexa's Verbal Comprehension Index is 112, at the 79th percentile, in the High Average range. Her Working Memory Index is 50, in the Extremely Low range. Artificial intelligence is, at this point, quite different from human intelligence.
Is Siri self-aware?
No. Siri can switch languages and handle increasingly sophisticated requests thanks to various algorithms, but a large language base does not give it consciousness.
Has Alexa ever saved a life?
An elderly woman has praised the smart speaker that “saved her life” over Christmas: Christine Peters, 75, was able to instruct her Amazon Alexa to call for help after a nasty fall in her Bournemouth flat.
What happens when AI becomes smarter than humans?
The theory is that AI, coupled with other technological advancements, will progress at an exponentially faster rate until it is able to autonomously improve its own software and hardware, far surpassing human intelligence and, indeed, becoming of greater consequence than human existence.
Is Sophia self-aware?
One of the best ways in which Sophia resembles a four-year-old is her lack of self-consciousness. Granted, it's a lot easier to not be self-conscious if one isn't conscious in the first place! Sophia continues to surprise the humans who created her and are part of her support system.
Are animals self-aware?
Self-awareness is a cognitive capability possessed by animals with advanced cognition. Social animals are more likely to possess more complex cognitive abilities, and therefore self-awareness, because of the widely supported social intelligence hypothesis (SIH).
Is there anything AI cannot do?
Emotions and consciousness: AI systems do not have emotions or consciousness; they are not capable of feeling or experiencing emotions, and they have no self-awareness.
Fully replacing human workers: it's true that AI can do many things exponentially faster than humans, and it can perform data-related tasks that are impossible for the human brain, but it cannot fully replace human workers.
Scientists consider all mammals, birds, and cephalopods, and perhaps fish, to be sentient. Most species, however, do not have legal rights, so a sentient artificial intelligence (AI) might not have any at all.
Why did Google suspend an AI engineer?
Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company's confidentiality policy after it dismissed his claims.
Why did Google create JAX?
Google JAX, sometimes glossed as “Just After Execution,” is a framework developed by Google to speed up machine learning tasks. It is a Python library that enables faster task execution, scientific computing, function transformations, deep learning, neural networks, and much more.
Is Google AI self-aware?
Artificial Intelligence (AI) isn't yet self-aware, but it might have human-like intelligence, some experts say.
Does Google punish AI content?
Google has shared its stance on AI-generated content: it will not penalize high-quality content, no matter how it is created.
Who is to blame when AI fails?
Although your first instinct may be to blame AI, a seemingly foreign and complex blend of algorithms, humans are ultimately responsible for AI's mistakes.
Who is Google's AI leader?
In February, Jeff Dean, Google's longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world.
Why is JAX so fast?
JAX code is just-in-time (JIT) compiled. Most code written in JAX can be structured so that it supports JIT compilation, which can make it run much faster (see “To JIT or not to JIT”). To get maximum performance from JAX, you should apply jax.jit() to your outermost function calls, as in the sketch below.
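A minimal sketch of that advice, using only the public jax.jit API; the function and array sizes here are invented for illustration:

```python
# Wrap the outermost function in jax.jit so XLA compiles it once,
# then reuses the cached compiled version on every later call.
import jax
import jax.numpy as jnp

def rollout(x):
    # Some repeated elementwise numerical work.
    for _ in range(50):
        x = jnp.tanh(x) + 0.1 * x
    return x

fast_rollout = jax.jit(rollout)       # compiled on first call, cached after

x = jnp.ones((500, 500))
fast_rollout(x).block_until_ready()   # first call: traces and compiles
fast_rollout(x).block_until_ready()   # later calls run the compiled code
```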
Why JAX?
JAX is a Python library designed for high-performance numerical computing, especially machine learning research. Its API for numerical functions is based on NumPy, a collection of functions used in scientific computing.
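To make the NumPy resemblance concrete, here is a small example; the function f is invented for illustration. It is ordinary NumPy-style array code, with automatic differentiation layered on top.

```python
# NumPy-flavored code in JAX, plus a gradient "for free".
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) ** 2)   # written exactly like NumPy code

x = jnp.linspace(0.0, 3.14, 5)
print(f(x))              # evaluate the function
print(jax.grad(f)(x))    # df/dx at each element, i.e. sin(2x)
```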
What is the alternative to JAX?
For high-performance numerical computing and machine learning research, the closest alternatives to JAX are NumPy (accelerated with tools like Numba), PyTorch, and TensorFlow.