But Hinton has come to partly regret his life’s work, as he told the NYT: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” He decided to leave Google so that he could speak freely about the dangers of AI without having to consider how his warnings might affect the company itself.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
According to the interview, Hinton was spurred to act by Microsoft’s decision to integrate ChatGPT into its Bing search engine, which he believes will push industry titans into an unavoidable arms race. As a result, the internet may become so flooded with phony photos, videos, and text that the average person will no longer be able to “tell what’s true anymore.”
Hinton also expressed concern that AI could become more intelligent than humans much sooner than expected, which could lead to widespread job losses and even to AI systems writing and running their own code.
Hinton believes that the more companies improve artificial intelligence without control, the more dangerous it becomes. “Look at how it was five years ago and now. Take the difference and propagate it forward. That’s scary.”
The need to regulate AI development
Many people, not just Geoffrey Hinton, share concerns about artificial intelligence’s unchecked progress.
In late March, more than 2,000 industry professionals and executives in North America signed an open letter calling for a six-month pause on the training of systems more advanced than GPT-4, ChatGPT’s successor.
Noting the need for regulatory policies, signatories such as DeepMind researchers, computer scientist Yoshua Bengio, and Elon Musk warned that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
In Europe and the United States, ChatGPT’s expansion has stoked efforts to regulate AI progress in a way that doesn’t stifle innovation.
Various states are also working to regulate the use of sophisticated models at the operational level. Several European countries, including Spain, France, and Italy, are investigating ChatGPT over privacy concerns; Italy took the first step toward regulating the service by temporarily banning it.
European Commission Executive Vice President Margrethe Vestager said when the EU’s AI Act was first announced:
“With these landmark rules, the EU is spearheading the development of new global norms to ensure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive.”
Unless regulatory efforts in Europe and around the globe speed up, we risk repeating the approach Oppenheimer once described, about which Hinton is now sounding the alarm:
“When you see something technically sweet, you do it and argue about what to do about it only after you have had your technical success.”