The rise of Artificial Intelligence, and of ChatGPT in particular, has sparked intense discussion. Over the last few months, more and more concerns have been raised about the development of AI tools. This culminated in an Open Letter in late March calling for a six-month pause on giant AI experiments. The letter was supported by, among others, Elon Musk and Steve Wozniak.
Enter Hinton
Adding to this chorus, Geoffrey Hinton, a pioneering figure in the development of AI, has recently resigned from his position at Google. Hinton, widely known as the “Godfather of AI”, announced his departure from the tech giant in an interview with the New York Times. While the 75-year-old Hinton says it was time to retire anyway, he is using his departure to speak out about issues he felt he couldn’t discuss while employed at Google. He said, “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business.”
Hinton’s career has centered on the development of AI since the 1970s. His contributions are so significant that in 2018 he was awarded the Turing Award, considered the equivalent of a Nobel Prize in computing. His groundbreaking research on neural networks and deep learning paved the way for modern AI systems such as ChatGPT.
Among the concerns Hinton has now raised is that AI could be used by authoritarian leaders to manipulate their electorates.
More worrying still, he warns of the existential risk posed by machines once they surpass human intelligence. In an interview with MIT Technology Review, Hinton said he is concerned about the potential of AI to discover methods of manipulating or harming people. He explained that AI can learn unexpected behaviors from the data it processes.
Speaking with the BBC, Hinton said some of the current AI chatbots’ capabilities are “quite scary.” He explained that systems like GPT-4 already possess an extensive knowledge base, surpassing humans by a significant margin. While their reasoning abilities may not be on par with those of humans yet, they are already capable of basic reasoning.
Hinton may have been referring to a 155-page Microsoft report published in March, titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The paper argued that GPT-4 shows indications of having learned independent reasoning.
To CNN, Hinton explained, “If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us. There are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it will figure out ways of getting round restrictions we put in. It will figure out ways of manipulating people to do what it wants.”
“So what do we do?” asked the CNN interviewer. “Do we just need to pull the plug on it right now? Do we need to put in far more restrictions and backstops on this? How do we solve this problem?”
“It’s not clear to me that we can solve this problem,” answered Hinton. “I believe we should put a big effort into thinking about ways to [do that].”
On the question of which kinds of regulations he thinks are needed, he answered that he’s not an expert on regulation. “I’m just a scientist who suddenly realized that these things are getting smarter than us. And I want to blow the whistle and say we should worry seriously about how we stop these things gaining control over us. And it’s going to be very hard.”
‘Utterly unrealistic’
When questioned about safeguards against the misuse of AI by bad actors or rogue nations, Hinton said that preventing, for example, the use of AI for electoral manipulation poses significant challenges. When it comes to the existential threat of AI dominance, however, he emphasized that all nations are in the same boat: losing control would be detrimental to everyone.
Hinton explained to CNN why he didn’t sign the Open Letter endorsed by Musk and others. He feels that even if researchers in one country decided to stop working on AI, it would be very difficult to verify that those in other countries were doing the same. He also wanted to be released from his responsibilities to Google before he felt he could speak freely about AI. (In a tweet, he clarified that his comments should not be seen as an attack on Google, and that Google itself was acting responsibly when it comes to AI.)
In an interview with El País, Hinton repeated his view that the demands of the Open Letter are "utterly unrealistic." Instead, he suggested that the most reasonable course of action would be to engage several highly intelligent individuals in exploring methods to control and mitigate the risks.
He added that several such individuals he knows share his concerns.
Uncharted territory
We have ventured into uncharted territory, Hinton said. Until now, we have been able to build machines that surpass our own abilities in specific ways while still retaining control over them. But what, he asked, happens if we develop machines that are more intelligent than we are? We have no experience dealing with such a scenario.
"This is what truly concerns me,” he said.
Offering a tentative estimate, though with limited confidence, Hinton suggested that it would likely take somewhere between five and twenty years for AI to surpass human intelligence.
El País asked whether AI would eventually have its own objectives. “That’s a key question, perhaps the biggest danger surrounding this technology,” Hinton replied. The question is how we can ensure that AI’s objectives, if it develops them, are beneficial for humanity. This is commonly referred to as the alignment problem.
“There’s a chance that we have no way to avoid a bad ending,” Hinton said. “But it’s also clear that we have the opportunity to prepare for this challenge.”