
'Godfather of AI' Speaks Out: Hinton's Concerns on AI Safety

“And I heard the dude, the old dude that created AI saying, ‘This is not safe, 'cause the AIs got their own minds, and these motherf*ckers gonna start doing their own s**t.’ I'm like, are we in a f***ing movie right now, or what? The f**k, man?” - Snoop Dogg

The rise of Artificial Intelligence, and of ChatGPT in particular, has sparked intense discussion. Over the last few months, more and more concerns have been raised about the development of these tools. This culminated in an Open Letter in late March calling for a halt to AI experiments, supported by, among others, Elon Musk and Steve Wozniak.

[Image: Person approaching a huge doorway with light shining through it]

Enter the Hinton

Adding to this chorus, Geoffrey Hinton, a prominent figure in the development of AI, has recently resigned from his position at Google. Hinton, widely known as the “Godfather of AI”, announced his departure from the tech giant in an interview with the New York Times. While the 75-year-old Hinton said it was time to retire anyway, he is using his retirement to speak out about issues he felt he couldn’t discuss while employed at Google. As he put it: “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business.”

Hinton’s career has centered around the creation and implementation of AI since the 1970s. His contributions are so significant that in 2018 he was awarded the Turing Award, considered the equivalent of the Nobel Prize in the field of computing. His groundbreaking research on neural networks and deep learning paved the way for the development of modern AI systems such as ChatGPT.

Among the concerns Hinton has raised is that AI could be used by authoritarian leaders to manipulate their electorates.

But more worrying still, he warns of the existential risk posed by machines once they surpass human intelligence. In an interview with MIT Technology Review, Hinton said he is concerned about the potential of AI to discover methods that could manipulate or harm people, explaining that AI can learn unexpected behaviors from the data it processes.

Speaking with the BBC, Hinton said some of the current AI chatbots’ capabilities are “quite scary.” He explained that systems like GPT-4 already possess an extensive knowledge base, surpassing humans by a significant margin. While their reasoning abilities may not be on par with those of humans yet, they are already capable of basic reasoning.

Hinton may have been referring to a 155-page Microsoft report published in March, titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The paper argued that GPT-4 shows indications of having learned independent reasoning.

To CNN, Hinton explained, “If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us. There are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it will figure out ways of getting round restrictions we put in. It will figure out ways of manipulating people to do what it wants.”

“So what do we do?” asked the CNN interviewer. “Do we just need to pull the plug on it right now? Do we need to put in far more restrictions and back-stops on this? How do we solve this problem?”

“It’s not clear to me that we can solve this problem,” answered Hinton. “I believe we should put a big effort into thinking about ways to [do that].”

On the question of which kinds of regulations he thinks are needed, he answered that he’s not an expert on regulation. “I’m just a scientist who suddenly realized that these things are getting smarter than us. And I want to blow the whistle and say we should worry seriously about how we stop these things gaining control over us. And it’s going to be very hard.”

[Image: A masked hacker offering a flower]

‘Utterly unrealistic’

When questioned about measures to safeguard against the misuse of AI by bad actors or rogue nations, Hinton acknowledged that preventing abuses such as electoral manipulation poses significant challenges. When it comes to the existential threat of AI dominance, however, he emphasized that all nations are in the same boat: such an outcome would be detrimental to everyone.

Hinton explained to CNN why he didn’t sign the Open Letter endorsed by Musk and others. He feels that if researchers in some countries decided to stop working on AI, it would be very difficult to verify that everyone elsewhere was doing the same. He also wanted to be free of his responsibilities to Google before speaking openly about AI. (In a tweet, he clarified that his comments were not to be seen as an attack on Google, and that Google itself has acted responsibly when it comes to AI.)

In an interview with El País, Hinton repeated his view that the demands of the Open Letter are “utterly unrealistic.” Instead, he suggested that the most reasonable course of action would be to engage several highly intelligent individuals in exploring methods to control and mitigate the risks.

He stressed that several such individuals he knows share his concerns.

[Image: An old world map]

Uncharted territory

We have ventured into uncharted territory, Hinton said. Until now, we have been able to construct machines that surpass our own abilities while still retaining control over them. But what, he asked, would happen if we develop machines that possess superior intelligence to ours? We have no experience in dealing with such a scenario.

"This is what truly concerns me,” he said.

Providing a tentative estimate, though with limited confidence, Hinton suggested that it would likely take AI somewhere between five and twenty years to surpass human intelligence.

El País asked whether AI would eventually have its own objectives. “That’s a key question, perhaps the biggest danger surrounding this technology,” Hinton replied. The question is how we can ensure that AI’s objectives, if it develops them, are advantageous for humanity; this is commonly referred to as the alignment problem.

“There’s a chance that we have no way to avoid a bad ending,” Hinton said. “But it’s also clear that we have the opportunity to prepare for this challenge.”
