
'Godfather of AI' Speaks Out: Hinton's Concerns on AI Safety

“And I heard the dude, the old dude that created AI saying, ‘This is not safe, 'cause the AIs got their own minds, and these motherf*ckers gonna start doing their own s**t.’ I'm like, are we in a f***ing movie right now, or what? The f**k, man?” - Snoop Dogg

The rise of Artificial Intelligence, particularly ChatGPT, has sparked intense discussion. Over the last few months, more and more concerns have been raised about the rapid development of AI tools. This culminated in an Open Letter in late March calling for a halt to AI experiments, supported by, among others, Elon Musk and Steve Wozniak.

[Image: Person approaching a huge doorway with light shining through it]

Enter the Hinton

Adding to this chorus, Geoffrey Hinton, a prominent figure in the development of AI, has recently resigned from his position at Google. Hinton, widely known as the “Godfather of AI”, announced his departure from the tech giant in an interview with the New York Times. While the 75-year-old Hinton says it was time to retire anyway, he used his retirement to speak out about some issues he felt he couldn’t talk about while employed at Google. He said, “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business.”

Hinton’s career has centered around the creation and implementation of AI since the 1970s. His contributions are so significant that in 2018 he was awarded the Turing Award, considered the equivalent of the Nobel Prize in the field of computing. His groundbreaking research on neural networks and deep learning paved the way for the development of modern AI systems such as ChatGPT.

Among the concerns Hinton has now raised is that AI could be used by authoritarian leaders to manipulate their electorates.

But more worryingly still, he warns of the existential risk posed by machines once they surpass human intelligence. In an interview with MIT’s Technology Review, Hinton said he is concerned about the potential of AI to discover methods that could manipulate or harm people. He explained that AI has the ability to learn unexpected behaviors from the data it processes.

Speaking with the BBC, Hinton said some of the current AI chatbots’ capabilities are "quite scary." He explained that systems like GPT-4 already possess an extensive knowledge base, surpassing humans by a significant margin. While their reasoning abilities may not be on par with those of humans yet, they are already capable of basic reasoning.

Hinton may have been referring to a 155-page Microsoft report published in March, titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The paper argued that GPT-4 shows indications of having learned independent reasoning.

To CNN, Hinton explained, “If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us. There are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it will figure out ways of getting round restrictions we put in. It will figure out ways of manipulating people to do what it wants.”

“So what do we do?” asked the CNN interviewer. “Do we just need to pull the plug on it right now? Do we need to put in far more restrictions and back-stops on this? How do we solve this problem?”

“It’s not clear to me that we can solve this problem,” answered Hinton. “I believe we should put a big effort into thinking about ways to [do that].”

On the question of which kinds of regulations he thinks are needed, he answered that he’s not an expert on regulation. “I’m just a scientist who suddenly realized that these things are getting smarter than us. And I want to blow the whistle and say we should worry seriously about how we stop these things gaining control over us. And it’s going to be very hard.”

[Image: A masked hacker offering a flower]

‘Utterly unrealistic’

When questioned about measures to safeguard against the misuse of AI by bad actors or rogue nations, Hinton acknowledged that using AI for electoral manipulation, for example, poses significant challenges. When it comes to the existential threat of AI gaining dominance, however, he emphasized that all nations are in the same boat, since such an outcome would be detrimental to everyone.

Hinton explained to CNN why he didn’t sign the Open Letter endorsed by Musk and others. He feels that if people in one country decided to stop working on AI, it would be very difficult to verify that everyone in other countries was doing the same. He also wanted to be released from his responsibilities to Google before he felt he could speak freely about AI. (In a tweet, he clarified that his comments were not to be seen as an attack on Google, and that Google itself was acting responsibly when it comes to AI.)

In an interview with El País, Hinton repeated his view that the demands of the Open Letter are "utterly unrealistic." Instead, he suggested that the most reasonable course of action would be to engage several highly intelligent individuals in exploring methods to control and mitigate the risks.

Hinton stressed that several such individuals he knows share his concerns.

[Image: An old world map]

Uncharted territory

We have ventured into uncharted territory, Hinton said: until now, we have built machines that surpass our own abilities while we still retain control over them. But what, he asked, would happen if we develop machines whose intelligence is superior to ours? We have no experience dealing with such a scenario.

"This is what truly concerns me,” he said.

Providing a tentative estimate, and with limited confidence, Hinton suggested that it would likely take somewhere between five and twenty years for AI to surpass human intelligence.

El País asked if AI would eventually have its own objectives. "That’s a key question, perhaps the biggest danger surrounding this technology," Hinton replied. The question is how we can ensure that AI’s objectives, if it develops them, are advantageous for humanity. This is commonly referred to as the alignment problem.

“There’s a chance that we have no way to avoid a bad ending,” Hinton said. “But it’s also clear that we have the opportunity to prepare for this challenge.”
