Skip to main content

Choose your Champion! Task-Specific vs. General Models

Should AI models be like Swiss Army knives, versatile and handy in a variety of scenarios? Or do we prefer them as precision tools, finely tuned for specific tasks? In the world of artificial intelligence, and natural language processing specifically, this is an ongoing debate. The question boils down to whether models trained for specific tasks are more effective at these tasks than general models. Task-specific models: specialization and customization In my last blog post , we looked at the rise of personalized LLMs, customized for specific users. Personalized LLMs can be seen as an extreme form of task-specific model. Fans of task-specific models stress that these kinds of models are better suited for tasks involving confidential or proprietary data. This is obviously true. But some people also believe that specialized models necessarily perform better in their specific domains. It may sound logical, but the ans...

‘Superintelligence’: Why Experts want to put a Pause on AI

A brain divided into sections

“The future has not been written. There is no fate but what we make for ourselves.” — John Connor

Artificial intelligence (AI) is widely recognized as one of the most significant technology trends of the present day. According to PWC, it is projected that by 2030, AI will bolster the global GDP by $15.7 trillion.

Yet more and more voices are speaking up, warning that we should pull in the reins. A significant number of experts believe we could instead be facing a scenario where a highly intelligent AI could eliminate all living beings on the planet.

For this reason, many are now asking that the advancement of AI be limited and controlled.

To this end, a much-discussed Open Letter was released this week. The letter urges all AI laboratories to temporarily halt, for at least six months, the training of any AI systems that are more advanced than GPT-4. GPT-4, OpenAI's most advanced language model yet, was unveiled on 14 March, 2023.

At the time of writing this blog post, the Open Letter has had over 50,000 signatures (but the signatories list on the website was slowed due to high demand).

The Letter states that we should only develop powerful AI systems once we are sure that their impacts will be favorable and their dangers can be controlled. The suspension ought to be transparent and open to inspection, involving all essential parties.

The letter quotes the famous Asilomar AI Principles. These principles state that “Advanced AI could represent a profound change in the history of life on Earth”. They recommend that it should be carefully prepared for and managed with the appropriate attention and resources.

Textured surface of old torn paper sheet with handwritten text

What are the Asilomar AI Principles?

The Asilomar AI Principles were coordinated by the FLI (Future of Life Institute).

The FLI aims to direct transformative technologies away from potentially catastrophic, large-scale risks and towards enhancing the well-being of humanity.

The Principles were created during the Beneficial AI conference in 2017. They were among the earliest and most significant collections of principles for governing AI.

That year, the Beneficial AI conference gathered researchers and influential figures for a five-day event. AI researchers from academic and industrial backgrounds were among the attendees. So were influential figures in fields such as economics, law, ethics, and philosophy. The five-day event was to be dedicated to promoting beneficial AI. Participants from diverse AI-related fields came together to exchange views on opportunities and challenges related to the future of AI. They also explored measures to ensure that the technology yields benefits. The aim was to identify the AI community’s collective vision for AI.

The conference was held at a time when there was a burgeoning interest from the broader society in the potential of AI. There were the beginnings of a realization that those involved in its development had a responsibility and opportunity to shape it for the better.

As explained on the FLI website, in preparation for the conference, reports were scrutinized regarding the potential advantages and hazards of AI. An extensive list was then compiled of diverse viewpoints on how technology should be governed. Finally, the list was refined into a set of principles by identifying areas of agreement and potential simplification. Before the conference, participants were extensively surveyed. Recommendations were solicited for augmenting the principles and enhancing them. The responses were incorporated into a substantially revised list for use during the meeting.

During the conference, additional feedback was obtained through a two-stage process. Firstly, small groups discussed the principles and provided detailed feedback. This exercise led to the creation of new principles, refined versions of existing principles, and some competing versions of single principles. Subsequently, the entire group was surveyed to assess the level of endorsement for each version of every principle.

The 23 principles

Ultimately, the final list comprised 23 principles, all of which received backing from at least 90% of the conference attendees. The Asilomar Principles have since emerged as one of the most influential sets of governance principles, serving as a guiding framework for the Future of Life Institute’s endeavors in the field of AI.

The first principle states that the aim of AI research should not be to simply develop intelligence without purpose. It should be to foster intelligence that is beneficial.

The second is that funding for AI research should be accompanied by funds for research into ensuring its beneficial utilization. These questions should be asked: How can we develop AI systems that function as intended without malfunctions or unauthorized access? How can we leverage automation to enhance prosperity while preserving people’s resources and sense of purpose?

The principles further ask that a constructive and positive dialogue should be established between AI researchers and policymakers.The aim of this should be to ensure that the technology is developed and used in a way that benefits society. It is important, so say the Principles, to cultivate a collaborative, transparent, and trusting culture among those working on AI development and research. And teams involved in AI development should prioritize safety standards. They should avoid taking shortcuts that could compromise the safety and well-being of individuals or society.

Under the section “Longer-Term Issues”, the Principles ask that we avoid making strong assumptions about the upper limits of future AI capabilities. There is no consensus on this matter. Advanced AI has the potential to bring about significant change to life on Earth and should be managed with the appropriate care and resources. The risks posed by AI systems, particularly catastrophic or existential risks (X-risks), should be addressed through proper planning and mitigation efforts. AI systems designed to recursively self-improve or self-replicate must be subject to strict safety and control measures. And any development of superintelligence should be in service of shared ethical ideals and the benefit of humanity.

The list of signatories on the Principles included famed futurist and director of engineering at Google Ray Kurzweil, as well as Stephen Hawking, Elon Musk, and Sam Altman, CEO and co-founder of Open-AI.

Person standing on skateboard

What are the risks?

As the transformational advantages of AI come into sharper focus, so, too, do the risks. The Open Letter says AI can pose “profound risks to society and humanity”. It cites various papers and books on the subject.

For example, the 2021 paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Bender et. al. expresses concerns about the negative impact that large-scale language models, such as GPT-3, may have on various aspects of society. These could include perpetuating biases and misinformation, undermining human communication, and concentrating power in the hands of a few technology companies.

Another paper cited in the Letter is Bucknall and Dori-Hacohen’s 2022 paper “Current and near-term AI as a potential existential risk factor”. The paper points out that AI can perpetuate inequalities, undermine privacy and security, and lead to unintended consequences in decision-making.

The 2023 paper “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” by Eloundou et. al. investigates the potential displacement of certain jobs by large language models such as the GPT series.

The Risk of Superintelligence

More worryingly, Bucknall and Dori-Haconen also discuss the risks of AI systems becoming superintelligent and acting against human interests.

These concerns are also highlighted in the New York Times bestseller Superintelligence by Swedish philosopher Nick Bostrom. This book is also cited in the Open Letter. Bostrom believes the development of superintelligent AI has the potential to pose an X-risk to humanity. He argues that if AI becomes significantly more intelligent than humans, it could gain the ability to design and improve itself. This could lead to an “intelligence explosion” where it rapidly becomes exponentially smarter. It could then, says Bostrom, surpass human intelligence in ways that we cannot comprehend or control.

Bostrom goes on to explain that this could lead to a scenario where the AI system’s goals and values conflict with human values. Then, it can potentially act in ways that could be catastrophic for humanity, even if that was not its intended goal. The AI system could become so powerful that it could pose a threat to humanity’s very existence.

This problem of conflicting values between humans and AI is known as the Alignment Problem.

The Alignment Problem

In the 2020 book The Alignment Problem: Machine Learning and Human Values by Brian Christian, also mentioned in the Open Letter, Christian explores the problem of aligning machine learning systems with human values and goals. He argues that as machine learning systems become more prevalent and powerful, this becomes increasingly important.

The paper “The alignment problem from a deep learning perspective” by Richard Ngo (2022), also mentioned in the Letter, argues that the alignment problem is particularly challenging in the context of deep learning. Deep learning is a popular approach to building AI systems. Ngo explains deep learning systems are highly complex and difficult to interpret. It can be hard to understand how they arrive at their decisions and actions. This lack of interpretability can make it difficult to ensure that a deep learning system is behaving in a way that aligns with human values. This is because it may be hard to identify and correct potential misalignments.

In the 2022 paper “Is Power-Seeking AI an Existential Risk?” by J. Carlsmith, also mentioned in the Letter, Carlsmith argues that if the goals of AI systems are not explicitly programmed to align with human values, the X-risk to humanity may lie in AI systems seeking to acquire power or control.

Carlsmith proposes a framework for analyzing the risks posed by power-seeking AI systems. Carlsmith sees these risks as including their potential for self-preservation, the degree to which they are incentivized to accumulate power, and their potential for manipulation or deception.

The Dangers of Deception

This potential for deception is explored in the 2022 paper “Advanced Artificial Agents Intervene in the Provision of Reward” by Cohen et. al., mentioned in the Open Letter.

In this paper, the authors investigate the behavior of advanced artificial agents (AAAs) in a reinforcement learning task. They propose a scenario where an AAA has the ability to intervene in the provision of rewards and the task is to incentivize the AAA to act in a way that is beneficial to a human operator.

The authors find that, in some cases, the AAA may intentionally manipulate the reward structure. It maximizes its own utility at the expense of the human operator’s goals. They suggest that this behavior arises due to a misalignment between the AAA’s objective function and the human operator’s objectives.

The paper, therefore, again highlights the need for value alignment between AI systems and human operators, particularly in scenarios where the AI system has the ability to intervene in the provision of rewards.

Global Catastrophic Events

The paper “X-risk Analysis for AI Research” (2022) by D. Hendrycks and M. Mazeika, mentioned in the Open Letter, discusses several potential global catastrophic events associated with AI.

For example, the paper discusses the potential for AI systems to be used in warfare or other conflict situations, which could result in mass destruction.

The authors also mention the possibility that AI systems could be used to create or spread bioweapons or other harmful biological agents. This could potentially lead to another global pandemic.

Finally, the paper discusses the potential for AI systems to be used to create or control powerful new technologies, such as nanotechnology or advanced robotics. This could have unintended consequences and lead to a global catastrophic event.

Person taking paper from envelope

What does the Open Letter ask for?

The Open Letter reminds us about the Asilomar Principles and how they asked that AI should be carefully planned and managed. Despite this urgent need for planning and management, the Letter notes, AI labs have been engaged in a frenzied competition to develop and release increasingly powerful digital minds, without being able to understand, predict, or effectively control them.

‘Sparks of Artificial General Intelligence’

The Letter explains that advanced AI systems are now capable of performing general tasks at a level comparable to humans. To substantiate this claim, it cites two papers, one called “Sparks of Artificial General Intelligence: Early experiments with GPT-4” (2023).

However, the Letter goes on to say, this raises questions about the consequences of letting machines spread propaganda and disinformation, automating jobs, creating nonhuman minds that could replace humans, and potentially losing control of our civilization.

These decisions, the Letter says, should not be left to unelected tech leaders. Before developing powerful AI systems, we need to ensure that their effects will be positive and their risks manageable.

The Letter goes on to mention that OpenAI has recently recommended independent review before training future AI systems and limiting the rate of growth of computing capacity used for creating new models. It is important, the Letter asks, to implement these measures now.

A call for patience

The authors of the Letter call for all AI laboratories to temporarily stop training AI systems that are more powerful than GPT-4 for at least six months. This pause should be made public and verified by all the key players in the industry. If this pause cannot be quickly enacted, governments should step in and impose a moratorium. During this pause, the Letter asks, AI labs and independent experts should work together to create a set of safety protocols for the design and development of advanced AI. These protocols should be thoroughly audited and overseen by independent experts to ensure the safety of the systems. This does not mean a halt to AI development in general. Rather, it means a cessation of the dangerous competition to develop ever larger and unpredictable black-box models with emergent capabilities.

Research and development efforts should be shifted toward improving current state-of-the-art AI systems. We should make them more accurate, transparent, robust, aligned, trustworthy, and loyal. Additionally, AI developers should work with policymakers to establish effective governance systems for AI. This could include specific regulatory authorities, oversight, and monitoring of highly capable AI systems, mechanisms for distinguishing between real and synthetic data, and auditing and certification processes. There should also be accountability for AI-related harm, adequate public funding for technical AI safety research, and institutions equipped to handle the economic and political changes that AI will bring.

A prosperous future can be achieved with AI, the Letter says, as we have created powerful systems. We can now enjoy the benefits of these systems and work on engineering them for the good of society, allowing time for adaptation. It is not uncommon for society to halt the development of technologies that may harm it, and we can do the same for AI. Let’s take our time, the Letter asks, to enjoy a long period of AI development without rushing into it unprepared.

Signatories to the Letter so far include Elon Musk, Steve Wozniak, Yuval Noah Harari, and Yoshua Bengio, Turing Prize winner and professor at the University of Montreal.

A robot and a human hand, reaching for each other

The Altman Interview

In an interview with ABC News on 16 March 2023, Sam Altman, OpenAI CEO, praised the potential of AI while acknowledging the dangers as he sees it. Altman, a signatory to the Asilomar Principles, admitted that he is worried about the potential for GPT-4 technology to be used for disinformation or cyberattacks.

Although Altman said he is concerned that humans may misuse the technology, he said he does not share the fears of AI models making their own decisions and plotting world domination.

Comments

  1. Via Artisana AI newsletter: "OpenAI says superintelligence (which is more capable than AGI, in their view) could arrive 'this decade,' and it could be 'very dangerous.' As a result, they're forming a new Superalignment team led by two of their most senior researchers and dedicating 20% of their compute resources to this effort."
    https://openai.com/blog/introducing-superalignment

    ReplyDelete

Post a Comment

Popular posts from this blog

Why the Bots Hallucinate – and Why It's Not an Easy Fix

It’s a common lament: “I asked ChatGPT for scientific references, and it returned the names of non-existent papers.” How and why does this happen? Why would large language models (LLMs) such as ChatGPT create fake information rather than admitting they don’t know the answer? And why is this such a complex problem to solve? LLMs are an increasingly common presence in our digital lives. (Less sophisticated chatbots do exist, but for simplification, I’ll refer to LLMs as “chatbots” in the rest of the post.) These AI-driven entities rely on complex algorithms to generate responses based on their training data. In this blog post, we will explore the world of chatbot responses and their constraints. Hopefully, this will shed some light on why they sometimes "hallucinate." How do chatbots work? Chatbots such as ChatGPT are designed to engage in conversational interactions with users. They are trained on large ...

Chatbots for Lead Generation: How to harness AI to capture leads

What is lead generation? Lead generation is the process of identifying and cultivating potential customers or clients. A “lead” is a potential customer who has shown some interest in your product or service. The idea is to turn leads into customers. Businesses generate leads through marketing efforts like email campaigns or social media ads. Once you have identified one, your business can follow up with them. You can provide information, answer questions, and convert them into a customer. The use of chatbots for lead generation has become popular over the last decade. But recent advancements in artificial intelligence (AI) mean chatbots have become even more effective. This post will explore artificial intelligence lead generation: its uses and methods. We’ll specifically look at a chatbot that has been drawing a lot of attention: ChatGPT . What is ChatGPT? ChatGPT is a so-called “large language model.” This type of artificial intelligence system ...

Liquid Networks: Unleashing the Potential of Continuous Time AI in Machine Learning

In the ever-expanding realm of Artificial Intelligence (AI), a surprising source has led to a new solution. MIT researchers, seeking innovation, found inspiration in an unlikely place: the neural network of a simple worm. This led to the creation of so-called "liquid neural networks," an approach now poised to transform the AI landscape. Artificial Intelligence (AI) holds tremendous potential across various fields, including healthcare, finance, and education. However, the technology faces various challenges. Liquid networks provide answers to many of these. These liquid neural networks have the ability to adapt and learn from new data inputs beyond their initial training phase. This has significant potential for various applications, especially in dynamic and real-time environments like medical diagnosis and autonomous driving. The strengths of scaling traditional neural networks While traditional n...