
Posts

Showing posts from May, 2023

Choose your Champion! Task-Specific vs. General Models

Should AI models be like Swiss Army knives, versatile and handy in a variety of scenarios? Or do we prefer them as precision tools, finely tuned for specific tasks? In the world of artificial intelligence, and natural language processing specifically, this is an ongoing debate. The question boils down to whether models trained for specific tasks are more effective at those tasks than general models.

Task-specific models: specialization and customization

In my last blog post, we looked at the rise of personalized LLMs, customized for specific users. Personalized LLMs can be seen as an extreme form of task-specific model. Fans of task-specific models stress that these kinds of models are better suited for tasks involving confidential or proprietary data. This is obviously true. But some people also believe that specialized models necessarily perform better in their specific domains. It may sound logical, but the ans...

Neuralink and Transhumanism: Dreams of Immortality

“I wanna be software / The best design / Infinite princess / Computer mind” – Recent tweet by Grimes (singer and on/off girlfriend of Elon Musk)

A lot has been made of the FDA’s recent go-ahead for human clinical trials for Elon Musk’s Neuralink. This marks a significant achievement for Musk’s brain-implant startup. Neuralink, founded in 2016, has the potential to greatly improve the quality of life of individuals with disabilities. However, Musk also envisions it being used to enhance the cognitive abilities of “anyone who wants” in the medium term. A network of ultra-thin electrodes implanted in the brain will allow for seamless information exchange between humans and machines. (Physicist Stephen Hawking, who lost his ability to speak and ultimately died from ALS in 2018, believed that brain-computer interfaces would be the future of communication.) Eventually, however, Musk sees Neuralink offering a pathway to ...

Awaiting the Shoggoth: Why AI Emergence is Uncertain – for Now

“It is absolutely necessary, for the peace and safety of mankind, that some of earth’s dark, dead corners and unplumbed depths be let alone; lest sleeping abnormalities wake to resurgent life, and blasphemously surviving nightmares squirm and splash out of their black lairs to newer and wider conquests.” ― H.P. Lovecraft, At the Mountains of Madness

Horror fans might be familiar with author H.P. Lovecraft’s fictional “shoggoths”, the shape-shifting and amorphous entities that he wrote about in his Cthulhu Mythos. In the context of AI emergence, the term “shoggoth” is sometimes used to refer to a possible futuristic advanced form of artificial intelligence. It highlights the idea of an AI system that can rapidly learn, evolve, and assimilate new information and skills, much like how Lovecraft’s shoggoths can change their forms and abilities. Much has been made of so-called emergent abilities in AI....

'Godfather of AI' Speaks Out: Hinton's Concerns on AI Safety

“And I heard the dude, the old dude that created AI saying, ‘This is not safe, 'cause the AIs got their own minds, and these motherf*ckers gonna start doing their own s**t.’ I'm like, are we in a f***ing movie right now, or what? The f**k, man?” - Snoop Dogg

The rise of Artificial Intelligence, particularly ChatGPT, has been sparking intense discussions. Specifically, over the last few months, more and more concerns have been raised about the creation of AI tools. This culminated in an Open Letter in late March, calling for a halt to AI experiments. This letter was supported by, among others, Elon Musk and Steve Wozniak.

Enter the Hinton

Adding to this chorus, Geoffrey Hinton, a prominent figure in the development of AI, has recently resigned from his position at Google. Hinton, widely known as the “Godfather of AI”, announced his departure from the tech giant in an interview with the New York Times. While t...

Of Leaks and Llamas: The Great Open/Closed Debate

On 4 May, a purported leaked document from Google appeared online. The document, titled “We have no moat, and neither does OpenAI”, seems to be an admission that the big companies working on AI are unable to keep competing with open-source researchers. This document, and admission, created quite a stir. To understand why, we need to take a step back...

Tale as Old as ...

The question of whether AI research should be open source has long been a hot topic of debate in the AI community. On the one hand, proponents of open source argue that making AI research openly available to the public will encourage collaboration and innovation, ultimately leading to the development of better technologies. Open source allows for transparency and accountability. This is particularly important in areas such as healthcare, where the consequences of AI errors could be catastrophic. There are also concerns that closed AI research ...

Yudkowsky's Warning: When Intelligence Becomes a Threat

“Well, first of all I will say that there is some chance of that” – Sam Altman, CEO of OpenAI, when asked about the possibility that a superhuman AI will annihilate all humans

On 7 February of this year, Microsoft had just announced a new version of Bing powered by OpenAI's GPT-4. In an interview with The Verge, Satya Nadella, Microsoft's CEO, predicted that the development would make Google “come out and show that they can dance” (with their own AI technology). He added, “I want people to know that we made them dance.” My mind goes to the old question that fascinated me as a child: How many angels can dance on the head of a pin? I should explain that I am imagining, here, angels (quantity unknown) dancing not on a pin this time, but a paperclip. Gather round. Storytime.

Eliezer Yudkowsky and the Paperclip Maximizer

Twenty years ago, Eliezer Shlomo Yudkowsky, an unknown young starry-eyed dancer from Chi...