
Are We Closer To Artificial General Intelligence (AGI) Than We Think?


3 main points
✔️ 4 fallacies in our understanding of pure artificial intelligence.
✔️ Historical trends in AI: winters and springs.
✔️ Need to develop a concrete theory of intelligence.

Why AI is Harder Than We Think
written by Melanie Mitchell
(Submitted on 26 Apr 2021 (this version), latest version 28 Apr 2021 (v2))
Comments: Accepted by arXiv.
Subjects: Artificial Intelligence (cs.AI)


Introduction

In the past decade, we have seen tremendous improvements in the field of AI. The rise of deep learning has given new hope to the AI community, and with it countless startups and increased research interest and funding. Many AI experts, CEOs, and prominent tech authors believe that "true" AI is only a few decades away. "From 2020 you will become a permanent backseat driver," the Guardian claimed just five years ago. There are several other optimistic predictions like this: "10 million self-driving cars will be on the road by 2020" (Business Insider, 2016). However, like true AI, the problem of autonomous driving has turned out to be harder than we think.

This optimism reminds some AI experts of the two AI springs of the 1970s and 1980s. Herbert Simon famously claimed in 1965 that "Machines will be capable, within twenty years, of doing any work that a man can do." But the enthusiasm of both AI springs was short-lived: progress tapered off, research funding was severely cut, and confidence in true AI disappeared. The question to ask is: will the current AI spring last and lead to true AI, or are we headed towards another winter?

In this paper, we discuss four fallacies behind these periods of overconfidence (AI springs) and periods of stagnant progress (AI winters). Understanding these fallacies can help propel AI research in the right direction and take us closer to actual AI systems.

Fallacies in the Path to True AI

Fallacy 1: Narrow intelligence is on a continuum with general intelligence.

Whenever a computer achieves some task with a hint of intelligence, like IBM's Deep Blue beating a chess grandmaster, AlphaGo beating world champions at Go, or GPT-3 writing a college essay, it is hailed as a stepping stone to general intelligence. Intuitively, we assume that continuous improvements of this kind can take us all the way to general intelligence. However, it could equally well be the case that general intelligence is qualitatively different from the narrow intelligence of current machines, and that the route to it is completely different.

Fallacy 2: Easy things are easy and hard things are hard.

There is a widespread acceptance of the idea that things that are easy for humans should be even easier for AI. If it takes you 100 milliseconds to recognize your grandmother, an AI system should be able to do it faster, and Google's FaceNet can indeed do so. Likewise, computers have no problem playing two of the most mentally demanding games, chess and Go. However, they struggle to learn language or to walk around, which even a two-year-old toddler does with ease. Hence the opposite notion that "easy things are hard and hard things are easy." Therefore, when we describe Go as "the most challenging of domains," it is worth asking: difficult for whom?
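To make concrete why this kind of recognition is "easy" for machines, below is a minimal sketch of FaceNet-style face verification: embed two images and compare the distance between the embeddings. The get_embedding function and the threshold are placeholders for illustration, not the actual FaceNet API.

```python
# Minimal, hypothetical sketch of FaceNet-style face verification.
# A real system maps a face image to an L2-normalized embedding with a
# trained CNN and declares a match when the embedding distance is small.
import numpy as np

def get_embedding(face_image: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a trained CNN here."""
    rng = np.random.default_rng(int(face_image.sum()) % (2 ** 32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)  # embeddings are L2-normalized

def same_person(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 1.1) -> bool:
    """Verify identity by thresholding the Euclidean distance (threshold is illustrative)."""
    dist = np.linalg.norm(get_embedding(img_a) - get_embedding(img_b))
    return dist < threshold

# Usage: two dummy "images"; a deployed system would pass real face crops.
a = np.ones((160, 160, 3))
b = np.ones((160, 160, 3))
print(same_person(a, b))
```

The mechanics reduce to a distance comparison between vectors, which is why such "human-easy" perception tasks can be handled by machines once a good embedding is learned, while walking and language remain hard.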

We are ourselves unaware of the complexity of our thought processes. Billions of years of evolution have encoded the sensory and motor regions of our brain to make difficult things like walking seem very easy. In Marvin Minsky's words, “In general, we’re least aware of what our minds do best”.

Fallacy 3: The lure of wishful mnemonics

Artificial neural networks are indeed inspired by the brain. Nevertheless, the way ANNs work is vastly different from how the brain actually works. When IBM says that IBM Watson can "read and understand context and nuances in seven languages", they do not mean that the machine understands language the way a human does. In fact, Watson has no concept of a "game" or of "winning" in the way a human does. Such descriptions are aimed at laypeople, who do not understand what is going on behind the scenes. These wishful mnemonics increase the crowd's enthusiasm for a project, but they can also make us over-optimistic about what these systems can do.

Other examples are current deep learning benchmarks such as the Stanford Question Answering Dataset (SQuAD), the RACE reading comprehension dataset, and the General Language Understanding Evaluation (GLUE). NLP models have already surpassed human-level performance on these datasets, but it is important to note that the metrics used to measure the performance of these models, and the tasks themselves, are very limited. These benchmarks test only narrow question-answering, reading-comprehension, and language-understanding abilities, and models that perform well on them do not necessarily generalize to other tasks.
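To show how narrow these evaluations are, here is a rough sketch of the exact-match and token-level F1 scoring used for SQuAD-style question answering (the official evaluation script also normalizes punctuation and articles, which is omitted here). Nothing in these metrics probes understanding beyond string overlap with a reference answer.

```python
# Simplified SQuAD-style scoring: exact match and token-level F1.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the strings match after lowercasing/stripping, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall against the reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Usage: surface-level string overlap is all that is measured.
print(exact_match("herbert simon", "Herbert Simon"))              # 1.0
print(token_f1("the economist herbert simon", "Herbert Simon"))   # ~0.67
```

A model can score at or above "human level" on such metrics while having no robust grasp of the text, which is exactly why surpassing these benchmarks should not be read as "understanding language".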

Fallacy 4: Intelligence is all in the brain.

The information-processing model of the mind views the mind as an information-processing system, just like a computer, with input, output, storage, and processing components. It assumes that cognition takes place solely in the physical brain. The symbolic approach to AI, and even recent neural networks, treat the brain as an information-processing system. This assumption has led to the belief that scaling up machines to match the computational capacity of the brain, combined with the right software, can result in human-level performance. According to deep learning pioneer Geoffrey Hinton, an ANN with trillions of connections, like the brain, could in principle be able to understand documents.
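As a back-of-the-envelope illustration of this "just scale it up" argument, the sketch below counts the connections (weights) in a small fully connected network and compares published parameter counts with the commonly cited order-of-magnitude estimate of 10^14 synapses in the human brain. The layer sizes are arbitrary and purely illustrative.

```python
# Rough comparison of ANN connection counts with an estimate of brain synapses.
def count_connections(layer_sizes):
    """Number of weights in a fully connected net (biases ignored)."""
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

toy_net = count_connections([784, 512, 512, 10])  # a small MNIST-sized MLP
gpt3_params = 175e9                               # published figure for GPT-3 (~1.75e11)
brain_synapses = 1e14                             # commonly cited rough estimate

print(f"toy MLP connections:   {toy_net:,}")
print(f"GPT-3 parameters:      {gpt3_params:.2e}")
print(f"brain synapses (est.): {brain_synapses:.0e}")
print(f"brain / GPT-3 ratio:   {brain_synapses / gpt3_params:.0f}x")
```

The arithmetic shows how large the remaining gap is, but the fallacy lies deeper: even closing that numerical gap would not settle whether raw connection count, rather than embodiment and interaction with the world, is what produces intelligence.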

"Pure intelligence" systems are generally thought to be devoid of emotions, irrationality, hunger, and need to rest, while at the same time they are able to conserve qualities like accuracy, speed, and programmability. Such a perfectly rational model of AI naturally makes us think of the existential threats they pose to humanity. Ex: A super-intelligent system given the objective to reduce global warming could decide to end humanity.

However, many cognitive scientists argue for the centrality of the body in all cognitive activities, a view called embodied cognition. Our thoughts and intelligence are deeply intertwined with perception, action, and emotion. Many of our abstract concepts are rooted in interactions with the physical world. Is it right to separate intelligence and think of it independently? Don't emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world play an essential role in human intelligence?

Summary

AI researcher Terry Winograd said in 1977 that the state of AI was comparable to the medieval practice of alchemy. Sadly, that is still true. Like alchemists, who mixed various substances hoping for valuable results, much of the current work in AI proceeds by trial and error. These experiments will surely contribute to the final goal, but we need better methods to assess the state of AI and should stop fooling ourselves with misleading interpretations of our achievements. We lack a concrete theory of intelligence, and such a theory is indispensable for achieving true AI.

Thapa Samrat
I am a second-year international student from Nepal, currently studying in the Department of Electronic and Information Engineering at Osaka University. I am interested in machine learning and deep learning, so I write articles about them in my spare time.
