A 6-month-old baby won’t even notice if a toy truck drives off a platform and seems to hang in the air. But if the same experiment is repeated two to three months later, the baby will immediately recognize that something is wrong. By then, the baby has already learned the concept of gravity.
“Nobody tells a baby that objects are supposed to fall,” said Dr. Yann LeCun, chief AI scientist at Facebook and a professor at NYU, during a webinar organized by the Association for Computing Machinery, an industry body. Because babies do not have very sophisticated motor control, LeCun hypothesizes, “a lot of what they learn about the world is through observation.”
That theory could have significant implications for researchers hoping to push the boundaries of artificial intelligence (AI) further.
A branch of AI algorithms known as “deep learning” gets the credit for jump-starting the field’s most recent revolution, and for making tremendous progress in giving machines perceptual abilities like vision. However, it has fallen short at instilling sophisticated reasoning grounded in a conceptual model of reality. In plain terms, machines have their limitations: they don’t truly understand the world around them, which limits their ability to engage with it. Researchers are working on new techniques to overcome this, for example by giving computers a kind of working memory so that, as they derive and learn basic principles and facts, they can accumulate them to draw on in future interactions.
But LeCun believes that is only a piece of the puzzle. “Obviously we are missing something,” he said. Unlike a baby, who can develop an understanding of an elephant after seeing two photos, deep-learning algorithms need to see thousands, if not millions. Similarly, a human can learn to drive safely after about 20 hours of practice and avoid crashes without ever experiencing one first. Reinforcement-learning algorithms, a subcategory of deep learning, must instead go through tens of millions of trials, including many glaring failures.
LeCun thinks the answer lies in an underrated deep-learning subcategory known as unsupervised learning. Algorithms based on supervised and reinforcement learning are taught to achieve an objective through human input; unsupervised ones, by contrast, extract patterns from data entirely on their own. LeCun prefers the term “self-supervised learning,” because these algorithms primarily use one fragment of the training data to predict the rest of it.
In recent years, such algorithms have gained significant traction in natural-language processing thanks to their ability to find relationships among billions of words and sentences. This is useful for building text-prediction systems like autocomplete, or for generating poetry. But the vast majority of AI research in other domains has focused on supervised or reinforcement learning.
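The core trick can be shown in miniature: hide part of the data and use the rest as the supervision signal, with no human-written labels at all. The tiny corpus, the bigram counts, and the `fill_blank` helper below are invented for illustration; they are a crude stand-in for what large neural language models do, not LeCun's or anyone's actual system:

```python
from collections import Counter

# Toy corpus. The "labels" are simply words hidden from the model;
# the data supervises itself.
corpus = [
    "objects fall to the ground",
    "heavy objects fall fast",
    "light objects fall slowly",
]

# "Pretraining": count which word tends to follow each word.
following = {}
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following.setdefault(prev, Counter())[nxt] += 1

def fill_blank(prev_word):
    """Predict a masked word from its left context (most frequent follower)."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

# Mask a word and ask the model to reconstruct it:
print(fill_blank("objects"))  # -> "fall"
```

The same counts could drive a toy autocomplete: rank the followers of the last typed word by frequency. Real systems replace the counting with a neural network, but the training signal is constructed the same self-supervised way.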
LeCun believes the focus should change. “Almost everything we learn as humans is learned through self-supervised learning. There’s a very thin layer we learn through supervised learning, and a small amount we learn through reinforcement learning,” he said. “If AI, or machine learning, is a cake, the vast majority of the cake is self-supervised learning.”
So how might this work in practice? Researchers should begin by focusing on temporal prediction: training large neural networks to forecast the second half of a video when given the first. Not everything in the world can be foretold, but this is the fundamental skill behind a young child’s ability to realize that a toy truck should fall. “This is kind of a simulation of what is going on in your head, if you want,” LeCun said.
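A toy numerical illustration of that idea, making no assumptions about LeCun's actual architectures: fit simple dynamics to the first half of a trajectory, extrapolate, and measure how "surprising" the second half is. The free-fall setup and the quadratic fit are invented for this sketch; real video prediction uses learned neural models rather than a polynomial:

```python
import numpy as np

# Toy "video": the height of a falling object sampled over time,
# following simple free-fall dynamics.
t = np.arange(10, dtype=float)
height = 100.0 - 0.5 * 9.8 * t**2

first_half, second_half = height[:5], height[5:]

# A crude "model": fit a quadratic to the first half and extrapolate.
coeffs = np.polyfit(t[:5], first_half, deg=2)
predicted = np.polyval(coeffs, t[5:])

# The prediction error measures how surprising the second half is.
# Here the motion obeys the fitted dynamics, so the error is near zero.
error = float(np.max(np.abs(predicted - second_half)))
print(error)
```

If the "truck" instead hovered in midair, the extrapolated trajectory would diverge sharply from the observed one, and a large prediction error would play the role of the baby's surprise.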
Once the field develops techniques that refine those abilities, they will have many significant practical uses as well. Video prediction makes sense in the context of self-driving cars, he said, because you might want to know in advance what the other vehicles on the street are going to do.
Ultimately, LeCun said, he hopes unsupervised learning will help machines develop a model of the world that can predict its future states. It is a lofty ambition that has so far eluded AI research, but one that would open up a whole new host of capabilities. LeCun is confident: “The next revolution of AI won’t be supervised.”