There’s no escaping the hype around artificial general intelligence. Barely a day passes without a new headline about the concept, which envisions computer systems outperforming humans at various cognitive tasks.
In the last month alone, a trio of tech luminaries have added fresh proclamations. Nvidia CEO Jensen Huang suggested AGI would arrive within five years. Ben Goertzel, often dubbed the “father of AGI,” forecast a mere three. Elon Musk, typically, made the boldest prediction for the tipping point: the end of 2025.
Still, not everyone is so bullish. One notable sceptic is Yann LeCun, Meta’s chief AI scientist and a winner of the prestigious Turing Award.
Often referred to as one of three “godfathers of AI,” LeCun goes as far as to argue that “there is no such thing as AGI” because “human intelligence is nowhere near general.” The Frenchman prefers to chart a path towards “human-level AI.”
At an event on Tuesday in London — Meta’s flagship engineering hub outside the US — LeCun said even that remains a distant destination.
He pointed to a quartet of cognitive challenges: reasoning, planning, persistent memory, and understanding the physical world.
“Those are four essential characteristics of human intelligence — also animal intelligence, for that matter — that current AI systems can’t do,” he said.
Without these capabilities, AI applications remain limited and error-prone. Autonomous vehicles still aren’t safe for public roads. Domestic robots struggle with basic household chores. Our smart assistants can only complete rudimentary tasks.
These intellectual shortcomings are particularly prominent in large language models (LLMs). In LeCun’s view, they’re severely restricted by their reliance on one form of human knowledge: text.
“We’re easily fooled into thinking they are intelligent because of their fluency with language, but really, their understanding of reality is very superficial,” he said.
“They’re useful, there’s no question about that. But on the path towards human-level intelligence, an LLM is basically an off-ramp, a distraction, a dead end.”
Why LLMs aren’t as smart as they seem
The likes of Meta’s LLaMA, OpenAI’s GPT-3, and Google’s Bard are trained on enormous quantities of data. According to LeCun, it would take a human around 100,000 years to read all the text ingested by a leading LLM. But that’s not our primary method of learning.
We consume far more information through our interactions with the world. LeCun estimates that a typical four-year-old has seen 50 times more data than the world’s biggest LLMs.
“Most of human knowledge is actually not language so those systems can never reach human-level intelligence — unless you change the architecture,” LeCun said.
Naturally, the 63-year-old has an alternative architecture. He calls it “objective-driven AI.”
Objectives of intelligence
Objective-driven AI systems are built to fulfil specific goals set by humans.
Rather than being raised on a diet of pure text, they learn about the physical world through sensors and training on video data.
The result is a “world model” that predicts the impact of actions. The predicted outcomes of possible actions are then stored in the system’s memory.
What would be the difference, for instance, if a chair is pushed to the left or the right of a room? By learning through experience, the end states become predictable. As a result, machines can plan the steps needed to complete various tasks.
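The idea can be illustrated with a toy sketch: a learned world model predicts the outcome of each action, and a planner searches over those imagined outcomes to reach a goal. Everything here — the one-dimensional “chair” world, the `world_model` and `plan` functions, and the action names — is an illustrative assumption for the chair example above, not LeCun’s actual architecture.

```python
# Toy sketch of objective-driven planning with a world model.
# The grid world and function names are illustrative assumptions only.

from collections import deque

def world_model(state, action):
    """Predict the next state: here, a chair's position on a 1-D line."""
    if action == "push_left":
        return state - 1
    if action == "push_right":
        return state + 1
    return state

def plan(start, goal, actions=("push_left", "push_right"), max_depth=10):
    """Breadth-first search over imagined outcomes to reach the goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        if len(steps) >= max_depth:
            continue
        for action in actions:
            nxt = world_model(state, action)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [action]))
    return None

# The chair starts at position 0; the objective is position 3.
print(plan(0, 3))  # ['push_right', 'push_right', 'push_right']
```

The planner never acts in the world while deliberating: it queries the model to simulate outcomes, which is the essence of planning with a world model (real systems would learn the model from sensor and video data rather than hand-code it).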
LeCun is quietly confident about the pay-off.
“Eventually, machines will surpass human intelligence… it’s gonna take a while though,” he said. “It’s not just around the corner — and it’s certainly not next year like our friend Elon has said.”