Closing the Gap: Why AI Lags Behind Human-Like Understanding and Learning
Machines excel at complex tasks such as calculation and chess, yet they struggle with simple tasks that seem effortless to humans. This reversal is known as Moravec's Paradox: a 10-year-old can pick up household chores quickly, but AI systems still cannot perform them reliably. We have self-driving cars, yet none are fully autonomous.
People learn from experience using far less data than AI systems require. Large language models (LLMs) need enormous corpora to learn: an LLM takes in more text during training than a human could read in thousands of years. Despite this, it still lacks the understanding and adaptability of a human child.
Humans, by contrast, process every waking moment of life as data. By age four, a child has been awake for roughly 16,000 hours; for scale, that is about as much video as gets uploaded to YouTube in 30 minutes. The sensory stream into a child's brain is massive and continuous, and the optic nerve alone carries data at an estimated rate of around a megabyte per second per eye.
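A quick back-of-the-envelope sketch makes that scale concrete. The numbers below are rough assumptions rather than measurements: 16,000 waking hours by age four (from above), and an optic-nerve throughput of about 1 MB per second per eye, a commonly cited order-of-magnitude estimate.

```python
# Back-of-the-envelope estimate of the raw visual data a child receives by age four.
# Every constant here is an order-of-magnitude assumption, not a measurement.

WAKING_HOURS = 16_000           # approximate waking hours by age four
SECONDS_PER_HOUR = 3_600
BYTES_PER_SEC_PER_EYE = 1e6     # assumed optic-nerve throughput: ~1 MB/s per eye
EYES = 2

visual_bytes = WAKING_HOURS * SECONDS_PER_HOUR * BYTES_PER_SEC_PER_EYE * EYES
print(f"Visual input by age four: ~{visual_bytes:.1e} bytes "
      f"(~{visual_bytes / 1e12:.0f} TB)")
# -> roughly 1.2e14 bytes, on the order of 100 TB of raw visual signal
```

Under these assumptions, vision alone delivers on the order of a hundred terabytes of raw signal in four years, and that ignores hearing, touch, and motor feedback.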
Frontier LLMs, meanwhile, are trained on around 20 trillion tokens, where a token is roughly three-quarters of a word. That is comparable to reading essentially all the text publicly available on the internet. Yet even at this scale, these systems do not reach human-like understanding.
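The same kind of rough arithmetic shows what 20 trillion tokens means in human terms. The token count and the 0.75 words-per-token ratio come from the text above; the reading speed of 250 words per minute is an assumed adult average.

```python
# Rough estimate of how long a human would need to read an LLM's training corpus.
# The reading speed is an assumption; the token figures are from the text above.

TRAINING_TOKENS = 20e12    # ~20 trillion tokens
WORDS_PER_TOKEN = 0.75     # a token is about three-quarters of a word
READING_WPM = 250          # assumed average adult reading speed

total_words = TRAINING_TOKENS * WORDS_PER_TOKEN
minutes = total_words / READING_WPM
years = minutes / (60 * 24 * 365)
print(f"Nonstop reading time: ~{years:,.0f} years")
# -> ~114,155 years of reading 24 hours a day, every day
```

Even reading around the clock, a person would need on the order of a hundred thousand years to get through such a corpus.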
Current AI systems also depend on structured, curated learning environments; they lack the broad, contextual experience humans gain naturally. Animals and people learn through everyday interaction with the world, and that kind of learning remains beyond current AI capabilities.
For AI to become more human-like, new approaches are needed. Training must incorporate richer, more varied data: not just more text, but diverse sensory experience closer to a child's daily life. That breadth is what would let machines develop a more comprehensive world model.
The path to human-level AI is complex. It requires understanding how humans gather and use information, and it demands that AI developers explore how to make machines learn more like living beings. Until then, the gap between AI intelligence and human common sense will remain.