Google DeepMind CEO Demis Hassabis believes that artificial general intelligence (AGI), AI matching human-level thinking, has not yet been achieved. In a recent interview, he said that despite rapid progress, a noticeable gap remains between current AI systems and human cognition.
According to Hassabis, today's AGI-like models have three key weaknesses. The first is the lack of continuous learning: most systems are trained before deployment and then remain essentially static. Ideally, an AI system should learn from its own experience in real-world environments, adapting to new conditions and tasks without additional retraining.
The second problem involves long-term planning. While modern models can form short-term strategies, they cannot yet build plans years into the future as humans do. Hassabis emphasizes that this ability plays a crucial role in complex decision-making and strategic thinking.
The third weakness is intellectual inconsistency. A system might demonstrate outstanding results in one area while making errors in elementary tasks in another.
Hassabis noted that current systems can win gold medals at the International Mathematical Olympiad and solve extremely complex problems, yet stumble on simple math problems when a question is phrased differently. A truly general intelligence should not have such capability gaps: as he pointed out, a human expert at that level of mathematics would not make errors on simple problems.
Hassabis previously stated that full-fledged AGI could emerge within 5–10 years. He co-founded DeepMind in 2010, and after Google acquired the company in 2014, it became a key research division underpinning the Google Gemini project. In 2024, Hassabis was awarded the Nobel Prize in Chemistry for his contributions to protein structure prediction technology.