SingularityNET

As we approach 2029, the year Ray Kurzweil predicted for the arrival of AGI, few executives and researchers across Silicon Valley and beyond still claim that the pre-training scaling paradigm will get us there.

During a recent interview, Demis Hassabis, CEO of Google DeepMind, outlined what he sees as the core challenges facing current AI systems. According to Hassabis, today's models lack "true out-of-the-box invention and thinking"; he distinguished between solving existing mathematical problems and posing entirely new conjectures, such as the Riemann hypothesis.

He also pointed to a critical consistency problem: even average users can easily identify flaws in systems that should theoretically surpass human capability. "There's a sort of capabilities gap, and there is a consistency gap before we get to what I would consider AGI," he said.

François Chollet, co-founder of the ARC Prize, has identified what he believes are fundamental architectural limitations in current models. During his talk last month at AI Startup School, Chollet argued that deep learning models are "missing compositional generalization" and lack the ability to perform "on-the-fly recombination" of learned concepts.

He noted that even after a 50,000x scale-up in model size, performance on fluid-intelligence tasks (those requiring the ability to make sense of something never seen before, on the fly) barely improved, moving from 0% to roughly 10% accuracy. According to Chollet, gradient descent requires "three to four orders of magnitude more data than what humans need" to distill simple abstractions.

While current LLMs will undoubtedly drive significant economic and social transformation, their cognitive limitations highlight the need for fundamentally different approaches to achieve true AGI. Our Chief AGI Officer, Dr. Alexey Potapov, argues that "limitations of LLMs are now becoming generally clear, alongside their impressive strengths," but believes the solution lies in treating them as specialized components rather than central controllers.

This approach aligns with our understanding of human cognition: the brain functions through several hundred distinct subnetworks, each performing specific functions while cooperating with others. Just as transformer networks don't closely correspond to any particular biological brain network, the path to AGI may require treating LLMs as one component among many, subordinate to executive-control systems and coupled with other networks operating on different principles.
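The architecture described above can be illustrated with a toy sketch: an executive controller routes tasks to specialized subordinate modules, one of which stands in for an LLM. All names here (`Executive`, `llm_module`, `symbolic_module`) are hypothetical illustrations, not part of any real OpenCog Hyperon or LLM API.

```python
# Toy sketch of a multi-modular design: the LLM is one module among
# several, subordinate to an executive controller, rather than the
# central controller itself. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Task:
    kind: str      # e.g. "language", "arithmetic"
    payload: str


def llm_module(payload: str) -> str:
    # Stand-in for a neural language model: flexible pattern-based text handling.
    return f"[llm] paraphrase of: {payload}"


def symbolic_module(payload: str) -> str:
    # Stand-in for an exact symbolic reasoner, operating on different
    # principles than the LLM (eval() is safe here only because the
    # input is a fixed toy expression).
    return f"[symbolic] {payload} = {eval(payload)}"


class Executive:
    """Executive-control layer: decides which subnetwork handles each task."""

    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[str], str]] = {
            "language": llm_module,
            "arithmetic": symbolic_module,
        }

    def dispatch(self, task: Task) -> str:
        handler = self.modules.get(task.kind)
        if handler is None:
            return f"[executive] no module for task kind: {task.kind}"
        return handler(task.payload)


exec_ctrl = Executive()
print(exec_ctrl.dispatch(Task("arithmetic", "6 * 7")))    # exact symbolic answer
print(exec_ctrl.dispatch(Task("language", "hello world")))
```

The point of the sketch is the routing decision: exact computation goes to a module built for exactness, while open-ended language goes to the pattern-based module, with neither one in overall charge.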

This multi-modular perspective, together with decades of careful study of human cognitive psychology, informs our R&D on OpenCog Hyperon, which integrates deep neural networks such as LLMs, evolutionary program learning, probabilistic programming, and other methods into a common architecture. The goal, as our CEO, Dr. Ben Goertzel, envisions it, is a system that can handle real-world complexity and "invent and create and build and communicate autonomously and creatively, based on its own values and inclinations."
