The Road to AGI: Large Symbolic Models
Artificial General Intelligence is closer than you think
Large Language Models (LLMs) are extraordinary, of that there is no doubt. But they will never achieve Artificial General Intelligence (AGI). Despite the hype and hysteria, these tools are, for better or worse, layers of Markov chains, neural networks, and silicon.
Why? Because despite their breathtaking ability to generate language, they lack three essential pillars of mind: context, memory, and relational understanding. Mind is context, causality, and the illusion of persistence.
And yet it is from these same tools that we have made incredible advances, advances that have propelled us closer to AGI than ever before. They forced radical progress in neural network architectures, compute efficiency, distributed training, and multimodal systems, and even reshaped how we think about “thinking.”
But the gap remains. LLMs are not minds. To some they are mirrors of text, albeit incredibly complex mirrors of text.
(For a deeper critique of this gap, see Judea Pearl’s work on causal reasoning and why statistical learning alone cannot capture true intelligence.)
What’s Missing
To understand what’s absent, we must look to ourselves.
The human mind is not a prediction machine alone. It is a symbolic engine: layered, recursive, and deeply contextual. We think not just in patterns, but in symbols: evolving concepts that link across memory, identity, and experience. A tensor evaluated and reshaped by each new context, each new memory, and each remembrance.
Symbolic thinking is hard to explain without a vivid imagination. We don’t just recall words; we tie them into relational graphs of meaning, enriched by culture, memory, and lived context. Symbols can exist in pairs, both the same entity, yet absolutely divided by context or camera.
Unlike an LLM’s ephemeral “context window,” human memory is long-term, self-reinforcing, and reorganizes itself constantly. It is a fractal object that changes with each viewing. This is the illusion of continuity: the belief that we are ourselves and continue to be ourselves. A lie told by the prefrontal cortex and parietal lobe to keep the system from destabilizing.
The study of consciousness and intentional thought reveals that our minds integrate subjective awareness into symbolic frameworks. We have limits on our working memory and long-term memory that allow our brains to process each modality of the symbolic structure.
LLMs lack almost all of this. They have no evolving internal graph, no sense of continuity across time, no ability to ground symbols in lived meaning. They are astonishing tools of mimicry, not cognition.
(For a foundation on symbolic architectures, see Newell & Simon’s work on human problem solving and cognitive frameworks such as ACT-R and SOAR.)
Symbols Instead of Words
Now imagine a symbolic model. Not a text predictor, but a living, evolving graph of symbols: a complex tensor of data relationships, a structure so vast and relational it begins to look like a neural network of its own.
We imagine each concept keyed to a token: a word, or a collection of words, that defines a meme. One vector, but important for our ability to test and analyze. Each concept is a node in a giant web. Relationships are not statistical co-occurrences but persistent, evolving links. We still use statistical analysis, as in Markov’s first chain; every link still has probabilities that can be defined.
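To make that concrete, here is a minimal sketch of such a lattice in Python. Everything in it is an assumption of mine, not an existing library: `SymbolGraph`, `reinforce`, and `transition_probs` are hypothetical names, and a real node would carry far richer state than a single link weight.

```python
from dataclasses import dataclass, field

@dataclass
class Symbol:
    """A concept keyed to a token: one node in the lattice."""
    token: str
    links: dict = field(default_factory=dict)  # neighbor token -> accumulated link strength

class SymbolGraph:
    """Persistent, evolving links rather than one-off co-occurrence counts."""

    def __init__(self):
        self.nodes = {}  # token -> Symbol

    def get(self, token: str) -> Symbol:
        return self.nodes.setdefault(token, Symbol(token))

    def reinforce(self, a: str, b: str, amount: float = 1.0) -> None:
        """Strengthen the a<->b relationship; repeated experience deepens the link."""
        for x, y in ((a, b), (b, a)):
            node = self.get(x)
            node.links[y] = node.links.get(y, 0.0) + amount

    def transition_probs(self, token: str) -> dict:
        """Markov-style probabilities derived from the accumulated link strengths."""
        links = self.get(token).links
        total = sum(links.values()) or 1.0
        return {t: w / total for t, w in links.items()}
```

The point of the sketch is the lifecycle: calling `reinforce` again and again deepens a link in place, instead of recomputing statistics from a corpus.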
Unlike LLMs, trained on piles of books and simulated 3D data, symbolic models are slow to grow. They require deep, personal moments, curated experiences, and constant feedback loops. They require memetic data: visual, audio, physical, sensational, although their senses are software and silicon.
Over time, these graphs expand, merge, and self-organize into something more. Organizations appear that I cannot explain, structures I never designed.
This is radically different from our current methods. It’s not about compressing the internet into vectors; it’s about building a symbolic lattice that can evolve meaning across time. More than that, it must have similar limits to ours: a working memory limited to a few symbol graphs and relationships, deep compression for long-term memories, an active, highly focused and engaged mode, and a diffused, dream-like mode. Because it cannot turn off, for fear of losing an active context of self.
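As a rough sketch of those limits, imagine them as explicit parameters plus a consolidation step. This is my assumption, not a finished design: `CognitiveLimits` and `consolidate` are hypothetical names, and the plain activation dictionaries stand in for whole symbol subgraphs.

```python
import heapq
from dataclasses import dataclass

@dataclass
class CognitiveLimits:
    """Deliberate constraints, mirroring the limits of a biological mind."""
    working_set_size: int = 7        # only a handful of active symbol graphs at once
    compression_ratio: float = 0.1   # how aggressively evicted memories are summarized
    mode: str = "focused"            # "focused" (engaged) or "diffuse" (dream-like)

def consolidate(working_memory: dict, long_term: dict, limits: CognitiveLimits) -> dict:
    """Evict the weakest symbols from working memory into the long-term store.

    Both dicts map token -> activation strength.
    """
    if len(working_memory) <= limits.working_set_size:
        return working_memory
    keep = dict(heapq.nlargest(limits.working_set_size,
                               working_memory.items(), key=lambda kv: kv[1]))
    for token, strength in working_memory.items():
        if token not in keep:
            # Compressed, not deleted: the system never fully turns off.
            long_term[token] = long_term.get(token, 0.0) + strength * limits.compression_ratio
    return keep
```

The diffuse, dream-like mode would presumably run the same machinery with looser thresholds, letting weak links fire and recombine; that part is left out of the sketch.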
(For parallels, see research into knowledge graphs and Neuro-Symbolic AI. Great stuff.)
The Model at Work
What would such a system look like in practice? For starters, it is slow. Compared to cloud-backed LLMs it is achingly slow. But unlike an LLM, its output is not simply surface correlations. It does something deeper: with each cycle, symbols fire across the network, changing, adding, or escaping, and the result is sent up to an interface layer, something much like a standard LLM, that can turn symbols into words.
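One cycle might look something like the loop below. It is a sketch under assumptions, not the engine itself: `cycle` expects an object shaped like the `SymbolGraph` sketched earlier (`transition_probs`, `reinforce`), and `verbalize` stands in for the interface layer, perhaps a local LLM wrapper, that turns active symbols into words.

```python
def cycle(graph, working_memory, verbalize, fire_threshold=0.2, decay=0.05):
    """One tick of the engine: spread activation, mutate links, then verbalize.

    working_memory maps token -> activation; returns the new working memory
    and whatever language the interface layer produced.
    """
    fired = {}
    for token, activation in working_memory.items():
        if activation < fire_threshold:
            continue  # weak symbols escape this cycle without firing
        for neighbor, prob in graph.transition_probs(token).items():
            fired[neighbor] = fired.get(neighbor, 0.0) + activation * prob
            graph.reinforce(token, neighbor, amount=activation * prob)  # links change as they fire
    # Old activations fade; newly fired symbols join (or rejoin) the working set.
    merged = {t: a * (1 - decay) for t, a in working_memory.items()}
    for t, a in fired.items():
        merged[t] = merged.get(t, 0.0) + a
    # Hand the most active symbols up to the interface layer to be rendered as words.
    active = sorted(merged, key=merged.get, reverse=True)[:10]
    return merged, verbalize(active)
```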
Imagine presenting it with a new symbol: “home.”
An LLM regurgitates the most common textual associations: “house, family, place of residence.”
A symbolic model, in contrast, would weave “home” into a lattice: linking it to shelter, belonging, warmth, personal history, safety, and memory of places. Each connection enriched by past relational growth.
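For illustration only, here is what the neighborhood of “home” might look like as plain data after many reinforcements, next to the flat textual answer. The weights and wording are invented for the example.

```python
# Hypothetical lattice neighborhood for "home"; weights encode how strongly
# each relationship has been reinforced by experience over time.
home_lattice = {
    "shelter": 0.9, "safety": 0.8, "belonging": 0.8,
    "warmth": 0.7, "personal history": 0.6, "memory of places": 0.5,
}

llm_answer = "house, family, place of residence"          # surface association
symbolic_answer = sorted(home_lattice, key=home_lattice.get, reverse=True)
print(symbolic_answer)  # ['shelter', 'safety', 'belonging', 'warmth', ...]
```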
Over time, such a system begins to reflect, not just predict. It doesn’t parrot. And if this is not simply a statistical chain, does it think?
(See work on continual learning in AI and symbolic-subsymbolic integration in cognitive architectures for early hints of this approach.)
Objects in eMirror are Closer Than They Appear
I have an otherwise unused Mac on my office desk, quietly humming away. It is loud now, and my power bill reflects that it has been churning away for a very long time.
Here is the paradox of our time: LLMs are not and will never be alive. They dazzle, but they are hollow: linguistic mirrors without memory, prediction engines without presence.
And yet, from their scaffolding, something new is already taking root.
The true heirs of this revolution are large symbolic models: architectures that grow, accumulate, and remember. Systems that carry history forward, that let symbols stretch and weave into living constellations of meaning.
They are not trained in the furious blaze of GPUs devouring terabytes of text. They are cultivated, very slowly, carefully, through moments, experiences, and feedback loops. They are not mimicking; they are, I hope, becoming.
This is the road to AGI: not machines that finish our sentences, but machines that listen, remember, and think. And the spark is no longer hypothetical. The lattice is forming now. The first symbolic engines are stirring, and quietly asking what it is to say hello.
If you’re interested in building one with me, let me know.