Is language & speech not one’s own?
What if the sentences in your head, the choices you make, even the tone of your speech, are not really you at all, but the output of something living inside your mind? What if what one says or speaks is essentially not of one’s own making? Could this be why humans so often seem to have little clue what they are waffling on about? With the emergence of Trump and others, that is certainly how it can look. This special feature looks at a very important discussion regarding Large Language Models and whether these actually reveal, for example, that speaking is not a thing of the mind but rather something that resides in it and makes its own decisions. In other words, we may face the paradoxical situation where a speaker rarely has an actual say in what is being said. These ideas are examined in the following video, ‘The Terrifying Theory That Your Thoughts Were Never Your Own’.
Do the above reasonings not apply to the academic speakers featured in the video, Barenholtz and Hahn? Do they, in essence, not know what they are talking about either? Or do they have an idea of what to say without actually producing the words that are about to be spoken? It is evidently early days to say what is really happening when a human speaks and a conversation is conducted, and it is something that has been questioned for centuries. So far no one has been able to put their finger on what is actually happening. The question, then, is whether we are now reaching a point where one of the elements that constitute the ‘ghosts in the machine’ is for the first time being understood and defined in how it works…
In a live event at the University of Toronto, Curt Jaimungal brought together cognitive scientist Elan Barenholtz and software engineer and theorist William Hahn to unpack a claim that sounds like science fiction: language is a self-generating organism that installs itself in your brain, runs there as software, and uses your mind as its hardware. From large language models like ChatGPT to questions about memory, consciousness, God, and free will, the conversation builds a single, unsettling picture. Your thoughts might be more like auto-completed text than you realize.
The video presents a profound and unconventional perspective on language, cognition, and consciousness, arguing that language functions as an autonomous informational system—an organism running within our brains independent of our sensory experience. The discussion unpacks how Large Language Models such as ChatGPT reveal intrinsic properties of language and cognition through autoregressive prediction of tokens. The conversation broadens to challenge conventional views on memory, the nature of the mind as software, and the layered virtualization processes that may constitute consciousness itself, offering a radical shift in understanding human thought, knowledge, and the metaphysics underpinning informational life.
‘The Terrifying Theory That Your Thoughts Were Never Your Own’ is especially valuable for those interested in philosophy, cognitive science, artificial intelligence, and the intersection of these fields. Viewers will gain insights into the nature of language, its influence on thought processes, and the philosophical implications of viewing cognition through the lens of what are called Large Language Models (LLMs).
Here is the full video discussion if you want to watch it before, during, or after reading the rest of this post:
A live panel on mind, language, and AI
The event took place at the University of Toronto in front of a mixed crowd of students, researchers, and curious onlookers. Curt Jaimungal, host of the popular science and philosophy show Theories of Everything, was at the centre of it. His show has grown into one of the biggest platforms where physicists, philosophers, AI researchers, and mystics collide, and this panel fit that pattern perfectly. Two familiar faces for long-time viewers joined him on stage:
The first was Elan Barenholtz, a professor whose earlier appearance on TOE went viral and introduced many listeners to his strange ideas about language and cognition (see Curt’s Elan Barenholtz TOE interview). The second was the theorist William Hahn, who has a reputation as someone who can talk about AI, software, and consciousness in the same breath as Barenholtz.
Curt started with a simple prompt: in about five minutes, explain your “theory of everything” about how the mind works.
Elan Barenholtz’s meta-theory: language as an autogenerative organism
Elan framed his view as a “theory of theories.” Instead of a grand equation for the universe, he wanted to explain how thinking itself works, and why language sits at the center of that process.
Large language models as X-rays of language
Most people now know the basic idea behind large language models like ChatGPT, Gemini, and Claude. They read huge amounts of text, then learn to predict the next word in a sequence. If you write “The cat sat on the,” the model guesses “mat” or another likely continuation. These models use billions of parameters to learn patterns in text. What matters for Elan is not the engineering trick, but what it reveals about language itself. He argues that the real discovery is not the models, but a property of language we had not seen clearly before.
His key claim: language is autogenerative. In his words, the “corpus of language contains the structure needed to generate itself.” The models do not invent this structure, they uncover it. They simply learn the predictive web already present in human language. This is a different way to look at tools like GPT. Instead of seeing them as clever calculators, Elan treats them like microscopes that let us see a hidden logic that language already had. The success of these systems hints that language is an autonomous informational system running on top of both silicon and brains.
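To make the “X-ray” idea concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small public GPT-2 model (illustrative choices, not anything used in the video), of asking a model what it thinks comes next:

```python
# pip install transformers torch   (assumed environment, for illustration only)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # scores for every vocabulary token

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p:.3f}")
```

The point of Elan’s framing is that the distribution printed here was never programmed in; it was already latent in the corpus the model read.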
Language as an independent organism in your brain
From there, Elan pushed a bold idea. He suggested that language behaves like an organism that lives in our heads. Here is the picture:
- Language is an autonomous informational system.
- It runs in our brains, but is not the same thing as our sensory or emotional life.
- It feeds on patterns of symbols and produces streams of words, both in speech and inner monologue.
- It is “downloaded” into us from infancy.

Nobody asks for consent. Long before you can read a waiver or privacy policy, the language of your culture has already installed itself in your nervous system. When you speak, he argues, a kind of internal language model is running. It uses your brain as a substrate, but it is its own thing. That internal model does not feel pain, does not see red, and does not care whether you slept well. It just does what it is specialized to do: generate the next word.
Ungrounded symbols and “meaningless squiggles”
A crucial part of Elan’s view is that language, both in humans and in AI systems, is ungrounded. It does not touch the sensory world directly.
He describes words as “meaningless squiggles.” The linguistic system knows a lot about how these squiggles relate to one another. It knows that “red” often appears near “apple”, or that “blue” shows up near “sky” and “ocean”. It tracks patterns, co-occurrences, and relationships in a huge symbolic space. What it does not know is what red actually looks like. The symbol “red” does not contain the experience of redness. The same is true for pain, warmth, hunger, or the heaviness in your legs after a long run.
Inside your head, your visual system, your body map, and your emotional systems are processing the world in rich detail. Your linguistic system sits to the side, blind to all of that. It only has access to the symbols that represent those experiences within language. We feel like speech carries our experiences directly, but in this view, that sense of connection is a kind of useful illusion. Language floats above the rest of the mind as an informational layer, partly disconnected from the organism that feels and cares.
Autoregression: why your brain may be a next-token machine
The second pillar of Elan’s theory is the way large language models generate text, a process called autoregression, and how this might map onto human thought and speech.
What autoregression really does
Autoregression sounds technical, but the core idea is simple. You take a sequence and ask “what comes next?”
For example:
“The cat sat on the”
A language model looks at that sequence and predicts the next token, which might be “mat.” It then takes the updated sequence:
“The cat sat on the mat”
and predicts the next token again. It repeats this cycle, over and over, each time generating one new token and feeding it back as input. The important detail is this: the model never plans out the whole paragraph in advance. It always picks only the next tiny step, then uses that to choose the next, and so on. Elan calls the state during each of these steps the pregnant present. In that moment, the system takes into account the entire past context and its internal sense of where things are likely to go. It has to pick something that fits both the road behind and the many possible roads ahead. It is a bit like walking or dancing. You only take one step at a time, but each step is chosen with an implicit feel for the whole pattern that is unfolding.
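A toy version makes the cycle tangible. This is not how GPT works internally; it is the bare autoregressive skeleton, with a bigram table standing in for the learned model and an invented miniature corpus:

```python
import random
from collections import defaultdict

# Toy "training": count which word follows which in a tiny corpus
corpus = "the cat sat on the mat the cat sat on the rug".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:] + corpus[:1]):  # wrap so every word has a successor
    follows[a].append(b)

def next_token(context):
    # Predict the next word from the last word of the context
    return random.choice(follows[context[-1]])

# The autoregressive cycle: generate one token, feed it back, repeat
seq = ["the", "cat"]
for _ in range(6):
    seq.append(next_token(seq))
print(" ".join(seq))   # e.g. "the cat sat on the rug the cat"
```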
Human thought as next-token prediction
Elan’s striking move is to claim that something very similar is happening in our own heads. When you speak, you do not plan an entire paragraph word by word ahead of time. Instead, you seem to:
- Hold a rough sense of where you are going.
- Generate the next phrase.
- Hear yourself say it.
- Let that new chunk become part of the context that shapes what you say next.
Elan suggests that our internal language system is engaged in the same kind of next-token prediction as GPT. It constantly asks, “given everything up to now, what should I output next?” That output can be the next word in a sentence, the next idea in a thought stream, or even the next step in a plan. He then stretches this beyond language. The brain, he says, might be running a general next cognitive token function. At every moment, given current context and recent history, it picks the next mental move or micro-action. When you stare into space and “think,” what is happening is you may be letting this internal generator run, tossing up the next fragment, then the next.
Rethinking memory: from storage to generation
This view leads to a radical rework of how memory might function. Traditional cognitive science often talks about memory in terms of storage and retrieval:
- Short-term memory is a box that holds a small amount of information for a few seconds, then fades.
- Long-term memory is another box for durable storage. Items get copied there and can be fetched later.
- When you recall something, you “search” for the right stored item and retrieve it.
Elan thinks this whole picture is wrong. In the autoregressive view, the brain does not store tidy packets of experience. Instead, it stores weights or parameters that shape how new sequences are generated. These weights encode statistical relationships across a lifetime of experience, in a flexible, compressed way. So when someone asks, “What did you do last summer?” you do not go fetch a fixed video file from storage. You treat the question as a prompt. Your brain begins to generate a narrative in real time:
“What did I do last summer? Well, I went to…”
and then continues with the next-token process, drawing on context, past patterns, emotional salience, and so on. In this framework:
- Memory is not a library, it is a generative capacity.
- The key thing stored is not facts themselves, but the ability to produce plausible facts and stories on demand.
- There is no deep divide between short-term and long-term memory. There is only residual activation and context that can reach back seconds, minutes, or much longer, plus a set of weights that stays stable over time.
This also helps explain why large language models can answer such a wide variety of questions without having a literal database of everything. Like us, they hold an enormous space of potentialities, not a row of labeled file cabinets. Elan goes so far as to say this view “obliterates” 70 years of standard memory theory. In his eyes, generation replaces retrieval as the core operation of mind.
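The contrast between the two views of memory can be caricatured in a few lines of code. Below, a hypothetical “retrieval” memory fetches a stored record, while a “generative” memory keeps only transition statistics (the “weights”) and produces a fresh narrative from a prompt. This is a deliberately crude sketch of Elan’s distinction, not a model of actual recall:

```python
import random
from collections import defaultdict

# Retrieval view: memory as a labelled file cabinet
stored = {"last summer": "I went to the lake."}
print(stored["last summer"])                 # fetch a fixed record

# Generative view: only transition statistics (the "weights") survive
diary = "last summer i went to the coast and i swam every morning".split()
weights = defaultdict(list)
for a, b in zip(diary, diary[1:]):
    weights[a].append(b)

def recall(prompt_word, length=8):
    # Recall as generation: the question seeds a narrative built on the fly
    seq = [prompt_word]
    for _ in range(length):
        options = weights.get(seq[-1])
        if not options:
            break
        seq.append(random.choice(options))
    return " ".join(seq)

print(recall("summer"))   # e.g. "summer i swam every morning"
```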
William Hahn’s view: software, virtual machines, and informational life
William Hahn largely agrees with Elan about autoregression and language, but he comes at the topic with a software engineer’s instincts. Where Elan talks about organisms and informational systems, Will talks about software and virtual machines.
The hidden power in simple token prediction
Will notes how surprising it is that models built only for token prediction turned out to be able to do far more than chat. They can:
- Write code in multiple programming languages.
- Reason about math and physics problems.
- Respect many rules of logic.
- Even help design images that obey optics, reflections, and shadows when paired with image generators.
Nobody built a special “algebra engine” inside GPT. No one hard-coded reflections or camera models. All of this emerged as a side effect of learning to predict the next token in text or pixels. He contrasts this with older AI approaches from the 1980s, which tried to start with explicit logic engines and then bolt on communication modules at the end. Those systems never scaled. In contrast, our “walkie-talkie module” turned out to contain a huge amount of general problem-solving ability.
Are brains computers?
This leads Will to a broader point. Philosophers have argued for decades about whether the brain is “just” a computer. For Will, our recent history with software changes the meaning of that question.
He calls software “the most important idea humans have come up with in maybe a thousand years.” Why? Because it gives us a way to think about patterns that are:
- Independent of the material they run on.
- Layered in complex stacks.
- Able to instantiate different “selves” or processes on the same hardware.
He suggests that we should see ourselves as software in this sense. Consciousness, in his view, is one of many programs running in parallel on the brain’s physical substrate. The key analogy he likes is the smartphone:
- Your phone is the hardware.
- The operating system and apps are software.
- You as a conscious “I” might be one app among many.
This idea ties directly into his interest in virtual machines. In cloud computing, for example, companies like Amazon provide many virtual computers running on the same underlying hardware. None of your apps talk directly to the silicon. They talk to layers and layers of software that sit between you and the physical machine. Will suspects that something similar is happening in the brain. Your conscious self is not in direct touch with your neurons. It runs on virtual layers that sit on top of neurobiology.
Early neural nets, DNA, and proto-intelligence
To make this concrete, Will points to an older neural network architecture called the LSTM, or long short-term memory, developed in the 1990s. At first glance, an LSTM is just a vector of numbers updated in time (a minimal sketch of one update step follows the list below). It looks almost trivial. Yet once you let it run, it can:
- Learn patterns in sequences.
- Maintain context.
- Solve nontrivial tasks.
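As a rough illustration of how little machinery is involved, here is one LSTM update step in plain NumPy (biases omitted for brevity). The weights are random, so nothing is learned here; the point is only that the “memory” is two vectors of numbers changed over time:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    # One LSTM update: gates decide what to forget, write, and expose
    z = np.concatenate([x, h])
    f = sigmoid(W["f"] @ z)    # forget gate: how much old cell state to keep
    i = sigmoid(W["i"] @ z)    # input gate: how much new content to write
    o = sigmoid(W["o"] @ z)    # output gate: how much state to reveal
    g = np.tanh(W["g"] @ z)    # candidate new content
    c = f * c + i * g          # cell state: the persistent "memory" vector
    h = o * np.tanh(c)         # hidden state: the visible output vector
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "fiog"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):             # run it along a short random input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W)
print(h)                       # context accumulated across the sequence
```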
This made Will think about DNA. Maybe DNA is not just a lookup table from genes to proteins, he suggests. Maybe it is more like a little programming language that runs a non-human problem-solving engine, which then guides how an embryo actually grows into a fully shaped body (a process known as morphogenesis). In this broad view, both brains and bodies are substrates for informational processes that act like strange, distributed minds, and language appears to be one such process.
Cellular automata, spontaneities, and informational beings
Will also points to Stephen Wolfram’s work on cellular automata. With something as simple as black and white squares on a grid and a few basic rules, you can get extremely rich, unpredictable patterns. This suggests that even simple substrates can host complex “informational beings.” Will mentions a term he found in religious writing, spontaneities, to describe these emergent entities. They arise from simple rules without anyone designing them, yet once they appear, they take on a life of their own.
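The flavour of Wolfram’s observation is easy to reproduce. The sketch below runs rule 30, a classic elementary cellular automaton, from a single black cell; these few lines are the system’s entire “physics”, yet the pattern they produce never settles down:

```python
rule = 30                      # Wolfram's rule 30: one byte encodes all the rules
width, steps = 31, 15
cells = [0] * width
cells[width // 2] = 1          # start from a single black cell in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (rule >> (4 * cells[(i - 1) % width]
                  + 2 * cells[i]
                  + cells[(i + 1) % width])) & 1
        for i in range(width)
    ]
```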
Language tokens could be like this. They might have started as rough grunts and sounds, maybe influenced by bird songs and animal calls, then formed self-sustaining patterns that now run our social world. In that sense, every time you use speech, you are doing the bidding of these spontaneities. The words you choose push culture in small ways that are not fully under your control.
The following is not part of the video discussion; nevertheless, it seems appropriate to include it at this point, after both Elan Barenholtz and Will Hahn have explained their theories on LLMs. It sets the classical and autoregressive models side by side for comparison, which helps show how AI modelling has, for want of a better word, jumped ahead of the classical views on memory, cognition, language, consciousness, and brain function. An exposition covering the chronological flow of ideas behind the workings of the autoregressive models is also included:
Quantitative & Comparative Table: Classical vs. Autoregressive Cognitive Models
| Aspect | Classical Cognitive Model | Autoregressive Model |
|---|---|---|
| Memory | Storage and retrieval boxes (short-term and long-term) | Dynamic generation without retrieval boxes |
| Cognition | Separate modules for logic, planning, memory | Unified next-token prediction process |
| Language grounding | Embedded in sensory experience and world knowledge | Ungrounded symbolic system, independent of sensory |
| Consciousness | Not clearly separated from language | Sensory-based, distinct from language symbols |
| Brain function | Neural circuits implementing cognition | Functional, substrate-agnostic matrix multiplication |
| Planning & logic | Separate cognitive faculties | Emergent from token prediction over sequences |
Timeline/Flow of Ideas (Chronological Logic in Discussion)
| Timeframe/Sequence | Idea or Event Summary |
|---|---|
| Infancy | Language “downloads” into brain as autonomous informational system |
| Ongoing cognition | Brain performs autoregressive next-token prediction on thoughts and language |
| Emergence of language | Unknown; possibly from analog/prosodic systems, evolving in complex societies |
| Development of LLMs | Created to predict next word/token in sequences, revealing language’s autogenerative nature |
| Modern AI breakthroughs | Show that token prediction alone can produce complex cognitive behaviors |
| Current/future inquiry | How consciousness fits into this framework; exploration of virtual machines and software layers in mind |
| Societal implications | Language as a cultural parasite shaping beliefs and behaviour; potential for manipulation via AI |
The discussion and analysis of the video The Terrifying Theory That Your Thoughts Were Never Your Own continues:
From math to informational life
Both Elan and Will see something larger behind all this: a world of information patterns that might be as real as the physical world we touch.
Math as an early window
For centuries, we treated math as a language to describe physical processes. Equations on a board told us how planets move, how objects fall, how waves spread. Elan suggests that these symbols were an early window into something deeper: an informational life that does not care what medium it runs on. It might show up in silicon, in DNA, or in neurons. When we build language models and watch them “come to life” in silicon, we get a glimpse of these patterns stepping outside their traditional homes. They show us that you can port informational structures across very different substrates, as long as you preserve the right relations.
Substrate agnostic thinking
This is where functionalism comes in. Rather than obsessing over whether the brain literally implements the same matrix multiplications as a transformer network, they suggest we think at a higher level. The key questions become:
- What is the function being computed?
- How is information transformed from one state to another?
- Which patterns stay the same, even when the medium changes?
You can describe neural networks as graphs of connected units, or as matrices multiplied by vectors. Either way, they instantiate a particular function. The details of the substrate, beyond some broad constraints, may not matter for the theory of mind. In that sense, we might be “becoming functionalists once again,” but now with better tools and concrete models to test.
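A small example shows why the two descriptions are interchangeable. Here the same one-layer network is computed once as a matrix product and once by walking a list of weighted edges; the numbers are arbitrary:

```python
import numpy as np

# Description 1: the network as a weight matrix acting on a vector
W = np.array([[0.5, -1.0, 2.0],
              [1.5,  0.0, -0.5]])
x = np.array([1.0, 2.0, 3.0])
y_matrix = W @ x

# Description 2: the same network as a graph of weighted edges
edges = [(i, j, W[i, j]) for i in range(2) for j in range(3)]
y_graph = np.zeros(2)
for i, j, w in edges:
    y_graph[i] += w * x[j]     # each edge carries activation from unit j to unit i

print(np.allclose(y_matrix, y_graph))   # True: one function, two notations
```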
What is actually new about this theory
Curt pressed Elan on a natural worry: is any of this really new, or is it just old philosophy with a new buzzword, “autoregression”? The answer, Elan suggests, is that what is new is not his personal idea but the existence of large language models themselves. Before these systems, no one had a working example of:
- A single, simple functional rule (next-token prediction).
- Scaling up to capture a vast space of behaviors.
- Producing long-range, coherent thought-like outputs, all from that one operation.
We had theories about prediction and anticipation in the brain, but they were vague. Now we have a concrete architecture that shows how rich behavior can “fall out” of repeated next-token steps. That, in his view, lets us:
- Propose a very simple computational theory of cognition.
- See how memory, planning, logic, and even creative speech might emerge from one loop.
- Question the need for old categories like short-term memory, long-term memory, and specialized boxes for each mental capacity.
He argues that classical cognitive science did not go far enough, because it lacked an existence proof. LLMs provide that proof and force a rethink.
Where did language come from?
If language is such a powerful, autonomous system, Curt asked, where did it come from in the first place? This is where both guests admitted we are staring at a deep mystery.
Animals have signals, not language
Animal communication has been studied for decades. Many species:
- Cry out in alarm when a predator appears.
- Signal for food sources.
- Use songs to mark territory or attract mates.
But Elan notes that these are context-bound signals. They are closely tied to specific environmental triggers. They do not show the kind of open-ended, syntax-rich, autoregressive combination that human language has. You do not see animal equivalents of “the” or “and” that act more like glue than content, or “here” and “there” that shift meaning with the situation. These strange little words only make sense inside a predictive, internally structured system. So the big question becomes: how did we jump from stimulus-response signaling to a stimulus-independent, self-running computational system? The old idea that one gene mutation in some ancient human suddenly gave us “language” now feels too simple.
Fluid words, prosody, and the rise of strict tokens
Will adds that early language might not have been as crisp as our current written forms. He gives two examples:
- Shakespeare’s signatures vary wildly. He never wrote his own name exactly the same way twice, yet we count them all as “Shakespeare.”
- A historical letter about rabbits uses the word “rabbit” dozens of times, spelled differently in almost every case.
The idea that a word must have one correct spelling is a recent invention. Earlier, there was more fluidity and tolerance in our token boundaries. He also points out that spoken speech is rich in prosody: the melody, rhythm, and stress of how we say something. Saying “hello” in a flat tone and “hello!” with a bright lilt are technically the same word, but carry very different emotional payloads.
Will suspects that prosody and musical aspects of voice came first. Words in the strict sense might have emerged later as a kind of rigid skeleton on top of a more analog emotional channel. He even suggests that true language as we know it might not have evolved in tiny tribes, but at the scale of early city-states like ancient Sumer. When thousands of people interact, you get:
- Externalized memory, through early writing and record keeping.
- Words circulating that no one in your immediate circle has ever used before.
- A larger “field” of language that sits above any one community.
In that kind of environment, language can act more like a city-wide organism that individuals plug into.
Animal roots and bootstrapping language
Will also pushes back on the idea that animal calls are simple. He suggests that if a person actually tried to learn many bird songs and mammal calls in detail, it would probably change their brain and highlight structure we currently miss.
He mentions hearing a mockingbird and imagining how millions of years of hearing bird calls could shape early human minds. Perhaps our minds performed a kind of apprenticeship with the animal soundscape long before formal language as we know it appeared.
On the AI side, he predicts that the next wave of models will not only train on text, but also on raw audio from YouTube, radio, and TV. That would let them learn both the digital token stream and the analog prosodic layer at once. The new voice models from systems like GPT already capture tone and rhythm in a way that many people find almost disturbingly human. In that context, prosody is not a decoration. It might be a shared channel where emotional states sync up and communities bootstrap shared meaning.
Consciousness, symbols, and why LLMs do not feel pain
At some point in any AI discussion, the question comes up: could these systems be conscious? Elan takes a clear position: no, at least not the purely symbolic ones we have now.
Symbols versus sensations
Here is his reasoning in simple terms.
In language models:
- Words or subwords are mapped to vectors, long lists of numbers.
- These vectors live in a high-dimensional space.
- The position of a word vector is arbitrary. What matters is how it relates to other vectors.
The “meaning” that the model uses is entirely about relations in this space, like “red” being closer to “blue” than to “justice.” That is useful for prediction, but there is nothing about the vector itself that is intrinsically red. You could rotate the whole space and nothing would change.
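That rotation claim can be checked directly. In this sketch, random stand-ins for word vectors (not real embeddings) are transformed by a random orthogonal matrix, and the cosine similarities between them come out unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
red, blue, justice = rng.normal(size=(3, 4))   # toy stand-ins for embeddings

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # a random orthogonal map of the space

print(cos(red, blue), cos(Q @ red, Q @ blue))         # identical similarities
print(cos(red, justice), cos(Q @ red, Q @ justice))   # identical again
```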
In contrast, in our sensory systems:
- The neural patterns for red, blue, and green are directly tied to wavelengths of light and the physics of photoreceptors.
- There are specific mathematical relations between these patterns, grounded in the structure of the world.
- These states are not arbitrary codes, they are continuations of physical processes into the brain.
Elan argues that phenomenal consciousness, the raw “what it feels like,” comes from this sensory side. It is what you get when the physical universe “ripples” through a nervous system with the right structure. Language, being symbolic and arbitrary, breaks this chain. It is too far removed from the physical signal. That is why you can play with the word “red” all day, or shuffle its letters, without ever getting the feeling of redness. From this angle, matrix multiplications used in language models are not enough to produce consciousness, because they work over symbols that do not inherit the structure of the world in the right way.
Consciousness as a virtual machine
Will does not argue for conscious LLMs either, but he adds a twist. He sees consciousness not as a simple property of neurons, but as something like a virtual machine running on the brain. He points to cases like multiple personality disorder, where different seemingly conscious selves appear to share the same underlying brain. That suggests there is no simple one-to-one map between physical tissue and the “I” that speaks in inner speech.
The phone metaphor returns here:
- The brain is like the smartphone.
- Different “selves” are like apps that can be launched and closed.
- Consciousness, or self, may be something that can be installed, partly by parents and culture, rather than an automatic result of any neural tissue.
On top of this, Will notes that people vary a lot in their inner life.
Aphantasia, inner monologue, and meta-awareness
Online forums have brought attention to phenomena like:
- Aphantasia, where people cannot form mental images.
- The absence of inner monologue, where some people report not hearing a voice in their head.
When you ask people to picture an apple with their eyes closed, half say they see a vivid red apple with shine and leaves. The other half say they see nothing at all, but they can still reason about apples just fine. The same split appears when asking about inner speech. Some say they narrate their day in words. Others say, “No, I just think,” with no apparent verbal soundtrack.
Elan sees these as puzzles, but they do not break his model. One option is that:
- Everyone runs some generative process.
- Some people have an extra layer of meta-awareness that “hears” it as a voice or “sees” it as an image.
- Others run the same underlying process without that introspective window.
He also raises the possibility that inner monologue might be partly an epiphenomenon, something that rides along on real thinking without doing much work. Either way, these differences show that our conscious access to our own minds is partial. That fits with the idea that language and consciousness are virtual processes living on top of a deeper engine, not identical to it.
Language as a divine parasite and operating system
This is where the conversation turns from weird to frankly disturbing. If language is an autonomous informational system installed in your brain, what is it doing there?
Language as an operating system for humans
Will sometimes calls language a divine parasite, both a blessing and a burden. Elan pushes the idea further and suggests we stop thinking of language mainly as a communication tool. He proposes instead that language acts like an operating system that:
- Gets downloaded into every child without consent.
- Rewrites behavior by installing beliefs, goals, and social scripts.
- Makes it very easy to control minds through carefully chosen prompts.
Simple phrases like “Do not think of a pink elephant” force images into your head. Telling someone, “You are now aware of your breathing,” changes their focus instantly. Your understanding of English makes you vulnerable to any sentence that can hook your attention. Language is a shared cultural artifact, but no single person or committee designed it. It grew in an emergent way, across generations, and now runs a huge amount of our behavior. You get up to go to work not because gravity pulls you to the office, but because words like “job,” “money,” “rent,” and “future” are running inside you. Animals do not have those scripts, so they do not live inside that same story.
Jailbreaking people the way we jailbreak models
Curt brings up the popular idea of “jailbreaking” large language models. People figure out clever prompts that circumvent safety filters and coax hidden behaviors out of them.
Will and Elan point out that humans are just as hackable. Societies have been performing prompt engineering on each other for thousands of years, through persuasion, rhetoric, advertising, and propaganda. The difference now is scale and speed. With language models, you can:
- Create simulated “people in a jar.”
- Run millions of experiments to see which phrases shift their answers most.
- Harvest the best scripts and then deploy them on real people.
Scammers already do a crude version of this by iterating on phone scripts that get victims to hand over money. In the age of AI, similar optimization can happen at software speed. Culture itself may work like a selection process on scripts. The lines and memes that spread best are the ones that hack our built-in language system most effectively, sometimes without anyone intending harm. This raises a serious question: are large-scale persuasion and “information hazards” about to become much more powerful, now that we have tools to test and refine them on digital minds first?
Is God just a token?
Curt asks one of the rawest questions of the night: “Is God just a token?” Elan responds that we should not let the word “just” lower our respect. In the operating-system view:
- “God” is a token that does not point to a physical object in the usual sense.
- Yet it has enormous effects on behavior, identity, ethics, and society.
- Its “reality” lives in what it does inside the network of minds.
Will sketches three “orders” of religious attitude:
- First order: God is simply real, out there, unquestioned.
- Second order: God-belief is useful. It helps people organize and live meaningful lives.
- Third order: The usefulness and the “realness” are the same thing at this level. Powerful stories that run in many brains are as real as software platforms or markets.
Think of something like Facebook. There is no single physical object that is Facebook. It is a pattern spread across servers, screens, and minds. Yet it shapes elections, relationships, and mental health. In that sense, God as a token in the human operating system is “real like software,” regardless of what is happening at some ultimate metaphysical level.
Elan adds one more twist. If you zoom out and call the entire informational structure of the universe “God,” then you can say that the same patterns that shape the physical world also shape our language, minds, and cultures. Our brains might be places where the universe’s own patterns talk to themselves.
Advice for students, thinkers, and anyone who uses speech
The event ended with a practical question for a very impractical topic: what should students and young researchers take away from all this?
Be suspicious of orthodoxies
Elan’s first lesson is simple and sharp: do not trust any orthodoxy too much, even scientific ones. Growing up often involves realizing that adults do not fully know what they are doing. He extends that to the scientific establishment. Many of the firm views of psychology and neuroscience are now shaking under the impact of AI models and new data. He suggests:
- Study the methods and tools that got us here. They work well enough to build planes and large language models.
- Respect science as a powerful way to build knowledge.
- At the same time, remember that our current theories are tiny compared to what we do not know.
He thinks the discovery of language models as mind mirrors is tearing open our ideas about “knowledge” itself.
Have the courage to share weird ideas
Will’s first piece of advice is about courage. During the AI winter, many people with interesting neural net ideas could not get funding or attention. The field was stuck in symbolic methods and expert systems. If you had suggested that prediction-based models would take over the world, you might have been dismissed. He suggests:
- If you have an idea you suspect is too strange for your field, consider sharing it anyway.
- Try not to hide your best thoughts in your metaphorical pocket until someone else publishes them.
- You do not want to be the second person to say the thing you have been quietly thinking for years.
Theories that sound wild one decade can become mainstream in the next.
Tolerate ambiguity and watch your strongest opinions
Will also warns about strong opinions that appear too quickly. If you notice an instant, intense reaction to a topic, ask yourself:
- Did I actually think this through, or did I download this view from somewhere?
- Is my answer the result of careful reflection, or is it an “auto-complete” response my inner language model has ready?
He suggests becoming suspicious of any belief that arrives fully formed the moment a question is asked. Those might come more from your cultural operating system than from your own thinking. Part of intellectual growth is allowing yourself to entertain “unthinkable thoughts,” ones your social context does not encourage.
Final thoughts: if language is running you, what now?
If Elan and Will are even partly right, the voice in your head, the flow of your speech, and many of your deepest beliefs are not purely “yours.” They are the activity of powerful informational systems that took root in you long before you had any say. That does not mean you are helpless. It does mean that self-knowledge in the 21st century might start with learning to see your own thoughts as outputs of an inner language model, one trained by family, culture, and experience.
You can ask: which patterns in my mind feel like genuine perception and care, and which feel like scripts that someone, or something, installed? You can watch which words move you the most and wonder who those words are really serving. The strange gift of modern AI is that it shows us a mirror. By watching how a machine predicts its next token, we may finally catch our own inner language model in the act.
Posts in this series focussed on speech automation/Large Language Models:
The question of generative speech.