"They’re made out of meat"
"Meat?"
"Meat. They’re made out of meat."
"Meat?"
"There’s no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way through. They’re completely meat."
This is the conversation of the very puzzled non-carbon-based aliens in the short story “They’re Made Out of Meat” by sci-fi writer Terry Bisson. The aliens’ puzzlement only increases upon learning that the meaty strangers of Earth were not built by non-meat intelligences, nor do they harbor even a simple non-carbon-based central processing unit hidden inside. Instead, it’s meat all the way down. Even the brain, as one of them exclaims, is made of meat.
"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?"
I think this exchange is a delightful counterpoint to the surprise that many of us feel in response to increasingly intelligent AI today. Who hasn’t watched an LLM write a Shakespearean sonnet, or a Stable Diffusion model create a vibrant masterpiece of color and life, and thought with bewilderment: but this thing is a machine! Made out of silicon!
But really, what should be surprising is the fact that matter can give rise to intelligence at all, on any substrate. I’m not interested, here, in the question of consciousness; the puzzle of how mere matter comes to model and understand the world is deep enough on its own.
AI and the Biological Brain
One of the most fascinating consequences of AI development has been its intersection with neuroscience. These two sciences have been closing in from two sides on the question of how cognition develops, advances in each one bolstering the other.
It wasn’t always this way. In the early days of AI, researchers wanted to stay as far away as possible from reverse-engineering the brain. In their defense, as far as they could tell, the brain was (and is) an inscrutable, tangled web of neurons, grown out of some Rube Goldberg bootstrapping of genomic instructions. What the pioneers of AI wanted to do was figure out the pristine mathematical structure underlying all intelligence, and then code it up in all its elegant beauty.
Spoiler alert: that didn’t go so well. The field of AI pivoted to techniques called deep learning, in what is now known as the “deep learning revolution.”
In retrospect, we can now see the striking similarities between modern AI systems and the thing inside your cranium. To be sure, there are countless differences. But the evidence keeps pouring in that the core principles structuring the two are the same.
Predictive Processing
So how is the brain like deep learning? Among the best contenders for a unified theory of the brain is a framework called Predictive Processing.
We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the “blind spot” where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, stabilized for our eye and head movements, and massively reinterpreted based on our previous experience of similar visual scenes.
Predictive processing begins by asking: how does this happen? By what process do our incomprehensible sense-data get turned into a meaningful picture of the world?
The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data and a top-down stream of predictions. These streams interface at each level of processing, where mismatches between prediction and input become error signals that drive adjustment on both sides. So perception isn’t just the building of a representation of the world from raw sensory input; it’s also the overlay of an interpretive framework fitted to that input.
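That loop can be sketched in a few lines of code. This is my own toy illustration, not a model from the Predictive Processing literature: a single processing level keeps an internal estimate of some quantity in the world, issues top-down predictions, and nudges its estimate whenever the bottom-up sense data disagree. All the numbers are arbitrary.

```python
import numpy as np

# Toy sketch of predictive processing at one level of a hierarchy:
# the level holds an internal estimate of the world and refines it by
# comparing its top-down prediction against noisy bottom-up input.
rng = np.random.default_rng(0)

true_signal = 3.0       # the hidden state of the "world"
estimate = 0.0          # the level's internal representation
learning_rate = 0.1

for step in range(100):
    sense_data = true_signal + rng.normal(0, 0.5)  # noisy bottom-up stream
    prediction = estimate                          # top-down stream
    error = sense_data - prediction                # prediction error
    estimate += learning_rate * error              # adjust to reduce error

print(round(estimate, 1))  # settles near the true signal, 3.0
```

The point of the sketch is that the level never sees the world directly; it only ever sees its own prediction errors, and shrinking those errors is enough to pull its internal state into alignment with reality.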
The whole prediction thing isn’t only key to how the brain works, but also to how it develops in the first place! We know that the information contained in an individual’s DNA isn’t sufficient to build a complete working brain. There’s only room in the genome for a general blueprint, not for a precise map of which neurons connect to which, and at what strength. So how do that general blueprint and the resulting blob of neurons develop into an exquisitely fine-tuned system?
Predictive Processing answers: the blob of neurons adjusts itself. Prediction errors serve as the training signal; every failed prediction tells the network how to rewire so that it predicts a little better next time.
Deep Learning
Sounding familiar? The one thing that everyone knows about ChatGPT, besides the fact that it’s your best friend when that essay is due in 10 minutes, is that it’s trained on predicting the next word of text from the internet: the most gloriously supercharged autocomplete in the world. How does it learn to predict the next word? By adjusting the connections between all the neurons in its internal neural network, nudging them toward minimizing the error in its predictions.
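To make that concrete, here is a deliberately tiny sketch of next-word training, nothing like a real LLM in scale or architecture, but driven by the same objective: adjust weights to shrink the error between predicted and actual next words. The nine-word corpus and every number here are made up for illustration.

```python
import numpy as np

# Toy next-word predictor: one weight matrix of logits, trained by
# nudging weights against the cross-entropy error on each word pair.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

W = np.zeros((V, V))   # row = current word, column = candidate next word
lr = 0.5

for _ in range(200):
    for cur, nxt in zip(corpus, corpus[1:]):
        logits = W[idx[cur]]
        p = np.exp(logits - logits.max())
        p /= p.sum()                       # predicted next-word distribution
        target = np.zeros(V)
        target[idx[nxt]] = 1.0             # the word that actually came next
        W[idx[cur]] -= lr * (p - target)   # gradient of cross-entropy error

probs = np.exp(W[idx["the"]])
probs /= probs.sum()
print(vocab[int(probs.argmax())])  # "cat": follows "the" twice, vs "mat" once
```

Even this cartoon version ends up encoding the statistics of its training text in its weights, which is the whole trick; scale the same idea up by a dozen orders of magnitude and you get something like ChatGPT.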
At this point it’s reasonable to say: sure, the similarities between biological error minimization and machine learning are interesting, but “minimizing error” is pretty vague. How confident should we be that there is a truly deep similarity here? Well, you might be surprised to find out that recent research published in Nature suggests that it is essentially the same algorithm that adjusts both artificial neural networks and developing brains. For those versed in ML, you heard that right: it appears that, more or less, the brain implements backpropagation.
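For the curious, this is what backpropagation looks like stripped to the bone: a two-layer network learning XOR, with the output error propagated backward through the layers via the chain rule. This is the standard textbook algorithm as used in machine learning, not the specific model from the Nature work, and the hyperparameters are arbitrary.

```python
import numpy as np

# Minimal backpropagation: a 2-8-1 sigmoid network trained on XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: predictions flow input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule carries the output error back
    # through each layer, assigning blame to every connection.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

mse = ((out - y) ** 2).mean()
print(mse)  # typically far below the 0.25 baseline of always guessing 0.5
```

The striking claim is that something functionally like this backward error flow, long dismissed as biologically implausible, may be approximated by real neural tissue.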
Isn’t that crazy? It certainly makes you skeptical of claims that there is a qualitative difference between human cognition and machine intelligence, and that AI will always be missing some secret sauce that humans possess. There are certainly endless differences between, say, ChatGPT and myself. But it looks like as far as the development of intelligence is concerned, they are mostly superficial differences. The core mechanism which gives rise to things which understand the world in which they are embedded appears to be pretty much the same.
Golems
There’s an old Jewish myth, the legend of the Golem: a figure sculpted from clay and brought to life by its creators to serve and protect them.
A theme of the story is self-understanding— that those who created the Golem didn’t fully understand aspects of themselves until they were reflected in their creation.
It’s a cautionary tale, but also a hopeful one. As our society builds its own Golems, we will come to understand ourselves better, as the increasing entwinement of neuroscience and machine learning has already demonstrated. This revelation of the secrets of our own inner workings, that key to the relationship between matter and intelligence, will come with both new dangers and profound insights.