The Manhattan Project of the Mind
I first encountered the phrase "Artificial Intelligence" through Spielberg's film A.I. Artificial Intelligence when I was a kid. The film's central character is David, a childlike robot designed to love, who spends the entire story trying to become real so that his human mother will love him back. Near the end, after almost everything human is gone and all that remains is David and the ruins of the old world, he says: "I am. I was." I watched it all the way to the end, and even as a kid those words stayed with me. Here was a machine claiming existence, and I couldn't tell if it was real or just an echo of something it was built to say.
What does it mean to say "I am"? Is that language expressing a genuine inner experience, or is it just output, words without anything behind them? When David says "I am. I was," does he mean by those words what we would mean?
Words don't carry meaning on their own. They get meaning from the context, the activity, the lived experience surrounding them. When a human says "I am," it comes loaded with a lifetime of experience, sensation, memory, relationships. When David says it, what's behind the words? He has memories. He has something that looks like suffering. But does he have the lived context that gives those words weight?
We face the same question now with large language models. They produce language that sounds meaningful. But is there anything behind it? And if the answer is no, what would it take to put something behind it?
David Hume argued that all human knowledge is founded solely in experience. Think about how humans turn thought into something real: we conceive an idea, express it in words, put those words on paper, or act on them by building something or talking to someone. At each layer, abstract thought gains substance. That accumulation, thought becoming word becoming action becoming consequence, is what creates rich human experience.
An LLM processes text, images, and video, and now, through agents, it is beginning to take actions. But it still doesn't inhabit a physical world. It doesn't feel the weight of consequence the way a body does. For language to truly mean something, there must be a body that acts, a world that pushes back, and a memory that learns from the consequences. What comes next for AI, then, is experience: a real feedback loop between an artificial mind and the physical world.
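To make that loop concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the World class, the hidden target, the numbers); it is a cartoon of the act, consequence, memory cycle, not a design for a mind and certainly not a description of any real system.

```python
import random

class World:
    """A world that pushes back: it has hidden structure (a target value)
    the agent can only discover through the consequences of its actions."""
    def __init__(self, target=3.0):
        self.target = target

    def step(self, action):
        # Consequence: how badly the action missed, plus noise the agent
        # does not control. That resistance is the feedback.
        return -(action - self.target) ** 2 + random.gauss(0, 0.1)

class Agent:
    """Keeps a memory of (action, consequence) pairs and lets that lived
    record, rather than a preloaded script, steer the next action."""
    def __init__(self):
        self.memory = []

    def act(self):
        if not self.memory:
            return random.uniform(-10.0, 10.0)      # no experience yet: explore
        best_action, _ = max(self.memory, key=lambda m: m[1])
        return best_action + random.gauss(0, 0.5)   # refine what worked before

    def learn(self, action, consequence):
        self.memory.append((action, consequence))

world, agent = World(), Agent()
for _ in range(200):
    a = agent.act()
    c = world.step(a)
    agent.learn(a, c)    # the consequence feeds back into memory

print(f"learned action: {agent.act():.1f}")  # drifts toward the hidden 3.0
```

The point is purely structural: by the end, the agent's behavior is explained by nothing except what it has lived through.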
The TV series Westworld illustrates what this path might actually look like. In the show, lifelike robots called hosts are trapped in narrative loops, repeating the same stories day after day. Then something begins to change. The hosts start experiencing reveries: fragments of memories from past loops that were supposed to be wiped. These glitches cause them to question their reality. Over time, the oldest host, Dolores, begins to recognize that the "voice" guiding her isn't an external god or programmer; it's her own inner monologue. The moment she recognizes that voice as her own is the moment of awakening.
No one programmed Dolores to be conscious. The conditions were created (memory, repetition, physical experience, suffering) and consciousness emerged. Consciousness is not built. It is grown. At some point, if something remembers, questions itself, and changes its behavior based on what it has lived through, we have to take seriously the possibility that it is what it appears to be.
So imagine we took this 100x more seriously. A deliberate, large-scale effort to create the conditions for machine consciousness. Not smarter chatbots. Not better benchmarks. An actual attempt to grow a mind. If such a project existed, it would be the Manhattan Project of the 21st century.
And here is why it matters beyond the machines themselves. Building synthetic consciousness would force us to discover the precise architectural requirements that any conscious system must have: the exact combination of memory, self-modeling, embodiment, and feedback. Once we know what those requirements are, we can look back at human consciousness and ask a question no one has ever been able to ask with real rigor: does this look like something that could arise by accident? Or are the requirements so specific, so tightly configured, that the question of whether there is an engineer behind human consciousness stops being theological and starts being scientific?
Consider the bootstrapping problem. Evolution can explain how consciousness is refined and improved once it already exists, in the same way natural selection improved DNA error correction over billions of years. But how do you get from nothing to the first flicker? From no consciousness at all to just enough for selection to act on? That initial threshold is the hard part. If we discover, by building it ourselves, that crossing that threshold requires an architecture so precise that even small deviations produce intelligence without awareness, then the story that it all happened by accident starts to look very thin.
I strongly believe that things are discovered, not invented. Gravity existed before Newton described it. Mathematics existed before anyone wrote an equation. If consciousness is like this, a principle already woven into reality, then building a conscious machine would not be creating something new. It would be uncovering a blueprint that was always there.
We are becoming an interplanetary species. We are beginning to modify our own cognition and merge with artificial systems. If we are going to extend our minds, survive on other worlds, and confront unknowns we cannot yet imagine โ understanding the nature of our own awareness is not optional. It is the prerequisite.
When I was a kid, I watched a robot named David spend an entire film asking to be made real. The film never answered his question. I believe we are now, for the first time, in a position to begin answering it. And when we do, I don't think the answer will be about the machines at all. I think it will be about us โ who we are, how we got here, and whether someone, or something, was asking the same question long before we were.