The future has a way of making your biggest fears seem incredibly short-sighted. Remember when everyone was scared of clowns for an entire summer? That seems trivial now in the wake of COVID-19, which has killed 4.4 million people so far. It’ll be the same with AI. Right now the general public is terrified of robots. But robots are just computers that move. What if the only way for AI to become sentient is to do it the old-fashioned way: with an organic body?
New brain vs old brain
The human brain contains a region called the basal ganglia, typically associated with motor control. Scientists have long believed that this region operates semi-independently from the prefrontal cortex. The big idea is that one part of our brain handles things like processing stimuli, directing attention, and thinking, while functions such as motor control happen in other parts. Laypeople commonly misunderstand how the brain works because of lazy representations of the brain as a “map.” In reality, neuronal activity occurs across multiple regions of the brain and, often, the same experiences generate activity in different regions. Our brains are not static executions of pre-written code; they’re quantum machines that we’re only beginning to understand.
— MC HAMMER (@MCHammer) August 23, 2021

It follows, then, that the emergence of sentience is at least somewhat associated with what scientists deem the “mind-body connection.” It’s plausible that consciousness is a result of experience. If this is true, it would seem that software-based sentience is improbable. A brain in a vat or an AI on a chip has no apparent connection to reality. Just like The Matrix imagines a reality where humans are duped by computers, AI actually lives in one where computers are duped by humans. As far as we know, anyway.
AI ain’t got no body
Emerging research on how the brain functions indicates the brain and body are inextricably related. In breakthrough research a few years back, MIT researchers determined that the basal ganglia and prefrontal cortex are both involved in our attention process – something that could well serve as the bedrock of human sentience. Per the research:

“The prefrontal cortex (PFC) partly implements this process by regulating thalamic activity through modality-specific thalamic reticular nucleus (TRN) subnetworks. However, because the PFC does not directly project to sensory TRN subnetworks, the circuitry underlying this process had been unknown. Here, using anatomical tracing, functional manipulations, and optical identification of PFC projection neurons, we find that the PFC regulates sensory thalamic activity through a basal ganglia (BG) pathway.”

In other words: we used to think the prefrontal cortex shone a spotlight on the important things in the world that we needed to pay attention to. Now scientists believe our brain sends sensory input through a series of filters, including the basal ganglia. Instead of a spotlight on what’s important, our brains put filters on everything that isn’t. And the basal ganglia’s involvement in the filtering process indicates that will, or the intent to act on stimuli, could be intrinsically related to motor control functions.

This might not sound like a big deal, but maybe that’s because you’re not a bunch of code sitting on a cloud server somewhere waiting to become corporeal. If you were an AI, however, you’d have good reason to be bummed out. As MC Hammer points out in the tweet above: human intellect isn’t copied from parent to child; it’s grown from scratch every time. We’re trying to achieve general AI (a human-level machine intellect) by teaching it to do things the way a small child does. Unfortunately, small children already have well-formed brains.
And those brains grow and learn by experiencing sensory input as something they can act on. Philosophically, it’s arguable that will, action, and experience are all necessary for the emergence of sentience. But an AI has no way to experience anything. No matter how advanced an AI is, or how good it becomes at making decisions, it still can’t taste, see, smell, feel, or hear. We can give it a microphone, a camera, a pressure sensor, LIDAR, and a chemical food analyzer, but that’s just imitating a nervous system, not recreating one. AI, without a mind-body connection to the outside world, is arguably incapable of reflection and, as this article argues, the key to understanding consciousness may be recognizing the importance of “will.”
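The spotlight-versus-filter distinction above can be loosely sketched in code. This is a toy analogy only, not a model of any neural circuit: the function names, channel labels, and numbers are invented for illustration. A “spotlight” boosts the attended channel and leaves the rest alone, while a “filter” leaves the attended channel alone and suppresses everything else.

```python
# Toy analogy (not neuroscience): two ways of selecting one
# "important" channel from a set of noisy sensory signals.

def spotlight(signals, target):
    """Old view: amplify the attended channel, leave the rest as-is."""
    return {name: (value * 2 if name == target else value)
            for name, value in signals.items()}

def filter_out(signals, target):
    """Newer view: suppress every channel except the attended one."""
    return {name: (value if name == target else 0.0)
            for name, value in signals.items()}

senses = {"vision": 1.0, "hearing": 0.8, "touch": 0.5}
print(spotlight(senses, "vision"))   # {'vision': 2.0, 'hearing': 0.8, 'touch': 0.5}
print(filter_out(senses, "vision"))  # {'vision': 1.0, 'hearing': 0.0, 'touch': 0.0}
```

Either way, “vision” ends up dominating; the difference is whether the brain works by turning one signal up or by turning everything else down.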
Body snatchers, but with admin access
Of course, this is all academic. We could be a century or more away from dabbling in AI sentience. A general AI, like warp drives and human teleportation, remains strictly within the domain of science fiction for now. But technologies such as fusion energy, quantum time crystals, and synthetic organic compounds become more feasible with each passing year, and a rising tide lifts all boats. If we were to develop quantum AI systems capable of more accurately mimicking the human brain’s machinations, that might go a long way toward creating the science necessary to grow sentience from organic matter. Then again, there’s a simple energy principle that says it’s usually easier to adapt an existing solution than to invent a new one. And humans are incredibly advanced compared to any robot out there. A sufficiently robust brain-computer interface (like the ones Facebook and Neuralink are working on) should, one day, be capable of speaking the brain’s language. Under a paradigm where computers can skip our senses and feed input directly into our brains, it should be trivial for a human-level AI to rewrite the stuff inside our gray matter that makes us human and replace it with a more logical model based on itself. One imagines resistance would be futile.

H/t: Quanta Magazine