It might not sound like much, but at three months old an infant already has a basic grasp of how physical things work. Infants understand concepts such as solidity and permanence – objects typically don’t pass through one another or disappear – and they can predict motion.

To study this, researchers show infants videos of objects behaving the way they should, such as passing behind a barrier and emerging on the other side, alongside videos in which objects seemingly break the laws of physics. What scientists such as MIT researcher Kevin Smith have learned is that babies exhibit varying levels of surprise when objects don’t act the way they should.

The big idea for the MIT team was to train an AI to recognize whether a physical event should be considered surprising and then to express that surprise in its output. Per an MIT press release:

“Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.”

Classical physics is hard. The myriad predictions and calculations involved in figuring out what’s going to happen next in any given sequence of events are incredibly complex, and non-AI systems need massive amounts of compute to perform them. Unfortunately, even AI systems are beginning to produce diminishing returns under classical computing paradigms. To push forward, it’s likely we’ll have to abandon the current brute-force method of cramming data into a black box and then using hundreds or thousands of processing units in tandem to tune and tease useful outputs out of an artificial neural network.
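The mechanism the press release describes — predict a set of plausible object states, observe the actual frame, align the observation to the closest prediction, and treat a large residual mismatch as "surprise" — can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds, not the MIT team's actual model or code.

```python
import math

def mismatch(predicted, observed):
    """Euclidean distance between two (x, y) object positions."""
    return math.dist(predicted, observed)

def surprise(belief_states, observed_state, threshold=1.0):
    """Align the observed object to its closest predicted state.

    Returns the residual mismatch and whether it exceeds `threshold`,
    which here stands in (illustratively) for a physics-violating,
    'surprising' event.
    """
    best = min(mismatch(p, observed_state) for p in belief_states)
    return best, best > threshold

# A belief distribution over where an object should appear next frame,
# e.g. an object sliding rightward, predicted under slight noise:
beliefs = [(5.0, 0.0), (5.1, 0.0), (4.9, 0.0)]

# Plausible observation: the object is roughly where physics says.
print(surprise(beliefs, (5.05, 0.0)))  # small mismatch -> not surprising

# Implausible observation: the object "teleported" far away.
print(surprise(beliefs, (0.0, 3.0)))   # large mismatch -> surprising
```

In a real system the states would be learned object representations rather than raw coordinates, and the alignment step would match multiple objects at once, but the comparison-and-threshold logic is the same in spirit.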
Some experts believe we need a quantum solution – one that can “time travel,” or arrive at multiple outputs at once, and then surface answers autonomously like the human brain. This puts us in a bit of a Catch-22, because our understanding of the human brain, artificial neural networks, and quantum physics is, in each case, incomplete. The hope is that continued research in all three fields will act as a rising tide that lifts all boats.

For now, scientists hope that artificial curiosity and a codified notion of ‘surprise’ will help bridge the gap between the human brain and artificial neural networks. Eventually this novel, exploration-based method of learning could be combined with quantum computing technology to create the basis for “thinking” machines. We may have a long way to go before any of this happens, but today’s research represents the initial baby steps toward human-level AI.

For a deeper dive into the MIT team’s work, check out its conference paper here.