MIT Solved a Century-Old Differential Equation to Break the Computational Bottleneck of ‘Liquid’ AI

Last year, MIT developed an AI/ML algorithm that can learn and adapt to new information on the job, not just during its initial training phase. These “liquid” neural networks (in the Bruce Lee sense) literally play 4D chess — their models require time-series data to operate — making them ideal for time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, or autonomous vehicle navigation. The problem is that data throughput has become a bottleneck, and scaling these systems up has grown prohibitively expensive, computationally speaking.

On Tuesday, MIT researchers announced that they have devised a solution to that restriction, not by widening the data pipeline but by solving a differential equation that has stumped mathematicians since 1907. Specifically, the team solved “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.”

“The new machine learning models we call ‘CfC’ [closed-form continuous-time] replace the differential equation defining the computation of the neuron with a closed-form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL director Daniela Rus said in a press statement released Tuesday. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”

So, for those of us without a doctorate in Really Hard Math, differential equations are formulas that can describe the state of a system at various discrete points or steps along its evolution. For example, if you have a robot arm moving from point A to B, you can use a differential equation to know where it is between the two points in space at any given step within the process. However, solving these equations for every step quickly gets computationally expensive. MIT’s “closed form” solution end-arounds that issue by functionally modeling the entire description of a system in a single computational step. As the MIT team explains:
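To make the cost difference concrete, here is a toy sketch (not MIT's model) comparing the two approaches on the simplest possible differential equation, dx/dt = -k·x. The stepwise solver has to evaluate the dynamics once per step, while the closed-form expression x(t) = x₀·e^(-kt) jumps straight to the answer in one evaluation. All names and numbers here are illustrative.

```python
import math

def euler_steps(x0, k, t, n_steps):
    """Approximate x(t) for dx/dt = -k*x by stepping forward n_steps times.
    Cost grows linearly with the number of steps."""
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * (-k * x)  # one dynamics evaluation per step
    return x

def closed_form(x0, k, t):
    """Exact state at time t in a single evaluation: x(t) = x0 * exp(-k*t)."""
    return x0 * math.exp(-k * t)

stepped = euler_steps(1.0, 0.5, 4.0, 10_000)  # 10,000 evaluations
exact = closed_form(1.0, 0.5, 4.0)            # 1 evaluation
```

Real neural dynamics are far messier than this linear toy, which is why a closed-form expression for interacting neurons was elusive for over a century — but the payoff is the same: one evaluation instead of thousands.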

Imagine an end-to-end neural network that receives driving input from a camera mounted on a car. The network is trained to generate outputs, like the car's steering angle. In 2020, the team solved this by using liquid neural networks with 19 nodes, so that 19 neurons plus a small perception module could drive a car. A differential equation describes each node of that system. With the closed-form solution, if you replace it inside this network, it would give you the exact behavior, as it's a good approximation of the actual dynamics of the system. They can thus solve the problem with an even lower number of neurons, which means it would be faster and less computationally expensive.
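The idea of "replacing the differential equation inside the network" can be sketched as follows. This is a simplified, illustrative take loosely modeled on the gating structure of the published CfC cell, not MIT's actual code; every weight name and dimension here is a made-up placeholder. Instead of integrating a neuron's ODE numerically, the cell blends two learned states with a time-dependent gate, computed in one shot.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_style_cell(x, inputs, t, params):
    """Illustrative closed-form update: no ODE solver, just one
    evaluation that mixes two candidate states by elapsed time t."""
    z = np.concatenate([x, inputs])
    f = np.tanh(params["Wf"] @ z)  # controls how fast the gate decays
    g = np.tanh(params["Wg"] @ z)  # short-horizon candidate state
    h = np.tanh(params["Wh"] @ z)  # long-horizon candidate state
    gate = sigmoid(-f * t)         # closed-form time dependence
    return gate * g + (1.0 - gate) * h

# Hypothetical sizes: a 4-unit hidden state driven by 3 inputs.
rng = np.random.default_rng(0)
dim, in_dim = 4, 3
params = {name: rng.normal(scale=0.1, size=(dim, dim + in_dim))
          for name in ("Wf", "Wg", "Wh")}

x = np.zeros(dim)
out = cfc_style_cell(x, rng.normal(size=in_dim), t=1.0, params=params)
```

Because the gate and both candidate states come from ordinary matrix multiplies, each "neuron" update costs one forward pass rather than many solver iterations — which is why the team can get away with so few neurons.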

By solving this equation at the neuron level, the team hopes to be able to build models of the human brain spanning millions of neural connections, something not possible today. The team also notes that this CfC model might be able to take the visual training it learned in one environment and apply it to a wholly new situation without additional work, a capability known as out-of-distribution generalization. That's not something current-generation models can really do, and it would prove to be a significant step toward the generalized AI systems of tomorrow.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you purchase something through one of these links, we may earn an affiliate commission. All prices correct at time of publication.
