A team of researchers at the Massachusetts Institute of Technology (MIT) and Harvard University has developed an approach to quickly determine the certainty of neural networks. The method is detailed in a paper titled ’Deep Evidential Regression’. Real-world systems that rely on AI-assisted decision-making could become more efficient by adopting the model.
The researchers used the neural network to analyse images and estimate the distance of objects from the camera lens, similar to the way autonomous vehicles assess their closeness to pedestrians or to other vehicles. The network was also tested on slightly altered images, and it was able to detect the changes. MIT noted in a release that this could help detect manipulations, including deepfakes. Daniela Rus, Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), said, “By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model.”
The team designed the new approach to generate a ‘bulked up output’, meaning that along with making a decision, the network also provides evidence supporting that decision from a single run of the neural network. It can also indicate any uncertainty present in the input data, and whether that uncertainty can be reduced by adjusting the neural network or whether it is an issue with the input data itself.
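To make the idea concrete, the paper has the network emit, in one forward pass, the parameters of an evidential (Normal-Inverse-Gamma) distribution from which both kinds of uncertainty can be read off. The sketch below is a minimal illustration of that readout, not the authors' code; the raw input values and function names are made up for the example.

```python
import math

def softplus(x):
    # Smooth mapping of an unconstrained network output to a positive value.
    return math.log1p(math.exp(x))

def evidential_outputs(raw):
    """Map four raw network outputs to Normal-Inverse-Gamma parameters.

    gamma is the prediction itself; nu, alpha, beta encode the evidence
    the network has gathered for that prediction.
    """
    g, v, a, b = raw
    gamma = g
    nu = softplus(v)
    alpha = softplus(a) + 1.0  # alpha > 1 keeps the variances below finite
    beta = softplus(b)
    return gamma, nu, alpha, beta

def uncertainties(gamma, nu, alpha, beta):
    # Aleatoric: noise inherent in the input data (cannot be trained away).
    aleatoric = beta / (alpha - 1.0)
    # Epistemic: the model's own uncertainty (shrinks as evidence nu grows).
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic
```

Because the epistemic term divides by the evidence `nu`, a reading of high epistemic uncertainty suggests the model could be improved with more data or tuning, while high aleatoric uncertainty points to the input itself, matching the distinction the paragraph above describes.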
Alexander Amini, a researcher at MIT, said that not only are high-performance models needed, but also the ability to understand when those models cannot be trusted.
According to MIT, earlier approaches to estimating uncertainty relied on running, or sampling, a neural network many times in order to gauge its confidence. That process is computationally expensive and too slow for split-second decisions.
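A common sampling-based scheme of this kind keeps a stochastic element (such as dropout) active at prediction time, runs the network repeatedly, and treats the spread of the predictions as the uncertainty. The toy sketch below, with invented weights and a stand-in "network", shows only the shape of that procedure and why its cost scales with the number of samples.

```python
import random
import statistics

def noisy_predict(x, drop_p=0.5):
    """One stochastic forward pass of a stand-in 'network'.

    The weights are made up for illustration; the randomness mimics
    dropout being left on at prediction time.
    """
    w, bias = 2.0, -0.5
    h = w * x if random.random() > drop_p else 0.0  # unit randomly dropped
    return h + bias

def sampled_uncertainty(x, n_samples=100):
    # n_samples full forward passes -- this repetition is what makes
    # sampling-based uncertainty estimates slow for real-time use.
    preds = [noisy_predict(x) for _ in range(n_samples)]
    return statistics.mean(preds), statistics.stdev(preds)
```

The single-pass evidential approach described above avoids exactly this loop: it replaces the `n_samples` forward passes with one pass that outputs the uncertainty directly.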