A faster way to estimate uncertainty in AI-assisted decision making could lead to more confident outcomes.
Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
They developed a fast way for a neural network to crunch data and output not only a prediction but also the model’s confidence level based on the quality of the available data. The advance could save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.”
Current uncertainty estimation methods for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” speeds up the process and could lead to safer outcomes. “We need the ability not only to have high-performance models, but also to understand when we can’t trust those models,” says Amini, a graduate student in Professor Daniela Rus’s group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
“This idea is important and broadly applicable. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.
Amini will present the research at next month’s NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.
Efficient uncertainty
After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human precision. And nowadays, deep learning seems to go wherever computers go, fueling search engine results, social media feeds, and facial recognition. “We’ve had huge successes using deep learning,” says Amini. “Neural networks are really good at knowing the right answer 99% of the time.” But 99% won’t cut it when lives are on the line.
“One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,” says Amini. “We really care about that 1% of the time, and how we can detect those situations reliably and efficiently.”
Neural networks can be huge, sometimes packed with billions of parameters. So it can be a hefty computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks is not new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that may not exist in high-speed traffic.
The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with bulked-up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, called evidential distributions, directly capture the model’s confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model’s final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
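As a rough illustration (a minimal sketch under assumed details, not the authors’ released code), an evidential output head might look like the following, using the Normal-Inverse-Gamma parameterization associated with deep evidential regression; the class name, layer shape, and softplus transforms are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Illustrative evidential output layer: a single forward pass yields a
    prediction plus the parameters of a Normal-Inverse-Gamma distribution,
    from which data (aleatoric) and model (epistemic) uncertainty follow."""

    def __init__(self, in_features: int):
        super().__init__()
        # Four raw outputs per target: gamma, nu, alpha, beta
        self.fc = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, raw_nu, raw_alpha, raw_beta = self.fc(x).unbind(dim=-1)
        nu = F.softplus(raw_nu)                 # evidence for the mean, nu > 0
        alpha = F.softplus(raw_alpha) + 1.0     # keeps alpha > 1 so variances exist
        beta = F.softplus(raw_beta)             # noise scale, beta > 0

        prediction = gamma                       # point estimate (e.g. per-pixel depth)
        aleatoric = beta / (alpha - 1.0)         # expected noise in the input data
        epistemic = beta / (nu * (alpha - 1.0))  # uncertainty in the model itself
        return prediction, aleatoric, epistemic
```

Because the prediction and both uncertainty terms come out of one forward pass, no repeated sampling or ensembling is needed, which is what makes the estimate cheap enough for split-second decisions.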
Trust check
To test their approach, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. the distance to the camera lens) for each pixel. An autonomous vehicle could use similar calculations to estimate its proximity to a pedestrian or other vehicle, which is no easy task.
Their network performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network predicted high uncertainty for pixels where it predicted the wrong depth. “It was very calibrated to the mistakes made by the network, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” says Amini.
To test their calibration, the team also showed that the network predicted higher uncertainty for “out-of-distribution” data, completely new types of images never encountered during training. After training the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network’s ability to flag when users should not place full trust in its decisions. In these cases, “if it’s a health care application, maybe we don’t trust the diagnosis the model is giving and instead seek a second opinion,” says Amini.
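As a sketch of how such a signal might be consumed downstream (the function name and threshold are hypothetical, not taken from the paper), a system could simply withhold predictions whose epistemic uncertainty is too high and escalate instead, whether that means braking early or seeking a second opinion:

```python
import torch

def flag_unreliable(prediction: torch.Tensor,
                    epistemic: torch.Tensor,
                    threshold: float = 0.5):
    """Hypothetical downstream check: mask predictions whose epistemic
    uncertainty exceeds a threshold (e.g. out-of-distribution outdoor scenes
    for a network trained indoors) so the system can defer instead of acting."""
    unreliable = epistemic > threshold
    safe_prediction = prediction.masked_fill(unreliable, float("nan"))
    # Fraction of the scene the model is unsure about; useful as a scene-level alarm
    unreliable_fraction = unreliable.float().mean().item()
    return safe_prediction, unreliable_fraction
```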
The network even knew when photos had been doctored, potentially guarding against data-manipulation attacks. In another test, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle, barely perceptible to the human eye, but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.
Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved in the work. “This is done in a novel way that avoids some of the messy aspects of other approaches, e.g. sampling or ensembles, which makes it not only elegant but also computationally more efficient, a winning combination.”
Deep evidential regression could improve confidence in AI-assisted decision-making. “We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini. “Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.” He envisions a system that not only quickly flags uncertainty, but also uses it to make more conservative decisions in risky scenarios, such as an autonomous vehicle approaching an intersection.
“Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.
This work was supported, in part, by the National Science Foundation and the Toyota Research Institute through the Toyota-CSAIL Joint Research Center.