Artificial Intelligence News: The Neural Network Learns When It Shouldn’t Be Trusted: “99% Won’t Cut It”




Artificial intelligence is becoming more and more central to our daily lives, from driverless cars to medical diagnosis. But while such networks excel at recognizing patterns in complex data sets, engineers are only now developing ways to tell when their predictions can be trusted.

Artificial intelligence experts have developed a method to model machine confidence based on the quality of available data.

MIT engineers expect the advance will ultimately save lives, as deep learning is already being deployed in everyday settings.

For example, a network’s level of certainty can be the difference between an autonomous vehicle deciding “the intersection is definitely clear, go ahead” and “it’s probably clear, so stop just in case”.

The approach, nicknamed “deep evidential regression” and led by MIT graduate student Alexander Amini, speeds up uncertainty estimation and could lead to even safer AI technology.
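
In broad terms, and only as a rough sketch (the class names and layer sizes below are hypothetical, not the MIT team’s own code), evidential regression has the network output the parameters of a probability distribution instead of a single number, so a prediction and an estimate of its uncertainty come out of a single forward pass.

import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    # Final layer of a regression network: instead of one output value, it
    # emits the four parameters of an evidential (Normal-Inverse-Gamma)
    # distribution over the answer.
    def __init__(self, in_features):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)              # amount of "virtual evidence", kept > 0
        alpha = F.softplus(log_alpha) + 1.0  # kept > 1 so the variances are finite
        beta = F.softplus(log_beta)          # kept > 0
        return gamma, nu, alpha, beta

def prediction_and_uncertainty(gamma, nu, alpha, beta):
    # The point prediction is gamma; the remaining parameters say how much
    # to trust it. Uncertainty grows when the evidence (nu, alpha) is weak.
    aleatoric = beta / (alpha - 1.0)         # noise inherent in the data
    epistemic = beta / (nu * (alpha - 1.0))  # the model's own uncertainty
    return gamma, aleatoric, epistemic

In the published method, a regularization term during training additionally penalizes the network for claiming strong evidence on examples it gets wrong, which is what keeps the uncertainty estimates calibrated.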


He said: “We need the ability to not only have high-performance models, but also to understand when we can’t trust those models.

“This idea is important and widely applicable. It can be used to evaluate products based on learned models.

“By estimating the uncertainty of a learned model, we also learn how much error to expect from the model and what missing data could improve the model.”

The AI researcher added that previous approaches to uncertainty analysis have been based on Bayesian deep learning, which relies on repeatedly sampling or running a network to estimate its confidence.

He said: “Ninety-nine percent accuracy won’t cut it. We really care about that one percent of the time, and about how we can detect such situations reliably and efficiently.”

The researchers started with a challenging computer vision task to test their approach.

They trained their neural network to analyze an image and estimate the depth at each pixel, that is, how far away each point in the scene is.

Self-driving cars use similar calculations to estimate proximity to a pedestrian or another vehicle – it’s not an easy task.
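
As a hypothetical illustration of what that looks like per pixel (again a sketch under assumed names, not the researchers’ code), the same evidential head can be made convolutional, so every pixel gets its own depth estimate together with its own uncertainty in one pass.

import torch.nn as nn
import torch.nn.functional as F

class PixelwiseEvidentialHead(nn.Module):
    # Emits four channels at every pixel: a depth estimate plus the three
    # evidential parameters describing how much to trust it.
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 4, kernel_size=1)

    def forward(self, features):
        gamma, log_nu, log_alpha, log_beta = self.conv(features).chunk(4, dim=1)
        nu = F.softplus(log_nu)
        alpha = F.softplus(log_alpha) + 1.0
        beta = F.softplus(log_beta)
        depth_map = gamma                               # per-pixel depth
        uncertainty_map = beta / (nu * (alpha - 1.0))   # per-pixel uncertainty
        return depth_map, uncertainty_map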

As the researchers had hoped, the network predicted high uncertainty for pixels where it predicted the wrong depth.

Amini said: “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator.”

The test revealed the network’s ability to signal when users shouldn’t have full confidence in its decisions.

In cases like these, “if it is a health application, perhaps we don’t trust the diagnosis the model is giving and instead seek a second opinion,” added Amini.

Dr. Raia Hadsell, a DeepMind artificial intelligence researcher not involved in the work, described deep evidential regression as “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems”.

She added: “This is done in a new way that avoids some of the messy aspects of other approaches – [for example] sampling or ensembles – which makes it not only elegant but also computationally more efficient – a winning combination.”
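
To see why sampling and ensembles are more expensive, here is a rough, hypothetical illustration of the ensemble idea Hadsell mentions: the uncertainty comes from the disagreement between several independently trained models, so every prediction requires several forward passes, whereas the evidential sketch above needs only one.

import torch

def ensemble_prediction(models, x):
    # Run every model in the ensemble on the same input; the mean across
    # models is the prediction and their spread (variance) is the uncertainty.
    predictions = torch.stack([model(x) for model in models])
    return predictions.mean(dim=0), predictions.var(dim=0)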


