Misinformation or Artifact: A New Way of Thinking About Machine Learning – Researcher considers when, and if, we should consider AI a failure




Deep neural networks, multilayered systems built to process images and other data through the use of mathematical models, are a cornerstone of artificial intelligence.

They are capable of seemingly sophisticated results, but they can also be fooled in ways ranging from the relatively harmless – misidentifying one animal as another – to the potentially deadly, as when the network driving a self-driving car misinterprets a stop sign as one indicating it is safe to proceed.

A University of Houston philosopher suggests in an article published in Nature Machine Intelligence that common assumptions about the cause of these supposed malfunctions may be mistaken, information that is crucial for assessing the reliability of these networks.

As machine learning and other forms of artificial intelligence become more deeply embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is crucial to understand the source of the apparent failures caused by what researchers call "adversarial examples" – cases in which a deep neural network system misjudges images or other data when confronted with information outside the training inputs used to build the network. They are rare and are called "adversarial" because they are often created or discovered by another machine learning network – a kind of arms race in the machine learning world between more sophisticated methods of creating adversarial examples and more sophisticated methods of detecting and avoiding them.
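A minimal sketch of how such an adversarial example might be generated is shown below, using the Fast Gradient Sign Method (FGSM), one widely used attack. It is illustrative only and not drawn from Buckner's paper; the model, input image, label, and epsilon value are assumed placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Perturb the input so a trained classifier is more likely to mislabel it.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss; keep pixel values valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

To a human observer the perturbed image looks essentially unchanged, yet the classifier's prediction can flip, which is why such examples are used to probe how reliable a network really is.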

“Some of these adversarial events could instead be artifacts, and we need to better understand what they are in order to know how reliable these networks are,” Buckner said.

In other words, the failure could be caused by the interaction between what the network is asked to process and the actual patterns involved. That is not quite the same thing as being completely wrong.

“Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts,” Buckner wrote. “… So there are currently both costs in simply discarding these patterns and dangers in using them naively.”

The adversarial events that cause these machine learning systems to make mistakes aren’t necessarily caused by intentional malfeasance, but that is where the highest risk comes in.

“It means that bad guys could deceive systems that rely on an otherwise reliable network,” Buckner said. “This has security applications.”

A security system based on facial recognition technology could be hacked to allow a breach, for example, or decals could be placed on road signs that cause self-driving cars to misinterpret the sign, even though they appear harmless to the human observer.

Previous research has found that, contrary to earlier assumptions, some adversarial examples occur naturally – cases in which a machine learning system misinterprets data through an unanticipated interaction rather than through an error in the data. They are rare and can be discovered only through the use of artificial intelligence.

But they are real, and Buckner said that suggests a need to rethink how researchers approach anomalies, or artifacts.

These artifacts have not been well understood; Buckner offers the analogy of a lens flare in a photograph, a phenomenon that is not caused by a defect in the camera lens but is instead produced by the interaction of light with the camera.

The lens flare offers potentially useful information, such as the position of the sun, if you know how to interpret it. That, he said, raises the question of whether adversarial events in machine learning that are caused by an artifact also have useful information to offer.

Equally important, Buckner said, this new way of thinking about how artifacts can affect deep neural networks suggests that a misreading by the network should not automatically be taken as evidence that deep learning is invalid.

“Some of these adversarial events could be artifacts,” he said. “We need to know what these artifacts are so we can know how reliable the networks are.”

Source of the story:

Materials provided by the University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

