From 4,675 fully tagged bear faces in DSLR photographs, taken at bear research and observation sites at Brooks River, Alaska, and Knight Inlet, British Columbia, they randomly split the images into training and test datasets. Once trained on 3,740 bear faces, the deep learning algorithm went to work “unsupervised,” said Dr. Clapham, to see how well it could spot the differences between known bears in the remaining 935 photographs.
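The article doesn’t say how the split was made beyond “randomly.” A minimal sketch of such an 80/20 split, assuming the tagged photographs sit in a single folder (the directory name and use of scikit-learn are assumptions, not details from the study):

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

# Hypothetical directory holding the 4,675 tagged bear-face photographs.
image_paths = sorted(Path("bear_faces").glob("*.jpg"))

# Reproduce the split described above: 935 images held out for testing,
# leaving 3,740 for training (roughly an 80/20 split).
train_paths, test_paths = train_test_split(image_paths, test_size=935, random_state=0)
```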
First, the deep learning algorithm finds the bear’s face using distinctive landmarks such as the eyes, tip of the nose, ears, and top of the forehead. Then the app rotates the face to extract, code and classify facial features.
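The article doesn’t name the software stack, but a detect-landmarks, align, and encode pipeline of this kind is commonly built with the dlib library. The sketch below assumes pre-trained model files for a bear-face detector, a landmark predictor, and an embedding network; the file names are placeholders, not the researchers’ actual models.

```python
import dlib
import numpy as np

# Hypothetical pre-trained models: a face detector, a landmark predictor
# (eyes, nose tip, ears, top of forehead), and an embedding network.
detector = dlib.simple_object_detector("bear_face_detector.svm")
shape_predictor = dlib.shape_predictor("bear_landmarks.dat")
encoder = dlib.face_recognition_model_v1("bear_embedding.dat")

def encode_bear_face(image_path):
    """Detect a bear face, align it via landmarks, and return a feature vector."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img)
    if not faces:
        return None
    # Locate the facial landmarks on the first detected face.
    shape = shape_predictor(img, faces[0])
    # Rotate and crop the face into a canonical pose before encoding it.
    chip = dlib.get_face_chip(img, shape, size=150, padding=0.25)
    # Fixed-length descriptor used to compare individual bears.
    return np.array(encoder.compute_face_descriptor(chip))
```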
The system identified the bears with an 84 percent accuracy rate, correctly distinguishing between known bears such as Lucky, Toffee, Flora and Steve.
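The article doesn’t spell out how a new photograph is matched to a named bear, but a common approach with face descriptors is nearest-neighbor matching against a gallery of known individuals. The sketch below uses random placeholder vectors and a hypothetical distance threshold purely for illustration.

```python
import numpy as np

# Placeholder gallery: one stored descriptor per known bear.
known_bears = {name: np.random.rand(128) for name in ["Lucky", "Toffee", "Flora", "Steve"]}

def identify(descriptor, gallery, threshold=0.6):
    """Return the closest known bear, or None if no stored face is close enough."""
    name, dist = min(
        ((n, np.linalg.norm(descriptor - d)) for n, d in gallery.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < threshold else None
```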
But how does the algorithm actually distinguish those bears? Before the era of deep learning, “we tried to imagine how humans perceive faces and how we distinguish individuals,” said Alexander Loos, a research engineer at the Fraunhofer Institute for Digital Media Technology in Germany, who was not involved in the study but has collaborated with Dr. Clapham in the past. Programmers had to manually enter face descriptors into a computer.
But with deep learning, programmers feed images into a neural network that figures out the best way to identify individuals. “The network itself extracts the features,” said Dr. Loos, which is a huge plus.
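To make that contrast concrete, here is a minimal, hypothetical PyTorch sketch of a network that learns its own features from raw face crops end to end; the layer sizes and the number of known bears are placeholders, not the study’s actual architecture.

```python
import torch.nn as nn

class BearEmbeddingNet(nn.Module):
    """Learns a face descriptor directly from pixels instead of hand-coded features."""

    def __init__(self, embedding_dim=128, num_bears=100):  # num_bears: placeholder count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed = nn.Linear(128, embedding_dim)        # learned face descriptor
        self.classify = nn.Linear(embedding_dim, num_bears)  # one output per known bear

    def forward(self, x):
        z = self.embed(self.features(x))
        return z, self.classify(z)
```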
He also cautioned that “it’s basically a black box. You don’t know what it’s doing,” and that if the dataset being analyzed is unintentionally biased, errors can emerge.