Deep convolutional neural networks (DCNNs) are the type of neural network most commonly used to identify patterns in images and video.
“Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition to understand visual processing in the brain,” said researcher James Elder.
“These deep models tend to use ‘shortcuts’ when solving complex recognition tasks. While these shortcuts can work in many cases, they can be dangerous in some of the real-world AI applications we’re currently working on with our industry and government partners,” he added.
For the study, published in the journal iScience, the researchers used novel visual stimuli called “Frankensteins” to probe how the brain and DCNNs process holistic, configural object properties.
“Frankensteins are just objects that have been taken apart and put back together in the wrong way. As a result, they have all the right local features, but in the wrong places,” Elder said.
The researchers found that while the human visual system is confused by Frankensteins, DCNNs are not, revealing an insensitivity to configural object properties.
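As a rough, illustrative sketch only (not the stimuli or protocol used in the study), the Python snippet below shows one simple way this kind of insensitivity can be probed: a pretrained ImageNet classifier (ResNet-50 here, an arbitrary choice) is given an intact image and a block-scrambled version that preserves local features while destroying the global configuration, and the two predictions are compared. The input file name "object.jpg" and the block-shuffling scheme are hypothetical placeholders.

```python
# Illustrative sketch: does a pretrained DCNN keep its prediction when an
# image's parts are shuffled (local features intact, configuration destroyed)?
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def scramble_blocks(img_tensor, grid=4):
    """Shuffle a grid x grid arrangement of blocks in a CHW image tensor."""
    c, h, w = img_tensor.shape
    bh, bw = h // grid, w // grid
    blocks = [img_tensor[:, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
              for i in range(grid) for j in range(grid)]
    perm = torch.randperm(len(blocks)).tolist()
    rows = [torch.cat([blocks[perm[i * grid + j]] for j in range(grid)], dim=2)
            for i in range(grid)]
    return torch.cat(rows, dim=1)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

img = preprocess(Image.open("object.jpg").convert("RGB"))  # hypothetical input image
with torch.no_grad():
    intact_pred = model(img.unsqueeze(0)).argmax(dim=1).item()
    scrambled_pred = model(scramble_blocks(img).unsqueeze(0)).argmax(dim=1).item()

# If the label frequently survives scrambling across many images, the network
# is relying on local features rather than the global configuration.
print(intact_pred, scrambled_pred)
```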
According to the researchers, modifications to training and architecture aimed at making the networks more brain-like did not lead to configural processing, and none of the networks could accurately predict human object judgements.
“We speculate that to match human configural sensitivity, networks need to be trained to solve a wider range of object tasks beyond category recognition,” Elder noted.