Even the smartest AI models don’t match human visual processing: Study


Deep convolutional neural networks (DCNNs) don’t see objects the way humans do — using shape perception — and that can be dangerous in real-world artificial intelligence (AI) applications, researchers say.

DCNNs are the type most commonly used to identify patterns in images and video.

“Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition to understand visual processing in the brain,” said researcher James Elder from York University in Toronto.

“These deep models tend to use ‘shortcuts’ when solving complex recognition tasks. While these shortcuts can work in many cases, they can be dangerous in some of the real AI applications we’re currently working on with our industry and government partners,” Elder added.

For the study, published in the journal iScience, the team used novel visual stimuli called “Frankensteins” to explore how the human brain and DCNNs process holistic, configural object properties.

“Frankensteins are just objects that have been taken apart and put back together in the wrong way. As a result, they have all the right local features, but in the wrong places,” Elder said.
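The construction Elder describes — keeping local features intact while scrambling their global arrangement — can be sketched schematically. The study itself used object silhouettes split and recombined at part boundaries; the snippet below is only a simplified analogue (hypothetical `frankenstein` helper) that shuffles horizontal strips of an image array, preserving local content while breaking the overall configuration.

```python
import numpy as np

def frankenstein(image: np.ndarray, n_parts: int = 4, seed: int = 0) -> np.ndarray:
    """Cut an image into horizontal strips and reassemble them in a
    shuffled order: local features survive, but the global
    configuration is destroyed."""
    rng = np.random.default_rng(seed)
    strips = np.array_split(image, n_parts, axis=0)
    order = rng.permutation(n_parts)
    # Ensure the arrangement actually changes.
    while np.array_equal(order, np.arange(n_parts)):
        order = rng.permutation(n_parts)
    return np.concatenate([strips[i] for i in order], axis=0)

# Toy 8x8 "image": every strip keeps its pixel values (right local
# features) but lands in a new position (wrong places).
img = np.arange(64, dtype=float).reshape(8, 8)
scrambled = frankenstein(img)
```

A human-like, configuration-sensitive observer would treat `scrambled` as a different object; a model that relies only on local features would not.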

The researchers found that while the human visual system is confused by Frankensteins, DCNNs are not — revealing an insensitivity to configural object properties.

According to the researchers, adjustments to training and architecture intended to make the networks more brain-like did not produce configural processing, and none of the networks could accurately predict human object judgments.

“We speculate that to match human configuration sensitivity, networks need to be trained to solve a wider range of object tasks beyond category recognition,” Elder noted.

