A Feedforward Convolutional Neural Network with a Few Million Neurons Learns from Images to Covertly Attend to Cues and Context like Humans and an Optimal Bayesian Observer

Abstract

Human behavioral experiments have led to influential conceptualizations of visual attention, such as a serial processor or a limited-resource spotlight. There is growing evidence that simpler organisms such as insects show behavioral signatures associated with human attention. Can such capabilities be learned without these conceptualizations of human attention? We show that a feedforward convolutional neural network (CNN) with a few million neurons, trained on noisy images to detect targets, learns to utilize predictive cues and context. We demonstrate that the CNN predicts human performance and gives rise to the three most prominent behavioral signatures of covert attention: Posner cueing, set-size effects in search, and contextual cueing. The CNN also approximates an ideal Bayesian observer that has full prior knowledge of the statistical properties of the noise, targets, cues, and context. These results help explain how even simple biological organisms can show human-like visual attention by implementing simple, neurobiologically plausible computations.
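The ideal Bayesian observer mentioned above can be illustrated with a minimal simulation. The sketch below is not the paper's implementation; it assumes a simple two-location cued-localization task with additive Gaussian noise, a hypothetical signal strength `d_prime`, and a hypothetical cue validity `p_valid`. The observer weights the likelihood ratio at each location by the cue-induced prior, which reproduces the Posner cueing effect (better performance on validly cued trials).

```python
import numpy as np

rng = np.random.default_rng(0)
d_prime = 1.5   # assumed target signal strength in noise-standard-deviation units
p_valid = 0.8   # assumed probability that the cue points to the target location

def ideal_observer_trial(target_loc, cued_loc):
    # Scalar Gaussian responses at the two locations; the target adds d_prime.
    x = rng.normal(0.0, 1.0, size=2)
    x[target_loc] += d_prime
    # Likelihood ratio of "target at location i" vs. "noise only" at each location.
    lr = np.exp(d_prime * x - d_prime**2 / 2)
    # Cue-induced prior over target location.
    prior = np.full(2, 1 - p_valid)
    prior[cued_loc] = p_valid
    # Bayesian decision: report the location with the highest posterior.
    return int(np.argmax(prior * lr))

n = 20000
valid_acc = sum(ideal_observer_trial(0, 0) == 0 for _ in range(n)) / n
invalid_acc = sum(ideal_observer_trial(0, 1) == 0 for _ in range(n)) / n
print(f"valid-cue accuracy:   {valid_acc:.3f}")
print(f"invalid-cue accuracy: {invalid_acc:.3f}")
```

The gap between valid-cue and invalid-cue accuracy arises purely from optimal use of the cue prior, with no attentional spotlight or capacity limit built in, mirroring the abstract's point that cueing effects can emerge from simple computations.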

Authors
Sudhanshu Srivastava, William Wang, and Miguel P. Eckstein
Date
Journal
PsyArXiv Preprints