Comparing Visual Reasoning in Humans and AI

Abstract

Recent advances in natural language processing and computer vision have produced AI models that interpret simple scenes at human levels. Yet we lack a complete understanding of how humans and AI models differ in their interpretation of more complex scenes. We created a dataset of complex scenes containing human behaviors and social interactions, and asked both AI models and human participants to describe each scene with a single sentence. We quantified agreement with a similarity metric comparing each AI or human description against a ground truth of five additional human descriptions of the same scene. Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes. Using an experimental manipulation that occludes different spatial regions of the scenes, we assessed how machines and humans differ in which image regions they use to understand the scenes. Together, our results are a first step toward understanding how machines fall short of human visual reasoning with complex scenes depicting human behaviors.
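The abstract does not specify which similarity metric was used to compare a description against the five reference descriptions. As an illustration only, the sketch below scores a candidate sentence against multiple references with a simple bag-of-words cosine similarity, averaged over references; the function names (`cosine_sim`, `agreement`) and the metric itself are assumptions, not the paper's actual method (which may use a learned semantic similarity measure).

```python
from collections import Counter
import math

def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def agreement(candidate: str, references: list[str]) -> float:
    """Mean similarity of a candidate scene description to a set of
    reference descriptions (here: the five other humans' sentences).
    Illustrative only -- a real metric would use semantic similarity."""
    cand = Counter(candidate.lower().split())
    refs = [Counter(r.lower().split()) for r in references]
    return sum(cosine_sim(cand, r) for r in refs) / len(refs)
```

Under such a scheme, an AI description would receive a lower agreement score than a typical human description whenever its wording (or, with a semantic metric, its meaning) diverges from the consensus of the five reference sentences.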

Authors
Shravan Murlidaran, William Yang Wang, Miguel P. Eckstein
Type
Peer-Reviewed Conference Presentation
Journal
International Conference on Learning Representations (ICLR)
City
Vienna
Country
Austria