Bio-inspired and Scalable Machine Vision Methods for Dynamic Scene Analysis


Pipeline of the proposed approach to extract multiple objects from videos: (a) input video; (b) eye-tracking fixation data from multiple subjects, from which dominant gaze patterns are extracted; (c) potential object locations from objectness and optical-flow information; (d) graph-based object extraction framework that fuses objectness, optical flow, and eye-tracking information, optimized using integer programming; (e) objects extracted from the video.
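The fusion-and-selection step in (d) can be sketched as follows. This is a minimal illustration, not the project's actual formulation: the candidate boxes, per-cue scores, fusion weights, and overlap threshold are all assumptions, and the integer program is replaced by an exhaustive search over subsets, which is equivalent for a handful of candidates.

```python
from itertools import combinations

# Hypothetical candidate regions: axis-aligned boxes (x1, y1, x2, y2)
# with illustrative per-cue scores (objectness, optical flow, fixation density).
candidates = [
    {"box": (0, 0, 50, 50),      "objectness": 0.9, "flow": 0.7, "fixation": 0.8},
    {"box": (10, 0, 60, 50),     "objectness": 0.6, "flow": 0.5, "fixation": 0.4},
    {"box": (100, 60, 150, 110), "objectness": 0.8, "flow": 0.9, "fixation": 0.7},
]

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fused_score(c, w=(0.4, 0.3, 0.3)):
    """Weighted fusion of the three cues (weights are assumptions)."""
    return w[0] * c["objectness"] + w[1] * c["flow"] + w[2] * c["fixation"]

def select_objects(cands, max_iou=0.3):
    """Exhaustive stand-in for the integer program: pick the subset of
    candidates maximizing the total fused score, subject to the constraint
    that no two selected boxes overlap by more than max_iou."""
    best, best_total = [], 0.0
    for k in range(1, len(cands) + 1):
        for subset in combinations(cands, k):
            if any(iou(a["box"], b["box"]) > max_iou
                   for a, b in combinations(subset, 2)):
                continue
            total = sum(fused_score(c) for c in subset)
            if total > best_total:
                best, best_total = list(subset), total
    return best

selected = select_objects(candidates)
# The two heavily overlapping candidates conflict, so the selection keeps
# the stronger of the pair plus the disjoint third box.
```

At real scale this subset-selection problem is posed as a 0-1 integer program over the candidate graph and handed to an ILP solver rather than enumerated exhaustively.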


The primary goal of this project is to bridge the gap between human performance and computer vision algorithms by exploiting contextual information, chiefly eye-tracking and fMRI data. Accordingly, we have developed detection and saliency algorithms that mimic the way humans process complex visual stimuli; these human-inspired algorithms have outperformed state-of-the-art text detection and visual saliency methods. We have also studied how camera focus and scene context, in the form of object co-occurrence, affect visual attention. Ongoing research extends these benefits to dynamic scenes, focusing on improving object extraction from videos using eye movement information from multiple subjects. We are also investigating the importance of the different visual cues humans use for person recognition by systematically eliminating individual components; this analysis will be coupled with contextual information in the form of fMRI and eye-tracking data.
