Special Guest Lectures
Gaze-contingent Image Analysis
Umesh Rajashekar, New York University
Postdoctoral Fellow, Howard Hughes Medical Institute & Lab for Computational Vision
Johnston Hall 338
April 26, 2011 - 09:30 am
Abstract:
Despite a large field of view, our eyes process only a tiny central region in great detail, while resolution drops rapidly toward the periphery. To assimilate visual information from this multi-resolution input, we move our eyes and sample the scene 3-4 times each second. Understanding how the human visual system selects and sequences image regions for scrutiny is not only important for a better understanding of biological vision; it is also a fundamental component of active artificial vision systems. With the availability of inexpensive and accurate eye trackers, it is now feasible to record and analyze the eye movements of human observers with ease. In this talk, I will demonstrate that accurate eye tracking, in tandem with analysis of the stimulus at the point of gaze, can be used to provide insight into the visual selection process in human observers. These insights will then be used to develop a simple algorithm that deploys human-like eye movements in novel natural scenes.
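To make the idea of "analysis of the stimulus at the point of gaze" concrete, the sketch below extracts image patches centered on recorded fixation coordinates and compares a simple local statistic (RMS contrast) against patches drawn at random locations. This is an illustrative assumption about how such an analysis might be set up, not the speaker's actual method; the function names, patch size, and choice of statistic are hypothetical.

import numpy as np

def extract_patch(image, x, y, size=32):
    """Return a square patch centered on gaze coordinates (x, y),
    or None if the patch would fall outside the image."""
    half = size // 2
    top, left = y - half, x - half
    if top < 0 or left < 0 or top + size > image.shape[0] or left + size > image.shape[1]:
        return None
    return image[top:top + size, left:left + size]

def rms_contrast(patch):
    """Root-mean-square contrast: std of luminance divided by its mean."""
    mean = patch.mean()
    return patch.std() / mean if mean > 0 else 0.0

def gaze_contingent_stats(image, fixations, size=32, n_random=1000, rng=None):
    """Compare a local image statistic at fixated locations against
    randomly sampled locations in the same image."""
    rng = rng or np.random.default_rng(0)
    fixated = [rms_contrast(p) for (x, y) in fixations
               if (p := extract_patch(image, x, y, size)) is not None]
    h, w = image.shape
    # Random control points, kept far enough from the border that patches fit.
    random_pts = zip(rng.integers(size, w - size, n_random),
                     rng.integers(size, h - size, n_random))
    random = [rms_contrast(extract_patch(image, x, y, size)) for (x, y) in random_pts]
    return np.mean(fixated), np.mean(random)

# Example with synthetic data: a noise image and arbitrary "fixation" points.
if __name__ == "__main__":
    img = np.random.default_rng(1).random((480, 640))
    fixations = [(320, 240), (100, 150), (500, 400)]
    fix_mean, rand_mean = gaze_contingent_stats(img, fixations)
    print(f"RMS contrast at fixations: {fix_mean:.3f}, at random points: {rand_mean:.3f}")

In practice, a systematic difference between the fixated and random distributions of such a statistic is one way to probe what image properties attract gaze; the real analysis in this line of work is considerably richer than this toy comparison.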
Speaker's Bio:
Umesh Rajashekar is currently a postdoctoral fellow at the Howard Hughes Medical Institute & Laboratory for Computational Vision at New York University. He received his Ph.D. (2005) and M.S. (2000) in electrical and computer engineering from the University of Texas at Austin, and his Bachelor of Engineering in electronics and communication engineering from the Karnataka Regional Engineering College in India (1998). Rajashekar's research interests are in perceptually motivated image analysis, an interdisciplinary field that focuses on the implications of human perception for engineering applications.