
Short Bio

My name is Ajay Mishra. I am currently a Research Scientist at Intelligent Automation, Inc. Until recently, I was a postdoc at the Institute for Advanced Computer Studies (UMIACS), University of Maryland, College Park, working with Prof. Yiannis Aloimonos on active vision and robotics. I obtained my doctorate from the National University of Singapore (NUS) in Feb 2011. Prior to that, I worked at STMicroelectronics as a design engineer for two years after completing my B.Tech at IIT Kanpur (India) in 2003.

My email address is [last_name]ka@umiacs.umd.edu. In case you are wondering, my last name is "mishra".

The picture on the right shows my small but useful robot; click it to find out more about it.

Download CV

Research Interest

My ultimate research goal is to develop a robust vision framework using intuitions and insights drawn from our understanding of how the Human Visual System (HVS) works. The most significant step I have taken toward that goal so far is to incorporate fixation into visual processing. The inspiration came from a rather simple observation: even though our eyes continuously fixate on different things in a scene in order to see/perceive them, most computer vision algorithms still attach no significance to fixation. We believe that fixation (a component of visual attention) is a key reason the human visual system works so well. In fact, I have put together a small psychophysical experiment so you can see for yourself how critical fixation can be for visual perception.

In our ICCV 2009 paper, we showed that segmenting a fixated object (or region), instead of segmenting the entire scene all at once, is an easier and better-defined problem. Given a point inside the object of interest, the algorithm extracts the "optimal" closed contour around that point; this closed contour is the boundary of the object. To segment multiple objects, we simply fixate inside each object and carry out the segmentation process for each fixation.
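To give a flavor of the idea, here is a deliberately simplified sketch (not the released code, and not the paper's actual optimization): it samples an edge map on polar rays around the fixation point and uses a dynamic program with a smoothness constraint to trace a low-cost contour around the fixation. The function name and all parameters are illustrative assumptions.

```python
import numpy as np

def segment_from_fixation(edge_map, fixation, n_angles=64, max_radius=None):
    """Toy sketch of fixation-based segmentation (illustrative only):
    sample the edge map on polar rays around the fixation point, then
    use dynamic programming to find a contour of maximal edge strength."""
    h, w = edge_map.shape
    fy, fx = fixation
    if max_radius is None:
        max_radius = min(fy, fx, h - 1 - fy, w - 1 - fx)
    radii = np.arange(1, max_radius + 1)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)

    # Cost of placing the boundary at radius r along angle a: 1 - edge strength.
    cost = np.empty((len(radii), n_angles))
    for j, a in enumerate(angles):
        ys = np.clip(np.round(fy + radii * np.sin(a)).astype(int), 0, h - 1)
        xs = np.clip(np.round(fx + radii * np.cos(a)).astype(int), 0, w - 1)
        cost[:, j] = 1.0 - edge_map[ys, xs]

    # DP across angles; the boundary radius may change by at most 1 per step.
    R = len(radii)
    dp = cost[:, 0].copy()
    back = np.zeros((R, n_angles), dtype=int)
    for j in range(1, n_angles):
        prev = np.full((R, 3), np.inf)
        for dr, k in ((-1, 0), (0, 1), (1, 2)):
            lo, hi = max(0, -dr), min(R, R - dr)
            prev[lo:hi, k] = dp[lo + dr:hi + dr]
        best = prev.argmin(axis=1)
        back[:, j] = np.arange(R) + (best - 1)
        dp = prev[np.arange(R), best] + cost[:, j]

    # Trace back the minimal-cost contour (closure not strictly enforced here).
    r = int(dp.argmin())
    contour_r = np.empty(n_angles, dtype=int)
    for j in range(n_angles - 1, -1, -1):
        contour_r[j] = r
        r = back[r, j]
    return radii[contour_r], angles
```

On a synthetic edge map containing a ring around the fixation point, the recovered radii hug the ring; the real formulation additionally enforces a closed contour, which this sketch omits for brevity.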

To fixate inside objects automatically, we use another important characteristic of the HVS: the concept of border ownership. Essentially, cells in our visual cortex not only detect boundary edges but also record a pointer to the object side of each edge. Using this border-ownership information, we automatically select fixation points inside all possible objects in the scene and segment them. Below is an example of our automatic segmentation process:
The left and right sides show the fixation points and the corresponding segmentation results, respectively.
For details on how the fixation points are selected, refer to our page on "simple" objects and/or our paper in RSS 2011.
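A minimal sketch of how border ownership can drive fixation selection (a hypothetical helper, not the RSS 2011 method): each boundary pixel casts a vote a few pixels toward its object side, and peaks in the resulting vote map become candidate fixation points inside objects. The function name, the voting step size, and the single-peak selection are all illustrative assumptions.

```python
import numpy as np

def fixation_from_ownership(boundary_pts, ownership_dirs, shape, step=3):
    """Hedged sketch of border-ownership voting (illustrative only):
    shift each boundary pixel along its ownership direction and
    accumulate votes; a strong peak lies inside an object."""
    votes = np.zeros(shape)
    for (y, x), (dy, dx) in zip(boundary_pts, ownership_dirs):
        vy = int(round(y + step * dy))   # move toward the object side
        vx = int(round(x + step * dx))
        if 0 <= vy < shape[0] and 0 <= vx < shape[1]:
            votes[vy, vx] += 1
    # For simplicity, return only the single strongest vote as a fixation;
    # a real system would pick one local maximum per object.
    return np.unravel_index(votes.argmax(), votes.shape)
```

For a circular boundary whose ownership pointers all face inward, the votes pile up near the circle's center, which is exactly where one would want to fixate before running the segmentation step.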

News

*A new version of the segmentation code will be available on June 04, 2012.

*A new version of our Psychophysical Experiment, showing the critical role of fixation in perception, is online. For the previous version, click here

*Check out the new homepage of my Active Agent (AAA) (June 28, 2011)

*C++ source code (for both Windows and UNIX/Linux) is available. (Updated Apr 03, 2011)

*Details of my ECCV 2010 demo (Sep 2010) are now available!


Publications

PhD Thesis

Journal

Conferences

Book chapters



Awards

Won first prize at the Semantic Robot Vision Challenge 2008 (software league), held with CVPR 2008 in Alaska (USA).


Demos

@ECCV in Greece, 2010
@U. of MD, Maryland Robotics Day, 2010

Invited Talks

@NIST, Nov 29, 2010
@Willow Garage, Mar 01, 2010