Model-Based Pose Estimation of 3-D Objects
Machines need the ability to determine the pose of objects in their environment in order to interact with them reliably and intelligently. Estimating the three-dimensional position and orientation of known objects from camera images is an important sensory skill for intelligent robots, with applications such as online path or task planning, world-model updating, and visual servoing. However, model-based pose estimation requires explicitly or implicitly solving the extremely difficult correspondence problem of relating image features to the corresponding features in the model description.
Because even an approximate pose solution is still very hard to find analytically, a learning approach seems more appropriate. We have investigated a neural network approach to model-based object pose estimation, studying Kohonen maps and several of their variants for this purpose. The performance of these networks turns out to depend heavily on the mathematical representation of pose in general, and of orientation in particular; it is also advantageous to choose the topology of the neural network to match the representation used. Experimental results on both simulated and real images demonstrate that a pose estimate within the required accuracy is found in more than 90% of all cases. The current implementation operates at near frame rate on real-world images.
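To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a supervised Kohonen-map variant that learns a mapping from image feature vectors to pose. Each map node stores both an input prototype and a pose label; the pose is encoded as a 3-D position plus a unit quaternion, illustrating one possible orientation representation. All dimensions, names, and the toy training data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = (10, 10)   # 2-D map topology (chosen to match a 2-D input manifold)
FEAT_DIM = 8      # length of an image feature vector (assumed)
POSE_DIM = 7      # 3-D position + unit quaternion orientation

# Codebook: each map node holds an input prototype and a pose estimate.
proto = rng.normal(size=GRID + (FEAT_DIM,))
pose = rng.normal(size=GRID + (POSE_DIM,))

# Grid coordinates of every node, used for the neighborhood function.
coords = np.stack(np.meshgrid(np.arange(GRID[0]), np.arange(GRID[1]),
                              indexing="ij"), axis=-1)

def best_matching_unit(x):
    """Index of the node whose prototype is closest to feature vector x."""
    d = np.linalg.norm(proto - x, axis=-1)
    return np.unravel_index(np.argmin(d), GRID)

def train_step(x, y, lr, sigma):
    """One supervised-SOM update: move prototypes AND stored poses toward
    (x, y), weighted by a Gaussian neighborhood around the winner."""
    bmu = best_matching_unit(x)
    dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = (lr * np.exp(-dist2 / (2.0 * sigma ** 2)))[..., None]
    proto[:] += h * (x - proto)
    pose[:] += h * (y - pose)

# Toy training data: a fixed random linear feature-to-pose map stands in
# for real (image features, ground-truth pose) pairs.
W = rng.normal(size=(FEAT_DIM, POSE_DIM))
for t in range(2000):
    x = rng.normal(size=FEAT_DIM)
    y = x @ W
    frac = t / 2000.0  # decay learning rate and neighborhood width
    train_step(x, y, lr=0.5 * (1.0 - frac), sigma=3.0 * (1.0 - frac) + 0.5)

def estimate_pose(x):
    """Look up the pose stored at the best-matching unit; the quaternion
    part is renormalized so it stays a valid orientation."""
    p = pose[best_matching_unit(x)].copy()
    p[3:] /= np.linalg.norm(p[3:])
    return p
```

Pose lookup is then a single nearest-neighbor search over the map, which is what makes near-frame-rate operation plausible; accuracy hinges on how well the chosen pose representation and map topology fit the object's pose manifold.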
Please see my publications page for a list of related papers.