Thursday, February 2, 2012

The Hybrid Tracking Algorithm

     Since the project I'm working on involves implementing augmented reality, I chose to read an article detailing a related tracking algorithm.  The algorithm builds on past approaches to camera tracking and blends them together to take advantage of each one's strengths.  Early systems used GPS and magnetic sensors to obtain the camera's position and the direction it is aimed.  Accelerometers and gyroscopes were later added to measure the camera's orientation more accurately, which let augmented reality devices determine much more effectively where to draw objects on the screen and how to orient them.  Computer vision techniques were also introduced to detect objects such as buildings, adding another level of accuracy for realistic placement of virtual imagery.
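The gyroscope-plus-accelerometer idea above can be illustrated with a minimal complementary filter: the gyro integrates smoothly but drifts over time, while the accelerometer gives a noisy but drift-free absolute tilt, so blending the two yields a stable orientation estimate. This is only a sketch of the general sensor-fusion principle, not the paper's actual filter; all names and the simulated readings are my own illustrative assumptions.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend an integrated gyro rate with an accelerometer tilt reading.

    alpha near 1 trusts the gyro over short timescales (smooth, but
    drifts); the small (1 - alpha) share of the accelerometer reading
    slowly corrects that drift. Illustrative only, not the paper's filter.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Hypothetical readings: the camera tilts at a steady 10 deg/s while the
# accelerometer reports a noisy absolute tilt around the same motion.
angle = 0.0
for step in range(100):
    t = step * 0.01
    gyro_rate = 10.0                                  # deg/s from the gyro
    accel_angle = 10.0 * t + 0.5 * math.sin(40 * t)   # noisy absolute tilt
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)

print(round(angle, 2))  # tracks the true tilt of roughly 10 degrees
```

With alpha = 0.98 the estimate follows the true motion closely while the sinusoidal accelerometer noise is attenuated by the small (1 - alpha) weight.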

     In this article the authors discuss how they merge these different data sources.  The goal is a highly accurate, efficient algorithm that can track real-world scenes in real time.  The GPS and gyroscope, along with the magnetic sensors and accelerometers, let images drawn on screen appear in the right place and with the proper orientation.  To detect surfaces, the algorithm renders textured 3D models of the scene, runs edge detection on those renderings, and matches the predicted edges against the corresponding edges in the live video.  It then compares successive video frames to detect camera motion and compensate for it.  This allows the augmented reality device to display an augmented image on top of the underlying video accurately and in real time, and also allows for realistic occlusion of hidden surfaces.
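The edge-matching step can be sketched in miniature: sample points along a predicted model edge, search along each point's normal for the strongest intensity gradient in the video frame, and fit a correction from the measured offsets. The paper solves for a full 6-DOF pose update; here I reduce it to a 1-D translation so the idea stays self-contained. The synthetic frame, function name, and parameters are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def search_edge_offsets(image, sample_points, normals, search_range=5):
    """For each projected model-edge sample point, step along its normal
    and record where the image gradient is strongest. Returns the signed
    offset (in pixels) from the predicted edge to the observed one."""
    offsets = []
    for (x, y), (nx, ny) in zip(sample_points, normals):
        best_step, best_grad = 0, 0.0
        for step in range(-search_range, search_range):
            px = int(round(x + step * nx)); py = int(round(y + step * ny))
            qx = int(round(x + (step + 1) * nx)); qy = int(round(y + (step + 1) * ny))
            if not (0 <= px < image.shape[1] and 0 <= py < image.shape[0]
                    and 0 <= qx < image.shape[1] and 0 <= qy < image.shape[0]):
                continue
            grad = abs(float(image[qy, qx]) - float(image[py, px]))
            if grad > best_grad:
                best_grad, best_step = grad, step
        offsets.append(best_step + 0.5)  # edge sits between the two samples
    return np.array(offsets)

# Synthetic video frame: a bright square whose left edge falls between
# x = 22 and x = 23, while the rendered model predicts it at x = 20.
frame = np.zeros((40, 40))
frame[5:35, 23:35] = 255.0

sample_points = [(20.0, y) for y in range(10, 30, 5)]  # predicted edge samples
normals = [(1.0, 0.0)] * len(sample_points)            # edge normal points in +x

offsets = search_edge_offsets(frame, sample_points, normals)
shift = offsets.mean()  # the 1-D least-squares fit reduces to the mean
print(shift)
```

In the full algorithm each offset constrains the camera pose along that edge's normal, and all constraints are combined into a single pose update per frame rather than a plain translation.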


References:
Reitmayr, G., and T. W. Drummond. "Going out: Robust model-based tracking for outdoor augmented reality." 2006 IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE, 2006: 153-153.
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4079263&tag=1
