It's true! The article I read is actually from 1999, but it details an application of augmented reality our group had not yet considered. The goal of the authors' group was to explore how well augmented reality could enhance user interaction with locations. To accomplish this, they designed a system that allowed users to roam Columbia University's campus and discover documentary material on three main topics.
Wearing gear that looks like something straight out of a familiar movie series, users don a backpack computer, a tablet PC, and a head-mounted display and set out around campus. As they walk, they can find flags dotted around campus, located and tracked by GPS and a magnetometer orientation tracker. Users look at a virtual identifier, in this case a colored flag, and pull up multimedia content specific to that location, which can also point the user to related topics. The system handles images, web sites, videos, and 360-degree images using the backpack computer, tablet PC, and/or head-mounted display (depending on the media). Looking at this article, I see an interesting use case that would be entirely possible with our software, which could allow people to create virtual tours with videos, images, and audio specific to locations.
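As a rough sketch of how such a system might decide which flags to show, here is a minimal GPS-plus-compass check: compute the distance and bearing from the user to a flag, then test whether the flag falls inside the camera's field of view. The function names, the 150 m range, and the 40-degree field of view are my own assumptions, not details from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing from the user toward a flag, in [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def flag_in_view(user, heading_deg, flag, max_m=150, fov_deg=40):
    """True if the flag is in range and within the camera's field of view."""
    d = haversine_m(user[0], user[1], flag[0], flag[1])
    b = bearing_deg(user[0], user[1], flag[0], flag[1])
    # signed angular difference between flag bearing and device heading
    off = abs((b - heading_deg + 180) % 360 - 180)
    return d <= max_m and off <= fov_deg / 2
```

With the magnetometer supplying `heading_deg`, the system only needs to run this check against each flag's stored coordinates to know what to render.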
Source:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16.6539&rep=rep1&type=pdf
Wednesday, April 4, 2012
Thursday, March 29, 2012
AR in Education
Contrary to what I expected to find in this article, the main focus was on advocating for the use of augmented reality rather than on concrete examples of augmented reality being used to aid education. The authors write from the context of living in Mexico and begin with the motivation that modern-day education in Mexico emphasizes memorization rather than actual learning, due to the expectations of school systems and poor training on the part of teachers (as can also be seen in the US, for that matter). The article's main point is that even though many children now grow up in the digital age, familiar with advanced technology, we make poor use of the tools at our disposal for teaching interactively and building understanding.
The project the paper was written for is funded by the Mexican government in an attempt to increase the quality of education in Mexico. The authors argue that while augmented reality's current success lies in marketing, architecture, and entertainment, it can be just as effective in changing the way education works. While at first the thought of integrating technology with education seems a bit awkward, it's not difficult to see the benefits that well-developed human-computer interaction can bring to nurturing understanding. The difficulty with using augmented reality in instruction, however, most likely lies with the instructors. Teachers are often not technologically adept and would require specialized training to make use of the technology provided for them.
Source:
http://books.google.com/books?hl=en&lr=&id=23tcDsn2g_wC&oi=fnd&pg=PA481&dq=augmented+reality&ots=bfCcsW1Ysm&sig=PYjNkfqi6VqJSR555QBhdT2mMBA#v=onepage&q=augmented%20reality&f=true
Collaborative AR
This time I found an interesting article from '95, which also happens to be from Sony's research team. It marks the earliest of the Sony articles I've found, but it relates to our own project by more than just being about augmented reality. The system it describes is designed to support group collaboration on inspecting and transforming 3D models. The motivating example is of car designers having to look at physical models to discuss features, even though their CAD tools already generate 3D models.
The system was built on a palmtop device with several external sensors tied in for positioning information. Depending on the tilt of the device with respect to the tracking coordinates, different transformations can be applied to the displayed model. The system allowed multiple users to examine the same model in an augmented view, and all could watch changes in near real time. Only one user could make modifications at a time, though; control is handed off when the current controller designates another user as the control master.
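A minimal sketch of how device tilt could be mapped to a model transformation: read pitch and yaw angles from the tracker and build a rotation to apply to the model's vertices. The axis conventions and function names here are my own assumptions, not the paper's actual math.

```python
import math

def tilt_to_rotation(pitch_deg, yaw_deg):
    """Build a 3x3 rotation from device tilt: pitch about x, then yaw about y."""
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    rx = [[1, 0, 0],
          [0, math.cos(p), -math.sin(p)],
          [0, math.sin(p),  math.cos(p)]]
    ry = [[ math.cos(y), 0, math.sin(y)],
          [0, 1, 0],
          [-math.sin(y), 0, math.cos(y)]]
    # compose: apply pitch first, then yaw (matrix product ry * rx)
    return [[sum(ry[i][k] * rx[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_rotation(rot, v):
    """Rotate a single vertex [x, y, z]."""
    return [sum(rot[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Tilting the palmtop 90 degrees forward, for instance, swings a point on the model's y-axis onto its z-axis.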
This system, while not quite what we are aiming to accomplish, does have some similarities. The base idea is a collaboration system, which is one important aspect of what we are aiming for as well. And as in this implementation, our users cannot make changes simultaneously.
Source:
http://www.sonycsl.co.jp/person/rekimoto/papers/vsmm96.pdf
Thursday, March 22, 2012
Sony CyberCode
Following up on Sony's earlier research paper from 1998, I found this article from 2000 by the same team. In this article Sony's developers created a system they dubbed "CyberCode". CyberCode is an augmented reality system designed to identify a tracking marker and respond with predefined actions. The software acts as a foundation for several applications that the Sony team created to demonstrate usability. Examples ranged from more interactive museum layouts, to extending desktop space onto the table your computer sits on, to giving new sense to the words "drag and drop" by allowing paper codes placed on printers to execute a print command.
While not going in the same direction as the previous article, this one also discusses the other identifiers that were considered and explains the pros and cons of each. For example, infrared beacons are unobtrusive and can be detected more reliably than a scanned code; the downside is having to mount the devices and replace dead batteries. 1D barcodes were also considered, but they required more specialized scanning devices than were commercially feasible for some applications. The team settled on 2D marker patterns for their ease of placement and rapid development.
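To illustrate the general idea behind a 2D matrix marker, here is a toy decoder that packs a grid of black/white cells into an integer ID. This is a simplified stand-in of my own: the real CyberCode layout also includes a guide bar and corner cells for locating and orienting the code in the camera image, plus error checking, all of which this sketch omits.

```python
def decode_marker(cells):
    """Pack a grid of cells (1 = black, 0 = white) into an integer ID.

    `cells` is a list of rows, read top-to-bottom, left-to-right,
    as if the marker had already been located and deskewed.
    """
    code = 0
    for row in cells:
        for bit in row:
            code = (code << 1) | bit
    return code
```

Each distinct pattern yields a distinct ID, which the system can then look up in a table of predefined actions, such as "print this document".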
Source: http://hercules.infotech.monash.edu.au/EII-CAC/CAPapers/Rekimoto_CyberCodeDesignAugmentedReality_ACM_DARE_2000_pp1-10.pdf
Thursday, March 8, 2012
Pointing Blindly
I came across an article that deviated from the realm of augmented reality yet was closely related. It begins with a brief overview of augmented reality and virtual reality applications: their premises and their implementations. The authors recognized that, due to hardware diversification among personal smart phones, it is unlikely that the majority of users have the same technologies at their disposal. They therefore investigated the capabilities of a pointing-based interaction that provides no visual feedback.
The decision to remove visual feedback is based on hardware limitations, but also on the observation that users tend to shift their attention to displays rather than the real world. The authors set out to test whether simple pointing-based interaction could yield comparable accuracy in tracking targets. The result of their research was that pointing-based interaction, built on a minimalist hardware configuration of an accelerometer and compass, was unpromising. The pitch and roll of the device as it was pointed between targets dramatically affected the hardware's ability to retrieve an accurate orientation. Further, their initial hypothesis that a stationary, target-facing posture would improve consistency was debunked by results showing that subjects allowed free motion reached higher accuracy.
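The pitch-and-roll problem they hit has a standard mitigation: tilt-compensating the magnetometer reading using accelerometer-derived pitch and roll before computing the heading. Here is a minimal sketch of that correction; the axis and sign conventions are assumptions of mine, not taken from the paper, and real devices also need calibration for magnetic distortion.

```python
import math

def tilt_compensated_heading(mx, my, mz, pitch, roll):
    """Heading in degrees from a 3-axis magnetometer reading,
    corrected for device pitch and roll (both in radians).

    Without this correction, tilting the device while pointing
    shifts the reported bearing, which is exactly the error the
    study observed.
    """
    # rotate the field vector back into the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return (math.degrees(math.atan2(-yh, xh)) + 360) % 360
```

With the device held level, the correction is a no-op; the benefit appears only once pitch or roll is nonzero.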
This article is not quite about augmented reality, but it did have an impact on our project. Our project currently relies solely on marker-based and GPS-based tracking. The article reminded me, at least, that pointing-based interaction is not only plausible but necessary in our application when it comes to markerless tracking. It also gives an estimate of the best uses of pointing-based interaction and its limitations, so that we can avoid proven shortfalls.
Source:
Reaching the same point: Effects on consistency when pointing at objects in the physical environment without feedback
http://www.sciencedirect.com.lib-ezproxy.tamu.edu:2048/science/article/pii/S1071581910001254
Tuesday, February 28, 2012
Using AR in Life-Death Applications
The article I found this time described a recent application that integrates several streams of information into an augmented reality environment. The target audience is military infantry, though it could be just as well suited to law enforcement and fire response teams. The challenge was to aggregate and filter various inputs relevant to a situation and relay pressing information rapidly, for the infantryman's benefit. A vehicle-mounted prototype was designed as a proof of concept, free of the constraints of portability.
The infantrymen in this application are outfitted with several sensors. An inertial measurement unit is placed on one foot and on the helmet to measure distance, speed, and orientation for tracking movement. A LiDAR system mounted on the helmet corrects for drift and other navigational error. This allows the central hub to track friendly infantrymen's positions in outdoor and indoor environments, independent of GPS. The augmented reality system overlays colors on structures and targets to mark them as friendly, enemy, neutral, or unknown. Altogether, this allows a field commander to observe more accurately and react more quickly to real-time situations.
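Foot-mounted IMUs like this typically rely on zero-velocity updates (ZUPTs): velocity is clamped to zero whenever the foot is detected as planted, which bounds the drift that pure integration would otherwise accumulate between steps. A one-dimensional sketch of that idea, with a placeholder threshold and time step of my own choosing:

```python
def dead_reckon(accels, dt=0.01, still_thresh=0.05):
    """1-D dead reckoning with zero-velocity updates.

    `accels` is a sequence of forward acceleration samples (m/s^2).
    Whenever the acceleration magnitude falls below `still_thresh`,
    we treat the foot as planted and reset velocity to zero (ZUPT).
    Returns the estimated distance traveled (m).
    """
    v, x = 0.0, 0.0
    for a in accels:
        if abs(a) < still_thresh:
            v = 0.0            # foot planted: kill accumulated velocity error
        else:
            v += a * dt        # integrate acceleration into velocity
        x += v * dt            # integrate velocity into position
    return x
```

A real foot-mounted system does this in 3D with orientation tracking, and the paper's helmet LiDAR then corrects the residual drift this simple reset cannot remove.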
Source:
http://iospress.metapress.com.lib-ezproxy.tamu.edu:2048/content/bq0632q474310576/fulltext.pdf
Tuesday, February 21, 2012
Sony's Augment-able Reality System
In this great article I found a near-exact replication of what we hope to achieve in our own project. Back in 1998, Sony developed a prototype AR system that allowed digital content to be tagged in an environment, attached either to virtual areas or to physical markers.
Sony's design comprised a headset containing a monocular display screen, a camera, and an infrared sensor, coupled with a wearable computer able to connect to the Internet.
Sony chose a wearable-computer design because they believed it was the technology of the future and would become much more popular as the years went on. Today the wearable computer is all but forgotten, replaced instead by high-performing smart phones.
The team from Sony created software that could detect physical contexts, such as rooms, and also recognize physical markers such as black-and-white matrix codes. Infrared beacons periodically emit codes that identify a room, allowing the system to track its location on a floor map. For specific objects, such as a VCR, they created unique ID codes.
As for what the head-up display showed, while viewing the environment the user sees a video overlay, with additional information, such as what content is available for viewing, in a side pane. The user can also create content, either voice or images, and append it to a location using drag-and-drop on the display. The microphone is cleverly hidden inside the mini mouse.
(Figure: Adding voice content to a location)
While there are many similarities in concept, Sony's implementation and our design differ suitably. For example, Sony uses IR light to detect a room location, plus a high-contrast ID matrix code; we will be using a tracking system that relies on ID matrix codes, images, and GPS location. Additionally, while the filtering they used is very similar to our idea (users can filter out content and set privacy options), our work will also allow collaboration on content, which is lacking here.
Reference: Rekimoto, J.; Ayatsuka, Y.; Hayashi, K., "Augment-able reality: situated communication through physical and digital spaces," Wearable Computers, 1998. Digest of Papers. Second International Symposium on, pp. 68-75, 19-20 Oct 1998. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=729531&isnumber=15725
Thursday, February 9, 2012
An Evaluation of AR Tool Kits Among Platforms
I stumbled on this interesting article while looking for research performed with AndAR, primarily to see if it had been extended to support markerless AR tracking. What I found is more of a light evaluation of different toolkits for developing AR applications. The authors were not performing benchmark tests or formal evaluations, but rather experimenting with the feasibility of developing across several platforms and SDKs and relating their experiences and the performance achieved with each SDK. To test this, the authors designed a basic AR application modeled on typical archaeological surveys. Each grid square of the "site" was represented by a card or stack of cards, which a handheld device would register. Each square would initially display an undisturbed patch of dirt, and deeper layers of cards could contain objects drawn in 3D. This, along with registered "tools", allows for a more immersive experience in simulated archaeology.
To build this application, several base functions were designed covering all its functionality. The AR was then implemented on Android and iOS devices using different SDKs. For iOS, the developers used ARToolKit for the iPhone, which is distributed commercially by ARToolworks, the company that developed ARToolKit. They found that while the end tracking performance was acceptable (ARToolworks claims this SDK can track at up to 30 fps) and ARToolKit itself was usable, the team spent a significant amount of time trying to understand the iOS framework and implement against it properly. In comparison, the Android AndAR (also based on ARToolKit) was relatively simple to set up but performed poorly. This no doubt stems from the fact that AndAR is based on the free ARToolKit, which has not been updated since 2007 and is thus less advanced than ARToolworks' more recent work, on which the iPhone implementation is based. Qualcomm AR (QCAR), another free option, was also tested and found not only usable but high-performing in their evaluations. So while this article does not make an additional research contribution to the field of AR, it is helpful to read the results achieved by developers trying different configurations.
Article:
http://www.ideals.illinois.edu/bitstream/handle/2142/27688/AR_Smart_Phone_Note_rev3.pdf?sequence=2
Tuesday, February 7, 2012
PDA Augmented Reality
This time I chose to read an article discussing augmented reality on a handheld device, to see the challenges and overall performance issues. The article was published in 2003, so the hardware available then was much more limited than today's. Still, the authors chose a PocketPC for its comparably advanced hardware at the time: a 400 MHz processor, a 240x320 16-bit display, 64 MB of RAM, an 802.11b wireless network interface, and a camera add-on with 320x240 color resolution. Compare this to the hardware in the Nexus One phone, which boasts a 1 GHz processor, a dedicated GPU, a 480x800 16M-color display, and a 2560x1920 camera with geotagging.
In the paper, the authors used a hybrid tracking system built on ARToolKit as a foundation. The hybrid design lets the PDA act as a standalone computation device that can work autonomously, while also allowing a wirelessly connected PC to shoulder the expensive tracking computation and thus increase overall performance. For drawing, the PDA used SoftGL, a lightweight version of OpenGL. Because the PDA lacked hardware floating-point support, which OpenGL relies on quite extensively, there were slight performance losses from converting between integers and floats. Overall, the PDA and camera add-on achieved roughly 5 fps when a supporting PC handled computations, and 2.5-3.5 fps otherwise. This is promising for our project, as it demonstrates that even hardware not meant for augmented reality, with comparatively limited computing power, can achieve modest results.
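The usual workaround on FPU-less hardware is fixed-point arithmetic: each value is stored as an integer scaled by a power of two, so multiplication and division reduce to integer operations plus a shift. A minimal sketch of a 16.16 format (the format choice is my assumption; the paper does not detail SoftGL's internals):

```python
FRAC_BITS = 16  # 16.16 fixed point: 16 integer bits, 16 fractional bits

def to_fixed(x):
    """Encode a float as a scaled integer."""
    return int(round(x * (1 << FRAC_BITS)))

def to_float(f):
    """Decode a scaled integer back to a float."""
    return f / (1 << FRAC_BITS)

def fx_mul(a, b):
    # the raw product carries 32 fractional bits; shift back down to 16
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(2.25)
result = to_float(fx_mul(a, b))  # 1.5 * 2.25 = 3.375
```

The cost the authors mention comes from crossing this boundary: every OpenGL-style float that enters or leaves the integer pipeline pays a conversion, which is why keeping the whole transform path in fixed point matters on such devices.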
Source:
Daniel Wagner; Dieter Schmalstieg; “First Steps Towards Handheld Augmented Reality,” Vienna University of Technology, Favoritenstr.
http://www.icg.tu-graz.ac.at/Members/daniel/Publications/HandheldAR_ISWC03final.pdf
Thursday, February 2, 2012
The Hybrid Tracking Algorithm
As the project I'm working on will involve implementing augmented reality, I chose to read an article detailing a related algorithm. The algorithm builds on past approaches to tracking and blends them together to take advantage of the strengths of each. Early on, GPS and magnetic sensors were used to obtain the position and direction the camera is aimed at. Then accelerometers and gyroscopes were added to detect the orientation of the camera for additional accuracy. This allowed augmented reality devices to determine much more effectively where to draw objects on the screen and how to orient them. Computer vision techniques were also brought in to detect objects such as buildings and add another level of accuracy for realistic placement of virtual images.
In this article the authors discuss how they merge these different features. The goal is a highly accurate, efficient algorithm that can follow and detect real-life imagery in real time. The GPS and gyroscope, along with the magnetic sensors and accelerometers, let images drawn on screen appear in the right place, properly oriented. To detect surfaces, the algorithm references textured 3D models and uses edge detection on those models to match the real-life locations of corresponding edges. It then compares this against the next video frame to detect camera motion and compensate in the video. This allows the device to display an augmented image on top of the underlying video accurately and in real time, and also allows for realistic occlusion of hidden surfaces.
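One common way to blend a smooth-but-drifting gyroscope with an absolute-but-noisy compass is a complementary filter, which is the spirit of this kind of sensor fusion. The sketch below is my own illustration of that blending step, not the paper's actual algorithm; the blend factor and function shape are placeholder choices.

```python
def fuse_heading(gyro_rates, compass, dt=0.02, alpha=0.98):
    """Complementary filter for heading, in degrees.

    `gyro_rates` are angular rates (deg/s); `compass` are absolute
    heading readings (deg) at the same timestamps. The gyro term
    dominates short-term motion (smooth), while the small compass
    weight (1 - alpha) steadily cancels gyro drift.
    """
    h = compass[0]  # initialize from the absolute sensor
    fused = []
    for w, c in zip(gyro_rates, compass):
        h = alpha * (h + w * dt) + (1 - alpha) * c
        fused.append(h)
    return fused
```

The vision-based edge matching described in the paper plays an analogous role to the compass here: a slower, absolute correction layered on top of fast inertial prediction.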
References:
Reitmayr, G., and T. W. Drummond. "Going out: Robust model-based tracking for outdoor augmented reality." 2006 IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR). IEEE (2006).
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4079263&tag=1