It's true! The article that I read is actually from 1999, but it details an application of augmented reality not yet considered by our group. The authors' goal was to explore augmented reality's suitability for enhancing user interaction with physical locations. To accomplish this, they designed a system that let users roam Columbia University's campus and discover documentary snippets on three main topics.
Wearing equipment that looks like something straight out of a familiar movie series, users don a backpack computer, a tablet PC, and a head-mounted display and set out around campus. As they walk, they find flags dotted around campus, positioned and tracked via GPS and a magnetometer orientation tracker. Users look at a virtual identifier (in this case a colored flag) to pull up multimedia content specific to that location, which can also point them to related topics. The system handles images, web sites, videos, and 360-degree panoramas using the backpack computer, tablet PC, and/or head-mounted display, depending on the media type. Looking at this article, I see an interesting use case that would be entirely possible with our software: letting people create virtual tours with videos, images, and audio specific to locations.
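To make the location-triggered idea concrete, here is a rough Python sketch of how a GPS fix could be matched against content flags. The flag names, coordinates, and the 30-meter trigger radius are my own invention for illustration, not details from the paper:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical content flags; in the paper these were colored virtual flags
# anchored to campus locations.
FLAGS = [
    {"name": "Low Library", "lat": 40.8075, "lon": -73.9626, "media": "history.mp4"},
    {"name": "Sundial",     "lat": 40.8066, "lon": -73.9620, "media": "sundial_360.jpg"},
]

def nearby_flags(lat, lon, radius_m=30.0):
    """Return the flags within radius_m of the user's current GPS fix."""
    return [f for f in FLAGS if haversine_m(lat, lon, f["lat"], f["lon"]) <= radius_m]
```

A tour client would poll the GPS, call `nearby_flags`, and offer the matching media for display.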
Source:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16.6539&rep=rep1&type=pdf
CSCE 482 Eric
Wednesday, April 4, 2012
Thursday, March 29, 2012
AR in Education
Contrary to what I expected, the article's main focus was advocating for the use of augmented reality rather than giving concrete examples in which augmented reality is already being used to aid education. The authors write from the context of living in Mexico and begin with the motivation that modern education in Mexico emphasizes memorization rather than actual learning, due to the expectations of school systems and poor teacher training (as can also be seen in the US, for that matter). The article's main point is that, in a digital age in which many children grow up familiar with advanced technology, we make poor use of the tools at our disposal for teaching interactively and building understanding.
The project the paper was written for is funded by the Mexican government in an attempt to increase the quality of education in Mexico. The authors argue that while augmented reality's current success lies in marketing, architecture, and entertainment, it can be just as effective in changing the way education works. While at first the thought of integrating technology with education may seem a bit awkward, it's not difficult to see the benefits that well-developed human-computer interaction can bring in nurturing understanding. The biggest obstacle to using augmented reality in instruction, however, is most likely the instructors. Teachers are often technologically inexperienced and would require specialized training to make use of the technology provided for them.
Source:
http://books.google.com/books?hl=en&lr=&id=23tcDsn2g_wC&oi=fnd&pg=PA481&dq=augmented+reality&ots=bfCcsW1Ysm&sig=PYjNkfqi6VqJSR555QBhdT2mMBA#v=onepage&q=augmented%20reality&f=true
Collaborative AR
So this time I found an interesting article from '95, which also happens to be from Sony's research team. This marks the earliest of the Sony articles I've found, but it relates to our own project by more than just being about augmented reality. The system is designed to support group collaboration on inspecting and transforming 3D models. The motivating example is car designers having to gather around physical models to discuss features, even though their CAD tools already generate 3D models.
The system was built on a palmtop device with several external sensors tied in for positioning information. Depending on the device's tilt with respect to the tracking coordinates, different transformations can be applied to the displayed model. Multiple users can examine the same model in an augmented view, and all of them see changes in near real time. Only one user can make modifications at a time, though: the user who currently has control hands it off by designating another control master.
This system, while not quite what we are aiming to accomplish, does have some similarities. The base idea is a collaboration system, which is one important aspect of what we are aiming for as well. Like this implementation, though, ours does not allow users to make changes simultaneously.
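The one-editor-at-a-time scheme above can be sketched in a few lines of Python. This is my own minimal model of the idea, not Sony's implementation; the class and method names are invented:

```python
class SharedModelSession:
    """Shared 3D-model session: everyone sees the transform history,
    but only the current 'control master' may apply transformations,
    and control is handed off explicitly to another participant."""

    def __init__(self, users, master):
        self.users = set(users)
        self.master = master
        self.transforms = []  # visible to every participant in near real time

    def apply_transform(self, user, transform):
        # Reject edits from anyone who does not hold control.
        if user != self.master:
            raise PermissionError(f"{user} does not hold control")
        self.transforms.append(transform)

    def hand_off(self, user, new_master):
        # Only the current master may designate the next one.
        if user != self.master or new_master not in self.users:
            raise PermissionError("only the control master may hand off control")
        self.master = new_master
```

In a real system the transform list would be broadcast to each participant's device; here it just accumulates in one place.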
Source:
http://www.sonycsl.co.jp/person/rekimoto/papers/vsmm96.pdf
Thursday, March 22, 2012
Sony CyberCode
Following up on Sony's earlier research paper from 1998, I found this article from 2000 by the same team. In it, Sony's developers created a system they dubbed “CyberCode”. CyberCode is an augmented reality system designed to identify a tracking marker and respond with predefined actions. The software acts as a foundation for several applications that the Sony team created to demonstrate usability. Examples ranged from more interactive museum layouts, to extending desktop space onto the table your computer sits on, to giving new meaning to the words “drag and drop” by allowing paper codes placed on printers to execute a print command.
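The marker-to-action idea boils down to a dispatch table: decoding a 2D marker yields an ID, and the system looks up a registered handler for it. Here is a hypothetical Python sketch (the marker ID, handler, and field names are invented; actual marker decoding is out of scope):

```python
# Registry mapping decoded marker IDs to actions.
ACTIONS = {}

def register(marker_id):
    """Decorator that associates a handler with a 2D marker ID."""
    def wrap(fn):
        ACTIONS[marker_id] = fn
        return fn
    return wrap

@register(0x2A)  # hypothetical code printed on a paper tag stuck to a printer
def print_document(context):
    return f"sending {context['file']} to printer"

def on_marker_detected(marker_id, context):
    """Called by the vision pipeline whenever a marker is decoded."""
    handler = ACTIONS.get(marker_id)
    return handler(context) if handler else None
```

The "drag and drop onto a printer" demo above is exactly this pattern: the camera sees the paper code, the decoded ID selects the print action, and the current document becomes the context.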
While not in the same direction as the previous article, this one also discusses the other identifiers that were considered and explains the pros and cons of each. For example, infrared beacons are unobtrusive and can be detected more reliably than a scanned code; the downsides are mounting the device and the need to replace dead batteries. 1D barcodes were also considered, but they required more specialized scanning devices than were commercially feasible for some applications. The team settled on 2D marker patterns for their ease of placement and rapid development.
Source: http://hercules.infotech.monash.edu.au/EII-CAC/CAPapers/Rekimoto_CyberCodeDesignAugmentedReality_ACM_DARE_2000_pp1-10.pdf
Thursday, March 8, 2012
Pointing Blindly
I came across an article that deviated from the realm of augmented reality, yet was closely related. It begins with a brief overview of augmented and virtual reality applications: their premises and their implementations. The authors recognized that, due to hardware diversification among personal smart phones, the majority of users are unlikely to have the same technologies at their disposal. They therefore investigated the capabilities of a pointing-based interaction that provides no visual feedback.
The decision to remove visual feedback is based partly on hardware limitations, but also on the observation that users tend to shift their attention to the display rather than the real world. The authors set out to test whether simple pointing-based interaction could yield comparable accuracy in selecting targets. The result was that pointing-based interaction, built on a minimalist hardware configuration of an accelerometer and a compass, was unpromising. The pitch and roll of the device as it was pointed between targets dramatically degraded the hardware's ability to retrieve an accurate orientation. Further, their initial hypothesis that a stationary, target-facing posture would improve consistency was debunked: subjects allowed free motion achieved higher accuracy.
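The pitch-and-roll problem has a simple geometric root: a heading computed from the raw magnetometer x/y axes is only valid when the device is level. Below is a Python sketch of the standard accelerometer-based tilt compensation (this is the textbook correction, not the paper's code; the sensor values in the test are invented):

```python
import math

def naive_heading(mx, my):
    """Heading from raw magnetometer axes; wrong when the device is tilted."""
    return math.degrees(math.atan2(my, mx)) % 360

def tilt_compensated_heading(mx, my, mz, pitch, roll):
    """Rotate the magnetic vector back into the horizontal plane before
    taking the heading, using pitch/roll from the accelerometer (radians)."""
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(yh, xh)) % 360
```

When pitch and roll are both zero the two functions agree; as tilt grows, the naive heading drifts away, which matches the degradation the authors observed while pointing between targets.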
This article is not quite about augmented reality, but it did have an impact on our project. Our project currently relies solely on marker-based and GPS-based tracking. This article reminded me that pointing-based interaction is not only plausible but necessary in our application when it comes to markerless tracking. It also gave an estimate of the best uses of pointing-based interaction and its limitations, so that we can avoid proven shortfalls.
Source:
Reaching the same point: Effects on consistency when pointing at objects in the physical environment without feedback
http://www.sciencedirect.com.lib-ezproxy.tamu.edu:2048/science/article/pii/S1071581910001254
Tuesday, February 28, 2012
Using AR in Life-Death Applications
The article that I found this time described a recent application that integrates several sources of information into an augmented reality environment. The target audience for the application is military infantry, though it could be just as well suited for law enforcement and fire response teams. The challenge was to aggregate and filter various inputs relevant to a situation and relay pressing information rapidly for the infantryman's benefit. A vehicle-mounted prototype was designed as a proof of concept, without the constraints of portability.
The infantrymen in this application are outfitted with several sensors. An inertial measurement unit is placed on one foot and on the helmet to measure distance, speed, and orientation for tracking movement. A LiDAR system mounted on the helmet corrects for drift and other navigational error. This lets a central hub track friendly infantrymen's positions in outdoor and indoor environments, independent of GPS. The augmented reality system overlays colors on structures and targets to mark them as friendly, enemy, neutral, or unknown. Altogether, this allows a field commander to observe more accurately and react more quickly to real-time situations.
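The IMU-plus-correction idea can be sketched as dead reckoning with periodic snapping to an absolute fix. This is my own toy model of the concept (the class, the 2D simplification, and the blending weight stand in for the paper's foot-mounted IMU and helmet LiDAR):

```python
import math

class DeadReckoner:
    """2D dead reckoning with periodic absolute correction, a stand-in for
    the foot-mounted IMU plus helmet-mounted LiDAR described above."""

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def step(self, distance_m, heading_rad):
        # Integrate one stride estimate; error accumulates without correction.
        self.x += distance_m * math.cos(heading_rad)
        self.y += distance_m * math.sin(heading_rad)

    def correct(self, x_abs, y_abs, weight=1.0):
        # Blend toward an absolute fix (the LiDAR's role in the paper);
        # weight=1.0 snaps fully to the fix, smaller values blend gradually.
        self.x += weight * (x_abs - self.x)
        self.y += weight * (y_abs - self.y)
```

A central hub receiving these positions could then render each tracked soldier with the friendly/enemy/neutral color coding the paper describes.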
Source:
http://iospress.metapress.com.lib-ezproxy.tamu.edu:2048/content/bq0632q474310576/fulltext.pdf
Tuesday, February 21, 2012
Sony's Augment-able Reality System
In this great article I found a near-exact replication of what we hope to achieve in our own project. Back in 1998, Sony developed a prototype AR system that allowed digital content to be tagged in an environment, attached either to virtual areas or to physical markers.
Sony designed a system comprising a head-mounted unit with a monocular display, a camera, and an infrared sensor, coupled with a wearable computer able to connect to the Internet.
Sony chose a wearable computer design because they believed it was the technology of the future and would become much more popular as the years went on. Today the wearable computer is all but forgotten, replaced instead by high-performing smart phones.
The team from Sony created software able to detect physical contexts, such as rooms, and to recognize physical markers such as black-and-white matrix codes. Infrared beacons periodically emit codes that identify a room, which allows the system to track its location on a floor map. For specific physical objects, such as a VCR, they created unique ID codes.
As for what the head-up display shows: while viewing the environment, the user sees a video overlay, with additional information about what is available for viewing in a side pane. The user can also create content, either voice or images, and append it to a location using drag-and-drop on the display. The microphone is cleverly hidden inside the mini mouse.
(Figure: Adding voice content to a location)
While there are many conceptual similarities, Sony's implementation and our design differ in suitable ways. For example, Sony uses IR light to detect a room location plus high-contrast ID matrix codes; we will use a tracking system that relies on ID matrix codes, images, and GPS location. Additionally, while their filtering is very similar to our idea (users can filter out content and establish privacy options), our work will also allow for collaboration on content, which is lacking here.
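The filtering-and-privacy idea mentioned above amounts to a visibility check per tagged item. Here is a rough Python sketch of how we might model it; the field names and visibility levels are my own assumptions, not taken from Sony's paper or our spec:

```python
def visible_items(items, viewer, friends, muted_owners=()):
    """Return the tagged items a viewer should see.

    Each item is a dict with an 'owner' and a 'visibility' of
    'public', 'friends', or 'private' (hypothetical schema).
    muted_owners models the user-applied content filter.
    """
    out = []
    for item in items:
        if item["owner"] in muted_owners:
            continue  # viewer has filtered this author out
        vis = item["visibility"]
        if (vis == "public"
                or item["owner"] == viewer
                or (vis == "friends" and item["owner"] in friends)):
            out.append(item)
    return out
```

A collaboration layer would sit on top of this, letting multiple permitted users edit the same item rather than just view it.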
Reference: Rekimoto, J.; Ayatsuka, Y.; Hayashi, K., "Augment-able reality: situated communication through physical and digital spaces," Wearable Computers, 1998. Digest of Papers. Second International Symposium on, pp. 68-75, 19-20 Oct 1998. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=729531&isnumber=15725