Introduction
Augmented Reality (AR) is a growing area in virtual reality research. The world around us provides a wealth of information that is difficult to duplicate in a computer, as evidenced by the worlds used in virtual environments: either these worlds are very simplistic, such as the environments created for immersive entertainment and games, or a system that can create a more realistic environment carries a million-dollar price tag, such as a flight simulator. An augmented reality system generates a composite view for the user: a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments it with additional information. Augmented reality enhances the user's performance in and perception of the world. The ultimate goal is to create a system such that the user cannot tell the difference between the real world and the virtual augmentation of it; to the user of this ultimate system it would appear that he is looking at a single real scene.
The discussion above highlights the similarities and differences between virtual reality and augmented reality systems. A very visible difference between these two types of systems is the immersiveness of the system. Virtual reality strives for a totally immersive environment. In contrast, an augmented reality system is augmenting the real world scene, necessitating that the user maintain a sense of presence in that world. The virtual images are merged with the real view to create the augmented display, so there must be a mechanism to combine the real and virtual that is not present in other virtual reality work. The computer-generated virtual objects must be accurately registered with the real world in all dimensions. Errors in this registration will prevent the user from seeing the real and virtual images as fused. The correct registration must also be maintained while the user moves about within the real environment. Discrepancies or changes in the apparent registration will range from distracting, which makes working with the augmented view more difficult, to physically disturbing for the user, making the system completely unusable. An immersive virtual reality system must maintain registration so that changes in the rendered scene match the perceptions of the user. Milgram defines the Reality-Virtuality continuum shown in Figure 1.
The real world and a totally virtual environment are at the two ends of this continuum, with the middle region called Mixed Reality. Augmented reality lies near the real-world end of the line, with the predominant perception being the real world augmented by computer-generated data. Augmented Virtuality is a term created by Milgram to identify systems that are mostly synthetic with some real-world imagery added, such as texture-mapping video onto virtual objects. This is a distinction that will fade as the technology improves and the virtual elements in the scene become less distinguishable from the real ones.
Video Merging
The task in the augmented reality system is to register the virtual frame of reference with what the user is seeing. Registration is more critical in an augmented reality system because we are more sensitive to visual misalignments than to the type of vision-kinesthetic errors that might result in a standard virtual reality system. The figure shows the multiple reference frames that must be related in an augmented reality system.
The scene is viewed by an imaging device, which in this case is depicted as a video camera. The camera performs a perspective projection of the 3D world onto a 2D image plane. The generation of the virtual image is done with a standard computer graphics system. The virtual objects are modeled in an object reference frame. The graphics system requires information about the imaging of the real scene so that it can correctly render these objects. This data will control the synthetic camera that is used to generate the image of the virtual objects. This image is then merged with the image of the real scene to form the augmented reality image.
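The perspective projection performed by the synthetic camera can be sketched as a standard pinhole-camera computation. This is a minimal illustration, not the system's actual implementation; the focal length, principal point, and image size are assumed numbers.

```python
import numpy as np

# Pinhole-camera sketch of the synthetic camera: a virtual 3D point is
# projected onto the 2D image plane with the same intrinsics as the real
# camera, so the rendered object lands on the correct pixel of the video frame.
f = 800.0               # focal length in pixels (assumed)
cx, cy = 640.0, 360.0   # principal point of an assumed 1280x720 frame

K = np.array([[f,   0.0, cx],
              [0.0, f,   cy],
              [0.0, 0.0, 1.0]])

# Virtual object expressed in the camera reference frame (meters).
point_cam = np.array([0.5, -0.2, 4.0])

u, v, w = K @ point_cam
pixel = (u / w, v / w)  # approximately (740.0, 320.0)
```

If the intrinsics used for rendering do not match those of the real camera, the virtual object is drawn at the wrong pixel, which is exactly the misregistration discussed above.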
The main components of an AR system are:
1. Head Mounted Display
2. Tracking System (GPS)
3. Mobile Computing Power

Head Mounted Displays
They enable us to view graphics and text created by the augmented reality system. There are two basic types of head mounted displays in use.

1. Video See-Through Displays
The "see-through" designation comes from the need for the user to be able to see the real-world view immediately in front of him even while wearing the HMD. This system blocks out the wearer's surrounding environment, using small cameras attached to the outside of the goggles to capture images. On the inside of the display, the video image is played in real time and the graphics are superimposed on the video. The virtual objects are merged with the images of real objects captured by the video camera and sent to the monitor, where the combined view is displayed to the user. One problem with the use of video cameras is lag: there is a delay in image adjustment when the viewer moves his or her head.

2. Optical See-Through Displays
The optical see-through HMD eliminates the video channel that looks at the real scene. Instead, the merging of the real world and the virtual augmentation is done optically in front of the user. An advantage of this type of display is that it can be made very small; however, the biggest constraint in using this technology is its prohibitive cost.
The main components of our system are a backpack computer (with 3D graphics acceleration), a differential GPS system, a head-worn display interface (with orientation tracker), and a spread spectrum radio communication link, all attached to the backpack.
The figure above is the block diagram of the AR system. It consists of a backpack PC that receives two inputs, one from the GPS receiver and the other from the head-mounted display. The signal from the GPS receiver gives the coordinates of the person, and the orientation tracker gives the orientation of the head. These two inputs are transmitted via satellite to the database server. Based on the information received, this server sends the relevant database back to the satellite, which relays it to the backpack PC. The graphics card generates the virtual objects, which are then merged with the real environment by the head-worn display interface and displayed to the user.
Our Research
The discipline of affective computing studies how computers can recognize, understand, and mimic human emotions. In most cyberpunk stories where humans merge their minds with computers, the computer is able to interpret the symbolic thinking of its human companion and insert symbolic ideas back into the human brain. But wouldn't it be better if the computer could interpret our values instead? In fact, it would probably be far easier for computers to learn to recognize what we like, dislike, approve of, or are uncomfortable with; these base responses tend to be similar between people, even across cultural and linguistic barriers. While each of us probably has a unique encoding for the concept "carrot" in our brains, we almost certainly share a basic neural and physiological response when asked whether we like carrots. That is, you can teach a computer to recognize that someone is enjoying the carrot they're munching, but you probably can't teach it to recognize when someone is thinking about carrots.

The internet is so full of information that we end up spending most of our time filtering out irrelevant data. If you think about it, like/dislike is the basic crap filter: a computer that could tell you hated pop-up browser ads without being asked would be a good thing. Looking not too far into the future, we see a world where everybody is immersed in one form or another of augmented reality. Everywhere we look, we see annotations on reality provided by our AR glasses. The issue then becomes the same one we face with the Internet today: how to filter out all the crap? This is where the values-driven interface becomes crucial. Our AR system needs to be able to recognize our reactions to the various cues, annotations, pop-ups, overlays, sims, and pointers. If our system can do this, it can edit out the things we don't want to see.
For example, orthodox religious types no longer see ads for girlie shows or salacious lingerie models on billboards; serious rationalists no longer see the corner church as they walk by. Something else replaces it, perhaps a soothing image of Gandhi. The point is that the augmented world reflects the values of whoever is using it, and it does so seamlessly and automatically.
We introduce a handheld PC into the basic AR diagram. This handheld PC has been specifically trained by the user to understand his likes and dislikes. The backpack PC now receives input not only from the GPS receiver and orientation tracker but also from the handheld PC, and this information determines which data is accessed. For example, if the person is in front of a real object he hates to see, the handheld PC conveys this to the backpack PC, which in turn generates virtual objects to superimpose over that real object, thus filtering out the irrelevant information for the user.
1. SPACE SEGMENT
The SPACE segment consists of 24 operational satellites in six orbital planes (four satellites in each plane). The satellites operate in circular 20,200 km orbits at an inclination angle of 55 degrees and with a 12-hour period. The position is therefore the same at the same sidereal time each day, i.e. the satellites appear 4 minutes earlier each day.
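The "12-hour" period and the 4-minutes-earlier figure both follow from the length of the sidereal day (23 h 56 min 4 s), which is the only outside number in this quick check:

```python
# The satellites complete two orbits per sidereal day, so the same
# sky geometry repeats about 4 minutes earlier each solar day.
sidereal_day_s = 23 * 3600 + 56 * 60 + 4       # 86,164 s
orbital_period_s = sidereal_day_s / 2          # ~11 h 58 min, quoted as "12-hour"
shift_per_day_s = 24 * 3600 - sidereal_day_s   # 236 s, roughly 4 minutes
```

The orbital period is actually two minutes short of twelve hours; the round "12-hour" figure in the text is the usual approximation.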
2. CONTROL SEGMENT
The CONTROL segment consists of five Monitor Stations (Hawaii, Kwajalein, Ascension Island, Diego Garcia, Colorado Springs), three Ground Antennas, (Ascension Island, Diego Garcia, Kwajalein), and a Master Control Station (MCS) located at Schriever AFB in Colorado. The monitor stations passively track all satellites in view, accumulating ranging data. This information is processed at the MCS to determine satellite orbits and to update each satellite's navigation message. Updated information is transmitted to each satellite via the Ground Antennas.
3. USER SEGMENT
The USER segment consists of antennas and receiver-processors that provide positioning, velocity, and precise timing to the user. In principle, three satellite ranges leave two possible solution points, but one of them is always implausible (either too far from Earth or moving at an impossible velocity) and can be rejected without a further measurement.
The time taken by the signal to travel from the satellite to the receiver can be found by calculating the phase shift of the Pseudo Random Code.

Pseudo Random Code: The Pseudo Random Code is a fundamental part of GPS. Physically it is just a very complicated digital code, in other words a complicated sequence of "on" and "off" pulses. The signal is so complicated that it almost looks like random electrical noise, hence the name "pseudo-random." There are several good reasons for that complexity. First, the complex pattern helps make sure that the receiver doesn't accidentally sync up to some other signal; the patterns are so complex that it is highly unlikely that a stray signal will have exactly the same shape. Since each satellite has its own unique Pseudo Random Code, this complexity also guarantees that the receiver won't accidentally pick up another satellite's signal, so all the satellites can use the same frequency without jamming each other. It also makes it more difficult for a hostile force to jam the system. We assume that both the satellite and the receiver start generating their codes at exactly the same time. Distance to a satellite is then determined by measuring how long the radio signal takes to reach us from that satellite.
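Finding the phase shift amounts to correlating the receiver's own copy of the code against the received code and looking for the peak. The sketch below uses a random ±1 sequence of 1023 chips (the C/A code length) instead of a real Gold code, and ignores noise and fractional-chip shifts:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 1023) * 2 - 1   # +/-1 pseudo-random code, 1023 chips

true_shift = 357                           # delay (in chips) caused by travel time
received = np.roll(code, true_shift)       # the delayed copy arriving at the receiver

# Circular cross-correlation: the peak appears at the code phase shift,
# because only the correctly aligned replica matches chip for chip.
corr = [np.dot(received, np.roll(code, k)) for k in range(len(code))]
shift = int(np.argmax(corr))               # recovers true_shift

# travel_time = shift * chip_duration (a C/A chip lasts about 0.977 microseconds)
```

The same peak-hunting also explains why the codes don't jam each other: a replica of one satellite's code correlates to near zero against every other satellite's code.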
Multiplying that travel time by the speed of light gives the distance. But how do we make sure everybody is perfectly synchronized? If measuring the travel time of a radio signal is the key to GPS, then our stopwatches had better be very good, because if their timing is off by just a thousandth of a second, at the speed of light that translates into almost 200 miles of error! On the satellite side, timing is almost perfect because the satellites carry incredibly precise atomic clocks. Atomic clocks don't run on atomic energy; they get the name because they use the oscillations of a particular atom as their "metronome." This form of timing is the most stable and accurate reference ever developed. Both the satellite and the receiver need to precisely synchronize their pseudo-random codes for the system to work, but equipping every receiver with an atomic clock would be far too expensive. The secret to perfect timing is to make an extra satellite measurement. If our receiver's clock were perfect, then all our satellite ranges would intersect at a single point (our position). With an imperfect clock, a fourth measurement, done as a cross-check, will NOT intersect with the first three. Since any offset from universal time affects all of the measurements equally, the receiver looks for a single correction factor that it can subtract from all its timing measurements to make them intersect at a single point. That correction brings the receiver's clock back into sync with universal time. Once the receiver has that correction, it applies it to all the rest of its measurements, yielding a position with an accuracy of 3-6 meters.
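The "extra satellite measurement" can be sketched as a small least-squares problem: four unknowns (three position coordinates plus the clock bias expressed in meters) solved from four or more pseudoranges. The satellite positions, receiver position, and bias below are made-up test values, and the solver is a generic Gauss-Newton iteration rather than any particular receiver's algorithm:

```python
import numpy as np

def solve_position(sats, pseudoranges, iters=15):
    """Recover receiver position and clock bias (in meters) from pseudoranges.

    Model: rho_i = |sat_i - p| + b, where b = c * (receiver clock offset).
    Gauss-Newton iteration starting from the Earth's center.
    """
    x = np.zeros(4)                               # [px, py, pz, b]
    for _ in range(iters):
        diff = x[:3] - sats                       # receiver-minus-satellite vectors
        dist = np.linalg.norm(diff, axis=1)
        residual = pseudoranges - (dist + x[3])   # measured minus predicted
        J = np.hstack([diff / dist[:, None], np.ones((len(sats), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

# Hypothetical geometry: satellites near GPS orbit radius (~26,570 km),
# receiver on the surface, and a clock offset worth 300 km of range error.
sats = 26_570e3 * np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                            [0.577, 0.577, 0.577], [-0.577, 0.577, 0.577]])
truth = np.array([6_371e3, 0.0, 0.0])
bias = 300e3                                      # about 1 ms at light speed
rho = np.linalg.norm(sats - truth, axis=1) + bias

est = solve_position(sats, rho)                   # est[:3] ~ truth, est[3] ~ bias
```

Because the bias adds the same amount to every range, one extra equation is enough to separate it from the three position unknowns, which is exactly why four satellites suffice.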
Authorized users with cryptographic equipment and keys and specially equipped receivers use the Precise Positioning System (PPS). U.S. and Allied military, certain U.S. Government agencies, and selected civil users specifically approved by the U.S. Government can use the PPS.

PPS Predictable Accuracy:
o 22 meter horizontal accuracy
o 27.7 meter vertical accuracy
Civil users worldwide use the Standard Positioning System (SPS) without charge or restrictions. Most receivers are capable of receiving and using the SPS signal. The SPS accuracy is intentionally degraded by the DOD through Selective Availability.

SPS Predictable Accuracy:
o 100 meter horizontal accuracy
o 156 meter vertical accuracy
The SVs transmit two microwave carrier signals. The L1 frequency (1575.42 MHz) carries the navigation message and the SPS code signals. The L2 frequency (1227.60 MHz) is used to measure the ionospheric delay by PPS-equipped receivers. Three binary codes shift the L1 and/or L2 carrier phase:

o The C/A Code (Coarse Acquisition) modulates the L1 carrier phase. The C/A code is a repeating 1 MHz Pseudo Random Noise (PRN) code. This noise-like code modulates the L1 carrier signal, "spreading" the spectrum over a 1 MHz bandwidth. The C/A code repeats every 1023 bits (one millisecond). There is a different C/A code PRN for each SV. GPS satellites are often identified by their PRN number, the unique identifier for each pseudo-random-noise code. The C/A code that modulates the L1 carrier is the basis for the civil SPS.

o The P-Code (Precise) modulates both the L1 and L2 carrier phases. The P-Code is a very long (seven-day) 10 MHz PRN code. The P-Code is encrypted into the Y-Code. The encrypted Y-Code requires a classified AS Module for each receiver channel and is for use only by authorized users with cryptographic keys. The P(Y)-Code is the basis for the PPS.

The Navigation Message also modulates the L1 C/A code signal. The Navigation Message is a 50 Hz signal consisting of data bits that describe the GPS satellite orbits, clock corrections, and other system parameters.
GPS Data
The GPS Navigation Message consists of time-tagged data bits marking the time of transmission of each subframe. A data frame consists of 1500 bits divided into five 300-bit subframes, and a frame is transmitted every thirty seconds. Each six-second subframe contains parity bits that allow for data checking and limited error correction. SV clock corrections are sent in subframe one, and precise SV orbital data sets (ephemeris parameters) for the transmitting SV are sent in subframes two and three. Subframes four and five are used to transmit different pages of system data. An entire set of twenty-five frames (125 subframes) makes up the complete Navigation Message, which is sent over a 12.5-minute period.
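The timing figures above all follow from the 50 Hz bit rate, as this arithmetic check shows:

```python
# Navigation Message timing, derived from the frame structure above.
bit_rate_hz = 50
subframe_bits = 300
frame_bits = 5 * subframe_bits       # five subframes per frame = 1500 bits
frames_in_message = 25

subframe_s = subframe_bits / bit_rate_hz        # 6.0 seconds per subframe
frame_s = frame_bits / bit_rate_hz              # 30.0 seconds per frame
message_min = frames_in_message * frame_s / 60  # 12.5 minutes for the full message
```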
There are usually more satellites available than a receiver needs to fix a position, so the receiver picks a few and ignores the rest. These picks should be as far from each other in the sky as possible, because for maximum accuracy the range spheres should intersect at close to right angles. The accuracy achieved by this GPS system is 3-6 meters, but Augmented Reality requires accuracy in centimeters. For that purpose we use Differential GPS systems.
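The "pick satellites far apart" rule is usually quantified as dilution of precision (DOP), computed from the geometry matrix of receiver-to-satellite directions. The sketch below compares a spread-out constellation against a clustered one; the unit vectors are invented for illustration:

```python
import numpy as np

def pdop(unit_vectors):
    """Position dilution of precision from receiver-to-satellite unit vectors.

    Smaller is better: well-spread satellites make the geometry matrix
    well-conditioned, so range errors inflate the position fix less.
    """
    A = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
    cov = np.linalg.inv(A.T @ A)
    return float(np.sqrt(np.trace(cov[:3, :3])))

# Four satellites spread around the sky versus four bunched together.
spread = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                   [-0.577, -0.577, -0.577]])
clustered = np.array([[1.00, 0.00, 0.00], [0.99, 0.10, 0.00],
                      [0.99, 0.00, 0.10], [0.98, 0.10, 0.10]])
clustered /= np.linalg.norm(clustered, axis=1, keepdims=True)

# pdop(spread) is far smaller than pdop(clustered): the clustered ranges
# intersect at shallow angles, exactly the situation the text warns against.
```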
2. Military Training
The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or on the visor of the flight helmet. This is a form of augmented reality display. SIMNET, a distributed war-games simulation system, is also embracing augmented reality technology. By equipping military personnel with helmet-mounted visor displays or a special-purpose rangefinder, the activities of other units participating in the exercise can be imaged. While looking at the horizon, for example, the display-equipped soldier could see a helicopter rising above the tree line. This helicopter might be flown in simulation by another participant. In wartime, the display of the real battlefield scene could be augmented with annotation information or highlighting to emphasize hidden enemy units.
3. Engineering Design
Imagine that a group of designers are working on the model of a complex device for their clients. The designers and clients want to do a joint design review even though they are physically separated. If each of them had a conference room that was equipped with an augmented reality display this could be accomplished. The physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3D. The clients can walk around the display looking at different aspects of it. To hold discussions the client can point at the prototype to highlight sections and this will be reflected on the real model in the augmented display that the designers are using. Or perhaps in an earlier stage of the design, before a prototype is built, the view in each conference room is augmented with a computer-generated image of the current design built from the CAD files describing it. This would allow real time interaction with elements of the design so that either side can make adjustments and changes that are reflected in the view seen by both groups.
5. Consumer Design
Virtual reality systems are already used for consumer design. Using perhaps more of a graphics system than true virtual reality, when you go to the typical home store wanting to add a new deck to your house, they will show you a graphical picture of what the deck will look like. It is conceivable that a future system would allow you to bring in a videotape of your house shot from various viewpoints in your backyard, and in real time it would augment that view to show the new deck in its finished form attached to your house. Or bring in a tape of your current kitchen, and the augmented reality processor would replace your current cabinetry with virtual images of the new kitchen you are designing. Applications in the fashion and beauty industry that would benefit from an augmented reality system can also be imagined. If the dress store does not have a particular style of dress in your size, an appropriately sized dress could be used to augment the image of you. As you looked in the three-sided mirror, you would see the image of the new dress on your body. Changes in hem length, shoulder styles, or other particulars of the design could be viewed on you before you place the order. In some high-tech beauty shops today you can see what a new hairstyle would look like on a digitized image of yourself, but with an advanced augmented reality system you would be able to see the view as you moved. If the dynamics of hair were included in the description of the virtual object, you would also see the motion of your hair as your head moved.
6. Instant Information
Tourists and students could use these systems to learn more about a certain historical event. Imagine walking onto a Civil War battlefield and seeing a re-creation of historical events on a head-mounted, augmented-reality display. It would immerse you in the event, and the view would be panoramic.
7. Gaming
How cool would it be to take video games outside? The game could be projected onto the real world around you, and you could literally be in it as one of the characters. When one uses this system, the game surrounds him as he walks across campus. There are hundreds of potential applications for such a technology, gaming and entertainment being the most obvious ones. Any system that gives people instant information, requiring no research on their part, is bound to be valuable to anyone in pretty much any field.
Augmented-reality systems will instantly recognize what someone is looking at, and retrieve and display the data related to that view.
An augmented reality system must satisfy two real-time performance criteria: the update rate for generating the augmenting image, and the accuracy of the registration of the real and virtual image.
Visually, the real-time constraint is manifested in the user viewing an augmented image in which the virtual parts are rendered without any visible jumps. A standard rule of thumb is that the graphics system must render the virtual scene at least 10 times per second to appear jump-free. This is well within the capabilities of current graphics systems for simple to moderately complex scenes. For the virtual objects to realistically appear part of the scene, more photorealistic rendering is required, and current graphics technology does not support fully lit, shaded, and ray-traced images of complex scenes in real time. Fortunately, there are many applications for augmented reality in which the virtual part is either not very complex or does not require a high level of photorealism.

Failures in the second performance criterion have two possible causes. One is misregistration of the real and virtual scene because of noise in the system. The position and pose of the camera with respect to the real scene must be sensed, and any noise in this measurement can appear as errors in the registration of the virtual image with the image of the real scene. Fluctuations of these values while the system is running will cause jittering in the viewed image. As mentioned previously, our visual system is very sensitive to visual errors, which in this case would be the perception that the virtual object is not stationary in the real scene or is incorrectly positioned. Misregistration of even a pixel can be detected under the right conditions. The second cause of misregistration is time delay in the system. A minimum cycle time of 0.1 seconds is needed for acceptable real-time performance; if there are delays in calculating the camera position or the correct alignment of the graphics camera, the augmented objects will tend to lag behind motions in the real scene. The system design should minimize these delays to keep the overall system delay within the requirements for real-time performance.
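A back-of-envelope calculation shows how quickly latency turns into visible misregistration during head motion. The head-turn rate, field of view, and display resolution below are assumed numbers, not figures from the text:

```python
# How latency becomes misregistration: during a head turn, the virtual
# image lags the real scene by (angular velocity x system delay).
head_rate_dps = 100.0   # brisk head turn, degrees per second (assumed)
delay_s = 0.1           # the minimum acceptable cycle time discussed above
fov_deg = 60.0          # horizontal field of view of the display (assumed)
width_px = 1280         # horizontal display resolution (assumed)

lag_deg = head_rate_dps * delay_s        # 10 degrees behind the real scene
lag_px = lag_deg / fov_deg * width_px    # roughly 213 pixels of misregistration
```

Since even single-pixel misregistration can be detected, an error of this size during head motion is glaring, which is why prediction and delay minimization matter so much.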
Summary
Though Augmented Reality is in a nascent stage and is not yet in mass use, we feel that, with growing research in AR and the shrinking size and complexity of AR systems, the day is not far off when everybody will own an AR system of his own.