In optical systems, emitted light is captured by cameras in various formats. Electromagnetic tracking uses small electrified coils whose magnetic fields are picked up by other electromagnetic sensors; measuring how each field affects the others positions them in space. Acoustic systems use ultrasonic sound waves to identify the position and orientation of target objects. Mechanical tracking can use articulated arms, limbs, joysticks or sensors connected to headsets or built into them, much like the inertial tracking in phones, which is usually made possible by accelerometers and gyroscopes.
The world of virtual reality has successfully utilised several of these methods for consumer hardware.
Phones are one of the more basic viewing options for VR content and have provided a cheap yet still interactive solution for several years now. The phone is placed inside a headset, and the screen is split to show a separate image to each eye, creating the illusion of depth. The accelerometer and gyroscope systems in the smartphones and headsets of the last two decades give virtual reality applications a true sense of motion.
The accelerometer works by measuring the rate of change in your movement. Tiny sensors translate the forces into readable electrical signals. Imagine you are in a car going around a corner at speed, being pushed sideways against the door panel: the same thing happens inside the sensor, but on a much smaller scale. The accelerometer, however, cannot detect the phone's orientation relative to everything else. This is where the gyroscope comes in: it measures rotational movement relative to a neutral position. If you have ever played one of those ball-bearing maze games where you tilt the maze to move the ball, the principle is comparable. These more mechanically oriented systems are also used in higher-end headsets to improve the fluidity of the experience.
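One common way to combine the two sensors is a complementary filter: the gyroscope is smooth over short timescales but drifts, while the accelerometer's gravity reading is noisy but drift-free. This is a minimal sketch of the idea, not the exact algorithm any particular headset uses:

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer tilt estimate.

    alpha close to 1 trusts the gyro for short-term motion while the
    accelerometer slowly corrects the long-term drift.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_tilt(ax, az):
    """Tilt (pitch) angle in radians, estimated from gravity components."""
    return math.atan2(ax, az)

# Example: device held still, tilted 0.1 rad; gyro reads ~0 rad/s.
# Sampled 200 times at 100 Hz, the estimate converges towards 0.1 rad.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(
        angle, 0.0, accel_tilt(math.sin(0.1), math.cos(0.1)), dt=0.01
    )
```

Real devices run much fancier fusion (often a Kalman filter across all three axes plus the magnetometer), but the drift-correction principle is the same.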
When ships at sea used to navigate into ports, they didn't have fancy GPS positioning systems and radar. Instead they used lighthouses or towers to judge their position relative to fixed landmarks. The same principle underlies Valve's Lighthouse base stations, used by the HTC Vive, Vive Cosmos and Valve's own Index. The technology has developed into an incredibly accurate version of the actual lighthouse: each base station floods the room with non-visible light, and reference points on the HMDs and handhelds feed positional information back to the computer. This is done using stationary LEDs and two active laser emitters spinning at 60Hz. The LEDs flash, and then one of the lasers sweeps a beam of light across the room. The receivers, covered in photosensors (32 on the HMD and 24 per controller), have been "counting" since the flash; when a beam arrives, the elapsed time and the identity of the base station it came from, taken across multiple sensors, produce a "pose". This shape can then be analysed to give both an exact position and the direction the device is facing. It is a comparatively cheap way to get incredible tracking.
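The "counting" step comes down to simple geometry: if a rotor spins at a known rate and the sync flash marks the start of each rotation, then the time at which the laser strikes a photosensor directly gives that sensor's angle from the base station. A sketch of that conversion, assuming the 60Hz rotor rate mentioned above (real base stations add encoding details this ignores):

```python
import math

SWEEP_HZ = 60.0  # rotor speed: one full laser rotation every 1/60 s

def sweep_angle(hit_time_s):
    """Angle of the laser plane when it struck a photosensor.

    hit_time_s is the time since the sync flash (the LED pulse that
    tells every sensor to start counting). The angle is simply
    proportional to the elapsed time.
    """
    return 2 * math.pi * SWEEP_HZ * hit_time_s

# A sensor hit 1/240 s after the sync flash sits a quarter turn
# (90 degrees) into the sweep.
angle = sweep_angle(1 / 240)
print(math.degrees(angle))  # → 90.0
```

Each sweep gives one angle per sensor; because the sensors sit at known positions on the rigid headset, a handful of such angles is enough to solve for the full pose.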
The Oculus Rift and Rift S, as well as other consumer products such as the Nintendo Wii controllers, use a similar "pose" recognition principle, achieved in a slightly different manner. The Rift builds constellations of infrared LEDs into the HMD and controllers. These are picked up by two desktop sensors designed to recognise each LED's specific glow and convert their placement into positional data. The headset also contains a magnetometer, gyroscope and accelerometer; combined, these allow accurate tracking across all three dimensions of the 3D world.
Oculus later released another headset, the Quest. This wireless, computer-free system uses a radically different tracking method: it scans the room for any objects within your space (curtains, windows, coffee tables and so on) and builds a 3D map to play in. It combines data from a gyroscope and accelerometer with that map to give the HMD's position at 1000Hz (once every millisecond). The Guardian system is then there to stop you bowling into things, and it allows multiple rooms to be saved for quicker set-up as well. Similarly, the Microsoft HoloLens headset uses two front-mounted cameras to scan the surrounding area and, in conjunction with an inertial measurement unit (IMU), gives precise positions. It essentially builds an ever more precise map as you look around.
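Guardian's exact implementation isn't public, but its core job, deciding whether the player is still inside the traced play-space perimeter, can be sketched with a classic point-in-polygon test (the room coordinates here are made up for illustration):

```python
def inside_boundary(x, y, boundary):
    """Ray-casting point-in-polygon test.

    boundary is a list of (x, z) floor points traced during set-up,
    a simplified stand-in for a Guardian-style perimeter. Returns
    True while the player is inside the safe area.
    """
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Does the edge cross the horizontal ray cast from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

room = [(0, 0), (3, 0), (3, 2.5), (0, 2.5)]  # a 3 m x 2.5 m play space
print(inside_boundary(1.5, 1.0, room))  # → True
print(inside_boundary(4.0, 1.0, room))  # → False
```

A real system would also measure distance to the nearest edge so the warning grid can fade in before you actually cross it.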
Mechatech has developed a tracking system based on direct measurement of the human body. An exoskeleton frame is worn over the body (the AgileVR product is worn over the knee only) and direct measurements are taken.
Direct physical tracking of the human body has advantages over camera-based systems: it doesn't need a camera! It isn't trapped in room scale, and it doesn't suffer from occlusion. Imagine putting your arm behind your back: the camera cannot see it.
Direct measurement also means that pose data does not have to be calculated; it can be read straight from the device in real time, reducing lag, which helps keep virtual reality immersive.
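To see why "read straight from the device" is cheap, consider a rotary encoder mounted at a knee hinge: converting its raw count into a joint angle is a single multiplication, with no cameras and no pose solving. The values below are purely illustrative, not AgileVR specifications:

```python
def counts_to_knee_angle(raw_counts, counts_per_rev=4096, zero_offset=1024):
    """Convert a hypothetical rotary-encoder reading at a knee hinge
    directly into a flexion angle in degrees.

    counts_per_rev and zero_offset are made-up example values: a
    4096-count encoder whose reading at a straight leg is 1024.
    """
    return (raw_counts - zero_offset) * 360.0 / counts_per_rev

print(counts_to_knee_angle(2048))  # → 90.0 (knee bent to a right angle)
print(counts_to_knee_angle(1024))  # → 0.0 (leg straight)
```

Compare this one-line conversion with the optical pipeline above, which must detect markers in a camera frame and solve for a pose before any angle exists at all.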
These systems are all improving and constantly surprising us with how smooth and realistic movement in VR can be. The future of tracking has already begun: eye tracking, borrowed from the medical and aerospace fields, is being applied within HMDs to determine what you are focusing on. This could bring impressive additions to people's set-ups, such as retinal scanning for security or eye-strain measurement to manage game time. Hand tracking has also been talked about, and Facebook have stated that "the future for this technology is in all-day wearable AR glasses that are spatially aware". It's an exciting time for tech lovers when the big names begin to push the boundaries.