AR-based navigation
Most of the work I've done around indoor navigation is now confidential, but I also did some early research with personal projects around using AR for navigation, shown below. The main idea is to use AR, specifically augmented images, to establish a known fixed point so that the AR coordinate space can be related to real-world coordinates. Once that's done, as long as AR tracking can be maintained and doesn't drift too much, the current position (and orientation) is known precisely in the real world. This is a method of positioning that requires no beacons, Wi-Fi radio mapping, WiFi-RTT (coming soon), or GPS.
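To make that coordinate relationship concrete, here's a minimal 2D sketch of the idea (the function name and the flat 2D simplification are mine, not taken from the app): given one augmented image whose pose is known in both the AR frame and the real-world frame, any AR-frame point can be mapped into world coordinates with one rotation and one translation.

```python
import math

def ar_to_world(anchor_ar, anchor_world, point_ar):
    """Map a point from AR coordinates into world coordinates, given one
    anchor (an augmented image) whose pose is known in both frames.
    Poses are (x, y, heading_in_radians); kept 2D for simplicity."""
    ax, ay, ah = anchor_ar
    wx, wy, wh = anchor_world
    rot = wh - ah  # rotation between the two frames
    dx, dy = point_ar[0] - ax, point_ar[1] - ay
    return (wx + dx * math.cos(rot) - dy * math.sin(rot),
            wy + dx * math.sin(rot) + dy * math.cos(rot))

# Image detected at AR pose (2, 0) facing heading 0; its surveyed world
# pose is (10, 5) facing pi/2. A phone at AR position (3, 0) then sits
# one meter "ahead" of the image, i.e. at world (10, 6).
phone_world = ar_to_world((2.0, 0.0, 0.0), (10.0, 5.0, math.pi / 2), (3.0, 0.0))
```

A real implementation would use the full 3D poses (quaternions or 4x4 matrices) that the AR libraries expose, but the structure is the same.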
Google recently promoted an experimental maps navigation feature, which I believe uses similar techniques.
First, if you're not familiar with native AR libraries, know that both Google (ARCore) and Apple (ARKit) include augmented image capabilities. These allow detection of an image, similar to the standard computer vision capabilities that have been around for a while now. But in addition to image detection, the augmented part also establishes, and then tracks, the coordinates of that image within the AR space.
This video, using demo code from Google, shows this clearly. When the app detects either of the two images I've registered, it draws a virtual frame around the image and then maintains the frame's position in the AR space as I move the phone around.
Note that these images are not ideal for AR purposes. I was testing the limits, and this was an older version of ARCore. Tracking can still drift, as seen in this video, but things have improved.
AR Based Positioning and Navigation
In this demo, the coordinate axis marker seen early on is the origin of the AR space. The origin is established according to the phone/camera's position when the AR library initializes. After that, all phone movement in the AR space is relative to that origin.
I have an augmented image hanging from the door, and I've coded in the exact real-world geo coordinates of that image. Once the image has been detected, I know my exact real-world position and orientation by transforming between the real-world and AR coordinate systems. With that, I am able to draw visual navigation aids directing me forward and right down the hallway to an animated thingamajig.
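Once the phone's world pose is known, drawing "forward and then right" style aids reduces to a bearing calculation toward the next waypoint. A hedged sketch of that step (the function name is mine; 2D, angles in radians):

```python
import math

def turn_and_distance(pos, heading, target):
    """Relative bearing and distance from the current world pose to a
    target waypoint. pos/target are (x, y) in meters; heading is in
    radians with 0 along +x. Returns (signed_turn, distance) where a
    positive turn means "turn left"."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    bearing = math.atan2(dy, dx)
    # Normalize the turn into (-pi, pi] so the arrow points the short way.
    turn = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return turn, math.hypot(dx, dy)
```

In the app the result would drive the on-screen arrows; here it's just the geometry.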
At the bottom of the app is a 2D floor plan sketch, where the blue dot is my current position. As I walk down the hallway and back, you can see the accuracy of the position.
In this demo, the drift was relatively small, and it immediately corrects itself when the augmented image comes into the camera view again. So one idea to extend the distance over which this technique stays useful would be to place augmented images periodically along the route.
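That re-anchoring idea can be sketched as a small calibration object that is refreshed whenever any registered image is detected, snapping accumulated drift back to zero at that moment. This version is translation-only for brevity (a real one would also re-estimate rotation), and all names are mine:

```python
class DriftCorrector:
    """Holds the latest AR->world calibration. Each time a registered
    augmented image is re-detected, the calibration is replaced, which
    discards whatever drift accumulated since the last detection."""

    def __init__(self):
        self.offset = (0.0, 0.0)  # world position minus AR position

    def on_image_detected(self, image_ar_pos, image_world_pos):
        # Re-derive the calibration from the freshly observed image.
        self.offset = (image_world_pos[0] - image_ar_pos[0],
                       image_world_pos[1] - image_ar_pos[1])

    def world_position(self, ar_pos):
        return (ar_pos[0] + self.offset[0],
                ar_pos[1] + self.offset[1])
```

Each well-placed image along a corridor then acts as a fresh fix, the same way the door image does in the demo.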
An idea I would've liked to experiment with is whether it's possible to dynamically register "augmented images" by taking pictures as part of the setup, instead of using predefined 2D poster-type images. I wonder if Google is doing something similar in their recent AR navigation experiments, since the documentation says Street View imagery is required for it.
Separately, Apple has an interesting capability in ARKit 2 whereby they detect 3D objects, not just 2D images. I haven't used it enough to say how well it works.
Note on AR for Browser & Cross Platform
The state of AR support in browsers is in flux, and its future is unclear. I tried hard to make various AR ideas work in the browser, but in the end had to give up. The leading standard at this time is WebXR, but it's not fully or consistently supported. Chrome had a decent implementation for a while, but it was then removed. Firefox seems to be the current leader, and who knows if/when Safari will ever support it. As for the premise above, the WebXR standard does not yet include support for augmented images.
There are a few options for cross-platform AR apps, but no good ones. The best so far seems to be ViroReact; however, my initial experiments with it were frustrating, to say the least.