AR Navigation App

Dec. 2016


Introduction

This Augmented Reality Navigation app was designed to help tourists traveling in unfamiliar cities. The goal of this project is to take advantage of AR, which enables new ways of interaction, to guide the user to destinations and to provide store information along the way without touching the device. This project is the final project for CS 291A, Mixed and Augmented Reality, at UCSB.

Advisor: Tobias Hollerer

Team: Jing Yan, Zhenyu Yang, Junxiang Yao

Role: Responsible for rendering and displaying visual elements (except text) using OpenGL, designing and implementing the interaction functionality using Vuforia, and designing the user interface (except the store information plane) using Illustrator. Participated in code integration in Android Studio.

Programming Language: Java

Tools: Google Nexus 7 tablet, OpenGL ES 2.0, Android Studio, Vuforia SDK, Google Maps API, Illustrator

Github


Objectives

In this project, our plan is to build an Android app that is placed into a Google Cardboard headset during use. To realize the augmented reality functionality, the device's camera stays open, and the app responds to what the camera captures, controlled by the movement of the user's head. Navigation information and facts about nearby stores are overlaid on top of their logos, and these functionalities can be turned on and off by the user without touching the device.

Achieving Interactivity

In this app, because we are using the Vuforia SDK for Android, markers are crucial to realizing augmented reality. When the device's camera detects a marker, Vuforia provides the relative pose between that marker and the camera, which corresponds to the user in this case. Based on that, to make the hand trackable, we decided to put a marker on the hand. The device then knows where the hand is in its own coordinate system by detecting the marker. For better control, a cursor rendered on the screen shows the position of the hand.
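As a rough illustration of this step, the sketch below follows the structure of the standard Vuforia Java samples: each frame, the result for the hand marker is converted into an OpenGL model-view matrix with Tool.convertPose2GLMatrix, and its translation component gives the hand's position in the camera's coordinate system. The target name "hand_marker" is a placeholder, not our actual dataset name.

```java
// Sketch: reading the hand marker's position in the camera's coordinate system
// from a Vuforia frame (based on the standard Vuforia Java samples).
import com.vuforia.Matrix44F;
import com.vuforia.State;
import com.vuforia.Tool;
import com.vuforia.TrackableResult;

public class HandTracker {
    // Last known hand position (tx, ty, tz) in camera coordinates.
    private final float[] handPosition = new float[3];
    private boolean handVisible = false;

    public void onFrame(State state) {
        handVisible = false;
        for (int i = 0; i < state.getNumTrackableResults(); i++) {
            TrackableResult result = state.getTrackableResult(i);
            if (!result.getTrackable().getName().equals("hand_marker")) continue;

            // Convert the 3x4 pose into a 4x4 column-major OpenGL matrix;
            // the translation sits in elements 12..14.
            Matrix44F modelView = Tool.convertPose2GLMatrix(result.getPose());
            float[] m = modelView.getData();
            handPosition[0] = m[12];
            handPosition[1] = m[13];
            handPosition[2] = m[14];
            handVisible = true;
        }
    }

    public boolean isHandVisible() { return handVisible; }
    public float[] getHandPosition() { return handPosition; }
}
```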

Because the user needs to wear the device while using this app, the image captured by the camera and shown on the screen inside the device is the only source of visual information the user receives, so the clarity of the view in the device is vital. We don't want the pointer and the information plane for each shop to block the user's sight for too long. Therefore, control buttons that let the user turn these functionalities on and off as needed are a necessary part of our design. To achieve this, my solution is to define a virtual plane in front of the camera on which all the UI elements are placed. This control panel is parallel to the screen of the device, at a distance of 60 units in the camera's coordinate system.
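The cursor position then follows from simple similar-triangle geometry: scaling the marker's camera-space (x, y) by the ratio of the plane distance (60) to the marker's depth gives its projection onto the control panel. A minimal sketch, with illustrative names:

```java
// Sketch: projecting the hand marker's center onto the virtual UI plane that
// sits 60 units in front of the camera (parallel to the screen).
public class CursorProjector {
    private static final float PLANE_DISTANCE = 60.0f;

    /** @param handPosition (tx, ty, tz) in the camera's coordinate system */
    public static float[] projectToPlane(float[] handPosition) {
        float depth = Math.abs(handPosition[2]);
        if (depth < 1e-3f) depth = 1e-3f;           // avoid division by zero
        float scale = PLANE_DISTANCE / depth;        // similar triangles
        return new float[] { handPosition[0] * scale, handPosition[1] * scale };
    }
}
```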

Pipeline

This app can be divided into two sections, Voice Control and Navigation, according to their functionality and the order of use.

Voice Control

The first step is to mount the device in the headset and put it on with the app launched. Typing the destination on the screen would be tedious, so to simplify the process we implemented voice control. The user only needs to speak to the device and choose the correct result after voice recognition, and the app then jumps to the next section, Navigation. Voice recognition is realized with the built-in speech-to-text (STT) function of the Android system; three guesses of the recognition result of the user's speech appear on the screen. If the correct one is not shown, the user can speak again.
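A minimal sketch of this step using Android's built-in RecognizerIntent, which lets us request up to three recognition guesses; the request code and method names are illustrative, not our exact implementation:

```java
// Sketch: requesting up to three speech-recognition guesses with Android's
// built-in speech-to-text (RecognizerIntent).
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceInputActivity extends Activity {
    private static final int REQ_SPEECH = 100;

    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3); // three guesses
        startActivityForResult(intent, REQ_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> guesses =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            showDestinationChoices(guesses);   // render the guesses on screen
        }
    }

    private void showDestinationChoices(ArrayList<String> guesses) { /* ... */ }
}
```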

Navigation

The navigation part depends heavily on the Google Maps API. The result of the Voice Control section is sent to the Google Maps servers as a request for direction data. In return, the server sends back the GPS locations along the route, which the app decodes into waypoints. The direction and the waypoint locations are visualized as a pointer and small landmarks respectively. The pointer is fixed in the middle of the screen and always points to the next landmark the user should reach.
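The pointer logic can be sketched with Android's Location class, whose bearingTo method gives the bearing from the current position to the next waypoint; subtracting the device heading (read from the orientation sensors) gives the pointer's rotation. The 15 m waypoint-switch radius below is an illustrative value, not the exact one we used:

```java
// Sketch: orienting the on-screen pointer toward the next waypoint returned by
// the direction request.
import android.location.Location;
import java.util.List;

public class PointerController {
    public static float pointerAngle(Location current, List<Location> waypoints,
                                     int nextIndex, float deviceHeadingDegrees) {
        Location next = waypoints.get(nextIndex);
        float bearing = current.bearingTo(next);          // degrees east of north
        float angle = bearing - deviceHeadingDegrees;     // relative to view direction
        // Normalize into [-180, 180) for a shortest-turn rotation.
        while (angle >= 180f) angle -= 360f;
        while (angle < -180f) angle += 360f;
        return angle;
    }

    // Advance to the next landmark once the user is within ~15 m of the current one.
    public static int advanceWaypoint(Location current, List<Location> waypoints, int idx) {
        if (idx < waypoints.size() && current.distanceTo(waypoints.get(idx)) < 15f) {
            return idx + 1;
        }
        return idx;
    }
}
```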

The second functionality in this section is displaying location-based information about the stores. When the camera detects a logo (images of logos are uploaded as the dataset), the app retrieves the geographical location from the GPS module and the name of the store, and sends them to the Google Places API to request a real-time JSON response. This response is parsed into basic data (store name, rating, price level, image reference, place id) and shown on an information plane covering the logo, since the logo images are used as markers.
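A sketch of the parsing step, assuming the standard Places API "Place Search" JSON layout and using Android's bundled org.json classes; field names such as price_level and photo_reference come from that API format:

```java
// Sketch: parsing the fields we display (store name, rating, price level,
// image reference, place id) out of a Google Places API JSON response.
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class StoreInfo {
    public String name;
    public double rating;
    public int priceLevel;
    public String photoReference;
    public String placeId;

    public static StoreInfo fromPlacesResponse(String json) throws JSONException {
        JSONObject root = new JSONObject(json);
        JSONObject place = root.getJSONArray("results").getJSONObject(0);

        StoreInfo info = new StoreInfo();
        info.name = place.getString("name");
        info.rating = place.optDouble("rating", 0.0);
        info.priceLevel = place.optInt("price_level", -1);
        info.placeId = place.getString("place_id");

        JSONArray photos = place.optJSONArray("photos");
        if (photos != null && photos.length() > 0) {
            info.photoReference = photos.getJSONObject(0).getString("photo_reference");
        }
        return info;
    }
}
```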

User Interface Design

Because we couldn't find a screen recording app that allowed us to record the Voice Control section, this section is not included in the demo video shown later; the demo image of this section is shown above. After launching the app, the user sees this page directly. There are three elements in this layout: text, a microphone icon for launching the voice control function, and a transparent cursor (an opaque one might block the text). The cursor is controlled by the hand wearing the marker, because it is placed at the center of the marker projected onto the virtual plane I defined in the camera's coordinate system. A transparent black mask obscures the background scene in order to better present the visual elements. In terms of the text, the first string the user sees is "Please indicate your destination"; this string is not selectable and does not respond to the cursor. When the cursor hovers on the microphone button, the button enlarges slightly as a response, and the microphone changes from gray to green when the button is pressed, which tells the user to speak the destination.
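Since the cursor and the buttons share the same virtual plane, hover detection reduces to a 2D distance test; the sketch below shows the idea, with illustrative radius and scale values rather than our exact numbers:

```java
// Sketch: hover feedback on the virtual UI plane. The cursor and every button
// live on the same plane (z = 60 in camera coordinates), so hovering is a simple
// 2D distance test; the button's draw scale is enlarged while the cursor is inside
// its radius.
public class HoverButton {
    public float centerX, centerY;       // position on the virtual plane
    public float radius = 6.0f;          // hit radius on the plane (illustrative)
    public float scale = 1.0f;           // applied to the button's model matrix

    public void update(float cursorX, float cursorY) {
        float dx = cursorX - centerX;
        float dy = cursorY - centerY;
        boolean hovered = dx * dx + dy * dy <= radius * radius;
        scale = hovered ? 1.2f : 1.0f;   // enlarge a little as hover feedback
    }
}
```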

After the voice recognition is complete, the microphone button becomes gray automatically and three strings are shown on the page. The strings have the same hover effect, enlarging like the other elements. When the user chooses one of the strings, the whole interface moves toward the camera until it can no longer be seen, and the transparent black mask fades away. If none of the three strings is correct, the user can press the microphone button again to relaunch the voice control function.

In order to achieve the best result, I drew all of the buttons from scratch and attached them as textures to the circles I drew using OpenGL ES 2.0. In terms of the colors of the buttons, since we are using the Android voice control API, Google Maps, and Vuforia, I used the green from Android's logo, the red from Google's logo, and the green from Vuforia's logo in the microphone button, the navigation button, and the information detail button respectively.
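Loading each hand-drawn button as a texture follows the usual Android OpenGL ES 2.0 pattern; the sketch below uses the standard GLES20/GLUtils calls, with a placeholder resource name:

```java
// Sketch: loading a hand-drawn button image as an OpenGL ES 2.0 texture to be
// bound when drawing the circle geometry.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public class ButtonTexture {
    public static int load(Context context, int resourceId) {
        int[] handle = new int[1];
        GLES20.glGenTextures(1, handle, 0);

        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();

        return handle[0];   // texture name to bind when drawing the button circle
    }
}
```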

In order to eliminate shake and increase precision when selecting a button on the virtual plane, I placed a torus around each button and the cursor to represent the selection progress. When the hand touches a button, the button enlarges and the buffer torus starts to draw. After the drawing process completes, the button rotates 180 degrees to reveal its gray back side, and the corresponding functionality is turned off.
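The dwell-based selection can be summarized as a per-frame counter that both drives the torus sweep and fires the toggle when it completes; the frame threshold and method names below are illustrative:

```java
// Sketch: dwell-based selection. While the cursor stays on a button, a per-frame
// counter drives how much of the buffer torus is drawn; once the dwell completes,
// the button flips 180 degrees to its gray side and its functionality toggles.
public class DwellSelector {
    private static final int DWELL_FRAMES = 60;    // ~2 s at 30 fps (illustrative)
    private int dwellCounter = 0;

    /** @return sweep fraction [0, 1] used to draw the progress torus */
    public float update(boolean cursorOnButton, Runnable onSelected) {
        if (cursorOnButton) {
            dwellCounter++;
            if (dwellCounter == DWELL_FRAMES) {
                onSelected.run();                  // rotate button, toggle feature
            }
        } else {
            dwellCounter = 0;                      // leaving the button resets progress
        }
        return Math.min(1.0f, dwellCounter / (float) DWELL_FRAMES);
    }
}
```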

Most of the time, the user does not need to manipulate the buttons, which means the marker, and therefore the hand, is not supposed to always be visible to the camera. Thus, I hide the buttons when the marker is not found by the camera. I also added an out-of-view counter, which counts how many frames have passed since the marker was last seen. If the counter is smaller than a threshold, the buttons do not move out of view, since the marker may simply be moving too fast rather than actually being out of view. With this in place, the jittering of the buttons disappeared and their movements became smoother.
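A minimal sketch of the out-of-view counter, with an illustrative threshold of 10 frames:

```java
// Sketch: suppressing button jitter with an out-of-view counter. The buttons are
// hidden only after the hand marker has been lost for more than a threshold number
// of consecutive frames, so brief tracking dropouts from fast hand motion do not
// make the UI flicker.
public class MarkerVisibilityFilter {
    private static final int OUT_OF_VIEW_THRESHOLD = 10;   // frames (illustrative)
    private int framesSinceLost = 0;

    /** Call once per camera frame with whether Vuforia found the hand marker. */
    public boolean shouldShowButtons(boolean markerFoundThisFrame) {
        if (markerFoundThisFrame) {
            framesSinceLost = 0;
        } else {
            framesSinceLost++;
        }
        return framesSinceLost <= OUT_OF_VIEW_THRESHOLD;
    }
}
```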

Final Demo


Lab testing.

Practical testing.

Password: arnav

Future Works

As shown in the demo video, we were not able to make the app mountable and usable in a headset like Google Cardboard, because we had only three weeks to finish it. There is also room for improving the design, especially the cursor in the Navigation section.







