In 2014, Google Glass was a new wearable with novel sensory and interaction affordances. Could we use this new technology to make outdoor navigation easier for the visually impaired? My team set out to explore this design challenge.

 

Fall 2014 | Class Project

Teammates

Carlos Miguel Lasa, Vibhore Vardhan, Zaky Prabowo, Andy Huang, Nikil Mane

My Role

User Research - Contextual Inquiries, Observations, Interviews, Scenarios, Personas

Design & Prototyping - Lo-Fi Wizard of Oz, Mid-Fi, Participatory Design

Usability Testing - Heuristic Evaluation, Think-Aloud User Testing

 

 

 

Project Overview

The challenge was to design a Google Glass app that would help blind users navigate. My team was intrigued by the portability, eye-level camera, voice recognition, bone conduction headphones, and simple gesture interface that Glass afforded. We set out to explore how to leverage Glass to create an app genuinely useful to blind navigators.

We followed a user-centered design methodology, starting with interviews and observations with visually impaired volunteers to understand our potential users. Validating our design with users at every step, our ideas evolved from an initial step-by-step audio map, to a landmark-based audio map, and eventually to a more user-empowering location-aware audio note-taking system. The final design lets a user record an audio navigation note at any time and place and automatically attaches a GPS location tag to the note. The next time the user is in the same area, they can play back their earlier notes to navigate more easily. The final design received positive feedback from our user volunteers, and we are currently researching the feasibility of building it into a full-fledged product.

 
 

 

 

 

1

User Research

 

Understanding what it is like to be a blind navigator was our first challenge. Through active recruiting, we conducted in-depth interviews, observations, and contextual inquiries with nine visually impaired volunteers, representing both genders and a wide range of ages, occupations, and levels of visual impairment. These studies helped us understand our users' navigation challenges and habits.

We learned A LOT from this research! 

 
 

Affinity Diagrams

extract insights from research notes

 
  • The visually impaired face navigation challenges in many physical locations: sidewalks, intersections, buildings, parking lots, stairways, and crowds.
  • They use a variety of navigation tools: dogs, canes, magnifiers, maps, and signs.
  • The severity of visual impairment spans a wide spectrum from low vision to complete blindness.
  • Many blind navigators have highly sensitive touch, hearing, and smell, which compensate for their limited vision.

 

 

Work Models

capture users' interactions with surrounding people and objects

 

Personas

represent different vision levels, navigation habits, and demographics

 

Scenarios

analyze different navigation types

 
 
 
 

Why Glass?

We investigated whether Google Glass was a good platform for our navigation app. The conclusion was yes. According to our user research, cost and availability issues aside (not the focus of this exploratory project), Google Glass had the following advantages over a smartphone that were important to blind users:

BUILT-IN BONE CONDUCTION HEADSET

Being able to hear the surrounding environment at all times is crucial to blind navigators. Normal cellphone headsets block environmental sounds, but a bone conduction headset lets users hear their surroundings while interacting with Google Glass.

MORE HANDS-FREE OPERATION

As we learned during user research, blind navigators carry many devices and tools with them, often with both hands occupied. Mobile phones were frequently inaccessible (in a pocket) and required prolonged two-handed operation, while Google Glass was accessible (on the face) and required only occasional one-handed operation.

 

 

 

2

Needs Assessment

 

With a good understanding of our users' navigation needs from the user research, we moved on to designing the application's features through three low-fidelity iterations, testing each prototype using the Wizard-of-Oz method.

 
 

Prototype 1

turn-by-turn audio navigation

After evaluating the feasibility of different technologies, we decided to focus on navigation assistance using GPS and map data. Features that would require computer vision were not yet accurate enough to be useful. Focusing on GPS and map data could address three of the four scenarios shown above, excluding only the intersection-crossing scenario.

Considering the complexity of the navigation situations our users encountered daily, we mocked up a high-granularity navigation system. It would orient users with turn angles in degrees and distances in steps, conveyed by voice, as in the examples below.

 

“In 100 steps, turn left onto Bancroft way.”
“Keep straight for 600 steps.”
“In 100 steps, turn 40 degrees counter clockwise onto Creek Walk”.
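
To make this concrete, here is a rough sketch (Python, purely illustrative; the stride length and function names are assumptions, not our prototype code) of how such prompts could be generated from a route segment:

import math

STRIDE_M = 0.75  # assumed average stride length in meters

def steps(distance_m):
    """Convert a distance in meters to an approximate step count."""
    return round(distance_m / STRIDE_M)

def turn_phrase(angle_deg):
    """Describe a turn; negative angles are counter-clockwise (left)."""
    direction = "clockwise" if angle_deg > 0 else "counter clockwise"
    if math.isclose(abs(angle_deg), 90, abs_tol=10):
        return "turn right" if angle_deg > 0 else "turn left"
    return "turn {} degrees {}".format(round(abs(angle_deg)), direction)

def instruction(distance_m, angle_deg, street):
    return "In {} steps, {} onto {}.".format(steps(distance_m), turn_phrase(angle_deg), street)

print(instruction(75, -40, "Creek Walk"))
# -> In 100 steps, turn 40 degrees counter clockwise onto Creek Walk.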

 

User Testing

Users had difficulties following the directions issued by our app.

I don’t usually count steps while walking.
It’s hard to tell how much 40 degrees is.
— User 2

On the other hand, users seemed to prefer a different navigation approach.

When I first learn a route, I write down [in braille] landmarks like fire hydrants and fountains. I take the notes with me the next time.
— User 1

Findings

Users prefer to navigate by landmarks rather than by street names and turns.
Based on this finding, we developed a new prototype that navigates by landmarks.

Prototype 2

Landmark Based Navigation

This prototype orients users by announcing surrounding landmarks retrieved from Google Maps.

 

“Passing landmark: Golden Bear Café.”
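
As an illustration of where these announcements could come from, the sketch below queries Google's public Places Nearby Search web service for named places around the user; the coordinates and API key are placeholders, and this is not the code our prototype actually used:

import requests

PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def nearby_landmarks(lat, lng, api_key, radius_m=50):
    """Return names of mapped places within radius_m meters of a location."""
    params = {"location": "{},{}".format(lat, lng), "radius": radius_m, "key": api_key}
    response = requests.get(PLACES_URL, params=params).json()
    return [place["name"] for place in response.get("results", [])]

# Announce the first result near a placeholder campus location
names = nearby_landmarks(37.8692, -122.2590, api_key="YOUR_KEY")
if names:
    print("Passing landmark: {}.".format(names[0]))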

 

I created a storyboard depicting our persona Andrea navigating around the UC Berkeley campus using Google Glass.

User Testing

The landmarks I use are usually subtle.
— User 1

It appeared that Google Maps landmarks (buildings, stores, offices, etc.) were not the ones blind navigators found helpful. Through further user interviews, we collected the following useful landmark types:

  • Topography: uphill, downhill, flat
  • Objects: pole, bus stop benches
  • Audible & Sensory: sound and airflow changes around buildings and alleys, fountains, other pedestrians, cars
  • Ground Texture: smooth, rough, protruding tree roots, manhole covers
  • Smell: aromas from coffee shops, bakeries, and flower shops (Note: some low-vision conditions are accompanied by a loss of smell, so olfactory landmarks are not applicable to everyone)

Enactment

At this point, we felt we needed to build more empathy with our users. We needed to experience firsthand what it was like to navigate without sight using our app. This turned out to be an easy task: find a stick and walk down the street with eyes closed! We enacted the three storyboard scenarios with me as Andrea, Vibhore as my student, and Nikil as Google Glass.

 
 
 
 

As we followed the streets and landmarks outlined in the storyboard, I, the blind navigator, soon grew frustrated with the navigation hints Glass was giving me; they were simply not helpful. Knowing that I was passing Golden Bear Cafe and that the next landmark was Sather Gate didn't help me figure out how to get to Sather Gate. Instead, I became very interested in the smell coming from the roadside trash cans, because it kept me off the edge of the road.

Findings

Testing prototype 2 revealed the diversity and unconventionality of the landmarks blind navigators rely on: compared with sighted people's landmarks such as street names and building names, blind people's landmarks were highly personal and individualized, and they were usually absent from conventional location databases such as Google Maps.

Based on these findings, we developed our third prototype.

Prototype 3

Location aware audio-annotation

Realizing the highly individualized nature of our users' landmarks, we decided the navigation tips should be generated by the users themselves rather than by the Google Maps database. Instead of a top-down, machine-generated navigation approach, what our users needed was a versatile annotation system that managed bottom-up, organically generated navigation tips. Our new prototype would allow users to record their own audio navigation notes at any time and location and automatically attach a GPS location tag to each note. The next time the user is in the same area, they can play back their earlier notes to navigate more easily.

 

<user initiates>
<user records>
“There are 9 steps leading up to South Hall entrance.”
<app initiates>
<app retrieves>
“Turn left when feeling the gravel asphalt surface.”

We received positive user feedback on the new design. Users liked the freedom to control what annotations they recorded. It was time to move on to a mid-fidelity prototype.

Our three rounds of iteration during needs assessment can be summarized as follows:

 
 

 

 

3

Interaction Design

 

Now that we had found an effective navigation method through lo-fi prototypes, we moved on to designing the application's interactions. Designing for Google Glass required more creativity and experimentation than usual: its slide and tap gestures, tiny head-mounted screen, and voice interface had few example designs we could reference. We tackled this challenge by experimenting with many prototypes.

 
 

Interaction Prototypes

Decision Tree

In this lo-fi prototype, we developed a game board-like simulation to imagine what it would be like to record landmarks using the Wayfinder app on Google Glass. There are various decision trees and cycles based on how the user interacts with the device.

 

Gesture vs Voice

In these prototypes, we compared the effectiveness of gesture interface and voice interface.

Access Menu - Gesture

Access Menu - Voice

 

Retrieve Landmarks - Gesture

Retrieve Landmarks - Voice

Usability Testing

During think-aloud user testing, many users expressed concerns over the inaccuracy of voice recognition (especially with ambient noise), and some felt uncomfortable talking to Glass in public spaces. As a result, many users preferred gestural control. For the gestural control, they pointed out that some interactions required many different gestures, making them difficult to remember.

We decided to adopt a gestural interaction approach. Combining user testing feedback and additional heuristic evaluation, we eliminated some secondary features and standardized the gestures to create a more simplified and streamlined interaction experience.

Final Interaction Design

After iterating through prototypes, we narrowed the interaction scope to several key functions: recording a landmark, retrieving saved landmarks near the user, a tutorial mode, and an alert function for saved landmarks the user is approaching.
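
As a rough illustration of the streamlined gesture mapping (a hypothetical sketch, not our actual Glass code), browsing, selecting, and exiting can be modeled as a small state machine:

class LandmarkMenu:
    """Sketch of the standardized gestures: swipe sideways to browse,
    tap to select, swipe down to exit to the top of the app menu."""
    ITEMS = ["Record landmark", "Retrieve nearby landmarks", "Tutorial", "Alerts"]

    def __init__(self):
        self.index = 0

    def on_gesture(self, gesture):
        if gesture == "swipe_forward":                     # next item in the list
            self.index = min(self.index + 1, len(self.ITEMS) - 1)
        elif gesture == "swipe_back":                      # previous item
            self.index = max(self.index - 1, 0)
        elif gesture == "tap":                             # activate the spoken item
            return "Selected: " + self.ITEMS[self.index]
        elif gesture == "swipe_down":                      # exit to the top menu
            self.index = 0
            return "Main menu"
        return self.ITEMS[self.index]                      # item to read aloud next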

I made the following interaction flowchart. The numbered blue bubbles are references to design justifications, which are listed after the flowchart.

 
[Interaction flowchart with numbered design-justification annotations]
 

 

 

4

Final Prototype

 

We tried different ways to build a functional prototype: visual design tools such as JustInMind and InVision, the Google Glass SDK, and interactive environments like Processing and WearScript.

 

We ended up using WearScript for the final prototype, as it offered the best functionality among the prototyping environments we tried. WearScript had limitations, though: we had to drop the alert functionality from our scope because the environment did not support it well. We focused on saving and retrieving landmarks, the two most important features to test in our experiment. We also faced limitations with the GPS sensor, so we employed Wizard-of-Oz techniques to simulate location awareness in the prototype.
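
In practice, the Wizard-of-Oz simulation meant a hidden operator supplied the user's position by hand while the prototype decided which notes to announce. A minimal sketch of that idea (hypothetical names; the real prototype ran in WearScript):

import math

class WizardLocationSource:
    """Stand-in for the GPS sensor: an operator marks the participant's
    position as they walk the route (Wizard of Oz)."""
    def __init__(self):
        self._current = None

    def wizard_update(self, lat, lng):
        self._current = (lat, lng)

    def current_location(self):
        return self._current

def meters_between(a, b):
    """Rough equirectangular distance in meters, adequate at walking scale."""
    lat1, lng1 = map(math.radians, a)
    lat2, lng2 = map(math.radians, b)
    x = (lng2 - lng1) * math.cos((lat1 + lat2) / 2)
    return 6371000 * math.hypot(x, lat2 - lat1)

def notes_to_announce(saved_notes, location_source, radius_m=25):
    """Return saved (position, audio clip) pairs near the simulated position."""
    here = location_source.current_location()
    if here is None:
        return []
    return [note for note in saved_notes if meters_between(note[0], here) <= radius_m]

# Example: the operator updates the position; nearby notes surface automatically
wizard = WizardLocationSource()
saved = [((37.8712, -122.2583), "south_hall_steps.wav")]
wizard.wizard_update(37.8712, -122.2584)
print(notes_to_announce(saved, wizard))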

 

 

 

5

Evaluation

 

With the interactive prototype coming into shape, we conducted additional user testing to evaluate its effectiveness. We evaluated 3 high-level tasks and 1 low-level task with 3 blind users. We also conducted cross-product comparison and within-product comparison. You can view our evaluation plan here.

 

Overall, users found the Wayfinder usable and useful:

  • The low-level tasks of landmark recording and retrieval were successfully completed by all our users. 
  • All three described the interaction design as "simple" and "intuitive". The gestures involved were not difficult to learn, and navigation through our menus was straightforward.
  • All three users thought that having the ability to take location-based audio notes and later retrieve them was helpful for navigation.
This app would be helpful to me.
— Tester 1
I can see myself taking notes using this app.
— Tester 2
Yes, I would use this app.
— Tester 3

Users also pointed out areas for future improvement. You can read the full details here:

  • One user suggested we put more aural markers to help users keep track of their location in the application. An example of this was to explicitly mark the ends of a menu or a list, either by having an end marker audio notification, or by implementing a rotating menu to cycle back to the first item once the end is reached.
  • All three users thought the sideways swipe gesture was intuitive for browsing through a list, but the up and down swipes were less intuitive and not very consistent, due to limitations of the prototyping environment. We therefore need to make sure our tutorial explains, in plain language, that an up swipe goes back to the previous menu level and a down swipe exits the current session and returns to the top of the app menu.
  • Two of the three volunteers had trouble hearing the voice menu in a relatively noisy outdoor environment, a limitation imposed by the volume of the Google Glass bone conduction speakers.

 

 

 

6

Future Work

 

The user evaluation provided some validation of Wayfinder's effectiveness. However, given the limitations of our hardware, prototyping environment, and access to testers, there are several areas where we could improve on these results.

 

Conduct More User Testing

We would like to test our prototype with a control group, having users navigate a path both with their current methods and with Wayfinder. This would let us run more quantitative tests of how much Wayfinder actually improves navigation.

Find a Better Prototyping Environment

We plan to develop a mobile prototype to work around the limitations of the WearScript environment. A mobile platform would allow for better user interactions and fuller access to the features of the Google Glass platform. This would let us implement features left out of the existing prototype, such as actual GPS location awareness and user alerts for nearby landmarks. Developing a real application would also open the door to features we believe would be helpful, such as reporting the user's position relative to a landmark (bearing and distance). It would also let us use the Google Glass camera, which the current prototype does not touch, to help identify landmarks or provide additional cues that let blind users "see" with the device.

Design a schema for organizing landmarks

Currently Wayfinder's landmarks are all treated in the same manner. Being able to modify or augment landmarks as well as sorting or ranking them using machine learning or crowdsourcing would provide an additional layer of intelligence to help the blind navigate. Enriching the landmark database with other data sources such as Google Maps, Yelp or Foursquare could also prove valuable.

Explore New Technologies

Beyond Google Glass, the possibilities for applications that enhance blind navigation are endless. Bluetooth beacon technology is becoming more popular for precise landmark identification and navigation. Wearable technologies such as smartwatches and bone conduction headsets could also supplant Google Glass, and a different combination of technologies might provide a better approach to this problem.

 
 

TEAM GLASS WITH ONE OF OUR USERS