In 2014, Google Glass was a new wearable with novel sensory and interactive affordances. Could we use this new technology to make outdoor navigation easier for the visually impaired? My team set out to explore this design challenge.

 

Fall 2014 | Class Project

Teammates

Carlos Miguel Lasa, Vibhore Vardhan, Zaky Prabowo, Andy Huang, Nikil Mane

My Role

User Research - Contextual Inquiries, Observations, Interviews, Scenarios, Personas

Design & Prototyping - Lo-Fi Wizard of Oz, Mid-Fi, Participatory Design

Usability Testing - Heuristic Evaluation, Think-Aloud User Testing

 

 

 

 

 

1

User Research

 

Understanding what it is like to be a blind navigator was our first challenge. Through active user recruiting, we conducted in-depth interviews, observations, and contextual inquiries with nine visually impaired volunteers, representing both genders and a wide range of ages, occupations, and levels of visual impairment. These studies helped us understand our users' navigation challenges and habits.

We learned A LOT from this research! 

 
 

Affinity Diagrams

Extract insights from research notes

 
  • The visually impaired face navigation challenges in many physical locations: sidewalks, intersections, buildings, parking lots, stairways, and crowds.
  • They use a variety of navigation tools: guide dogs, canes, magnifiers, maps, and signs.
  • The severity of visual impairment spans a wide spectrum from low vision to complete blindness.
  • Many blind navigators have highly sensitive touch, hearing, and smell, which compensate for their lost vision.

 

 

Work Models

Capture users' interactions with surrounding people and objects

 

Personas

Represent different vision levels, navigation habits, and demographics

 

Scenarios

Analyze different navigation types

 
 
 
 

Why Glass?

We investigated whether Google Glass was a good platform for our navigation app. The conclusion was yes. According to our user research, cost and availability issues aside (not the focus of this exploratory project), Google Glass had the following advantages over a smartphone that were important to blind users:

BUILT-IN BONE CONDUCTION HEADSET

Being able to hear the surrounding environment at all times is crucial to blind navigators. Normal cellphone headsets block environmental sounds, but a bone conduction headset lets users hear their surroundings while interacting with Google Glass.

MORE HANDS-FREE OPERATION

As we learned during the user research, blind navigators usually carried many devices and tools, often with their hands occupied. Mobile phones were frequently inaccessible (in a pocket) and required prolonged two-handed operation, while Google Glass was accessible (on the face) and required only occasional one-handed operation.

 

 

 

2

Needs Assessment

 

With a good understanding of our users' navigation needs from user research, we moved on to designing our application features through three low-fidelity iterations, testing each prototype with the Wizard-of-Oz method.

 
 

Prototype 1

Turn-by-turn audio navigation

After evaluating the feasibility of different technologies, we decided to focus on navigation assistance using GPS and map data. Features that would require computer vision were not yet accurate enough to be useful. Focusing on GPS and map data could provide solutions to three of the four scenarios shown above, excluding only the intersection-crossing scenario.

Considering the complexity of the navigation situations our users encountered daily, we mocked up a high-granularity navigation system: it would orient users with degree angles and step counts, conveyed by voice.

 

“In 100 steps, turn left onto Bancroft Way.”
“Keep straight for 600 steps.”
“In 100 steps, turn 40 degrees counterclockwise onto Creek Walk.”
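To make these prompts concrete, here is a minimal sketch of how such high-granularity directions could be generated from route segments. The 0.75 m stride length and all function names are illustrative assumptions, not our actual implementation:

```python
# Sketch: generate high-granularity voice prompts from route segments.
# Assumes an average stride of 0.75 m to convert distances into step counts.

STRIDE_M = 0.75  # assumed average step length in meters

def meters_to_steps(meters: float) -> int:
    return round(meters / STRIDE_M)

def turn_phrase(angle_deg: float) -> str:
    """Describe a turn; near-right angles use left/right, odd angles use degrees."""
    if abs(abs(angle_deg) - 90) < 15:  # close enough to a plain turn
        return "turn left" if angle_deg < 0 else "turn right"
    rotation = "counterclockwise" if angle_deg < 0 else "clockwise"
    return f"turn {abs(angle_deg):.0f} degrees {rotation}"

def prompt(distance_m: float, angle_deg: float, street: str) -> str:
    return f"In {meters_to_steps(distance_m)} steps, {turn_phrase(angle_deg)} onto {street}."

print(prompt(75, -90, "Bancroft Way"))  # In 100 steps, turn left onto Bancroft Way.
print(prompt(75, -40, "Creek Walk"))    # In 100 steps, turn 40 degrees counterclockwise onto Creek Walk.
```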

 

User Testing

Users had difficulty following the directions issued by our app.

I don’t usually count steps while walking.
It’s hard to tell how much 40 degrees is.
— User 2

On the other hand, users seemed to prefer a different navigation approach.

When I first learn a route, I write down [in braille] landmarks like fire hydrants and fountains. I take the notes with me the next time.
— User 1

Findings

Users prefer to navigate by landmarks rather than by street names and turns.
Based on this observation, we developed a new prototype that used landmarks to navigate.

Prototype 2

Landmark-based navigation

This app would orient users by announcing surrounding landmarks retrieved from Google Maps.

 

“Passing landmark: Golden Bear Café.”

 

I created a storyboard depicting our persona Andrea navigating around the UC Berkeley campus using Google Glass.

User Testing

The landmarks I use are usually subtle.
— User 1

It appeared that Google Maps landmarks (buildings, stores, offices, etc.) were not the ones blind navigators found helpful. Through further user interviews, we collected the following useful landmark types:

  • Topography: uphill, downhill, flat
  • Objects: poles, bus stop benches
  • Audible & sensory: sound and airflow changes around buildings and alleys, fountains, other pedestrians, cars
  • Ground texture: smooth, rough, protruding tree roots, manhole covers
  • Smell: aromas from coffee shops, bakeries, and flower shops (note: some low-vision conditions are accompanied by loss of smell, so olfactory landmarks do not work for everyone)

Enactment

At this point, we felt we needed more empathy for our users: to experience firsthand what it was like to navigate without sight using our app. This turned out to be easy to arrange: find a stick and walk down the street with eyes closed! We enacted the three storyboard scenarios, with me as Andrea, Vibhore as my student, and Nikil as Google Glass.

 
 
 
 

As we followed the streets and landmarks outlined in the storyboard, I, the blind navigator, soon grew frustrated with the navigation hints Glass was giving me: they were not helpful. Knowing that I was passing Golden Bear Café and that the next landmark was Sather Gate did not help me figure out how to get to Sather Gate. Instead, I became very interested in the smell from the roadside trash cans, because it kept me off the edge of the road.

Findings

Testing of prototype 2 revealed the diversity and unconventionality of navigation landmarks for the blind: compared with sighted people's landmarks, such as street and building names, blind people's landmarks were highly personal and individualized. In addition, these landmarks were usually absent from conventional location databases such as Google Maps.

Based on these findings, we developed our third prototype.

Prototype 3

Location-aware audio annotation

Realizing the highly individualized nature of our users' landmarks, we decided the navigation tips should be generated by the users themselves rather than by the Google Maps database. Instead of a top-down, machine-generated navigation approach, what our users needed was a versatile annotation system that managed bottom-up, organically generated navigation tips. Our new prototype would let users record their own audio navigation notes at any time and location and automatically attach a GPS tag to each note. The next time the user is in the same area, they can play back previous notes to navigate more easily.

 

<user initiates>
<user records>
“There are 9 steps leading up to South Hall entrance.”
<app initiates>
<app retrieves>
“Turn left when feeling the gravel asphalt surface.”
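The mechanic behind this exchange, saving a note with a GPS tag and replaying it when the user returns nearby, can be sketched as below. This is a minimal illustration under assumed names; the Note structure and the 20-meter trigger radius are not from our actual code:

```python
# Sketch of the annotation model: audio notes tagged with a GPS fix,
# replayed when the user comes back within range of the same spot.
import math
from dataclasses import dataclass

@dataclass
class Note:
    lat: float
    lon: float
    audio_path: str  # recorded clip, e.g. "notes/south_hall_steps.wav"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

notes: list[Note] = []

def record(lat, lon, audio_path):
    """<user initiates, records>: store the note with its GPS tag."""
    notes.append(Note(lat, lon, audio_path))

def nearby(lat, lon, radius_m=20.0):
    """<app initiates, retrieves>: saved notes within radius_m, nearest first."""
    hits = [(haversine_m(lat, lon, n.lat, n.lon), n) for n in notes]
    return [n for d, n in sorted(hits, key=lambda h: h[0]) if d <= radius_m]

record(37.8714, -122.2585, "notes/south_hall_steps.wav")
for note in nearby(37.8715, -122.2585):  # roughly 11 m away: within range
    print("play:", note.audio_path)
```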

We received positive user feedback on the new design. Users liked the freedom to control what annotations they put in. It was time to move on to a mid-fidelity prototype.

Our three rounds of iteration in needs assessment can be summarized as follows:

 
 

 

 

3

Interaction Design

 

Now that we had found an effective navigation method through lo-fi prototypes, we moved on to designing the application's interaction. Designing for Google Glass required more creativity and experimentation than usual: its slide and tap gestures, tiny head-mounted screen, and voice interface had few example designs we could reference. We tackled this challenge by experimenting with many prototypes.

 
 

Interaction Prototypes

Decision Tree

In this lo-fi prototype, we developed a game-board-like simulation to imagine what it would be like to record landmarks with the Wayfinder app on Google Glass. The simulation contains various decision trees and cycles based on how the user interacts with the device.

 

Gesture vs Voice

In these prototypes, we compared the effectiveness of a gesture interface and a voice interface.

Access Menu - Gesture

Access Menu - Voice

 

Retrieve Landmarks - Gesture

Retrieve Landmarks - Voice

Usability Testing

During think-aloud user testing, many users expressed concerns over the inaccuracy of voice recognition (especially with ambient noise), and some felt uncomfortable talking to Glass in public spaces. As a result, most users preferred gestural control. For the gestural control, they pointed out that some interactions required many different gestures, making them difficult to remember.

We decided to adopt a gestural interaction approach. Combining user-testing feedback with an additional heuristic evaluation, we eliminated some secondary features and standardized the gestures to create a simpler, more streamlined interaction experience.

Final Interaction Design

After iterating through prototypes, we narrowed the interaction scope to several key functions: recording an important landmark, retrieving saved landmarks near the user, a tutorial mode, and alerts for saved landmarks the user is approaching.
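To illustrate the standardized gesture scheme, here is a minimal sketch of a swipe-and-tap dispatch loop over the key functions. The gesture names and handlers are hypothetical, not the actual Glass code:

```python
# Sketch: one small gesture vocabulary reused everywhere.
# Swipe to browse the menu, tap to select the highlighted item.

def record_landmark():    print("Recording audio note...")
def retrieve_landmarks(): print("Playing nearby notes...")
def play_tutorial():      print("Starting tutorial...")

MENU = [record_landmark, retrieve_landmarks, play_tutorial]
index = 0  # currently highlighted menu item

def on_gesture(gesture: str):
    global index
    if gesture == "SWIPE_FORWARD":
        index = (index + 1) % len(MENU)
    elif gesture == "SWIPE_BACKWARD":
        index = (index - 1) % len(MENU)
    elif gesture == "TAP":
        MENU[index]()  # run the highlighted function

for g in ["SWIPE_FORWARD", "TAP"]:  # browse once, then select
    on_gesture(g)                   # -> "Playing nearby notes..."
```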

I made the following interaction flowchart. The numbered blue bubbles are references to design justifications, which are listed after the flowchart.

 
[Flowchart: final interaction design with numbered annotations]
 

 

 

4

Final Prototype

 

We tried different ways to make a functional prototype: visual tools such as JustInMind and InVision, the Google Glass SDK, and interactive environments like Processing and Wearscript.

 

We ended up using Wearscript for the final prototype, as it offered the best functionality among the prototyping environments we tried. Wearscript had limitations, though: we had to drop the alert functionality from our scope because the environment did not support it well. We focused on saving and retrieving landmarks, the two most important features to test in our experiment. We also faced limitations with the GPS sensor, so we employed "Wizard of Oz" techniques to simulate location awareness in our prototype.
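In practice, the simulation amounted to a hidden operator standing in for the GPS sensor. A minimal sketch of that Wizard-of-Oz loop, reusing the hypothetical nearby() helper from the earlier annotation sketch:

```python
# Sketch: a hidden "wizard" shadows the test user and types in positions;
# the prototype reacts as if a real GPS fix had arrived.

def wizard_loop():
    while True:
        line = input("wizard> lat,lon (blank to quit): ").strip()
        if not line:
            break
        lat, lon = (float(x) for x in line.split(","))
        for note in nearby(lat, lon):  # from the annotation sketch above
            print("Glass plays:", note.audio_path)
```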

 

 

 

5

Evaluation

 

With the interactive prototype taking shape, we conducted additional user testing to evaluate its effectiveness. We evaluated three high-level tasks and one low-level task with three blind users, and conducted both cross-product and within-product comparisons. You can view our evaluation plan here.

 

Experiment Design

Task list 

We asked users to complete the following tasks as part of the test:

 

Control conditions and experimental variants

We planned to conduct cross-product comparison and within-product comparison:


Attributes and Measurements

Test users

With sufficient time, we would have tested users with varying degrees of blindness to ensure the product was applicable to the whole blind population. Our preferred test group would be 15-20 people of varied ages, genders, and occupations. We would use a within-subjects experiment, with the same users performing multiple tasks, because we did not have access to enough users to conduct a between-subjects experiment without bias.

Given the time and resource constraints, however, we planned to test with three users. These users graciously volunteered to evaluate our functional prototype:

 

Evaluation Results

  • All users successfully completed the low-level tasks of landmark recording and retrieval.
  • All three described the interaction design as "simple" and "intuitive". The gestures were not difficult to learn, and navigating our menus was straightforward.
  • All three users thought the ability to take location-based audio notes and retrieve them later was helpful for navigation.

Users also pointed out areas for future improvement. You can read more details here.

This app would be helpful to me.
— Tester 1
I can see myself taking notes using this app.
— Tester 2
Yes, I would use this app.
— Tester 3

 

 

6

Future Work

 

The user evaluation provided some validation of Wayfinder's effectiveness. However, given the limitations of our hardware, prototyping environment, and access to testers, there are several areas of future work that could strengthen our results.

 
  • Recruit more users to test the application. We would like a control condition in which users navigate a path both with their current methods and with Wayfinder. This would let us quantify how much Wayfinder improves navigation for the user.
  • Develop a mobile prototype to work around the limitations of the Wearscript environment. The mobile platform would allow for richer user interactions and fuller access to the features of the Google Glass platform. This would let us implement features missing from the current prototype, such as actual GPS location awareness and alerts for nearby landmarks. A full application would also open the door to features we believe would be helpful, such as reporting positional awareness with respect to landmarks: how the user is oriented toward a specific landmark, and at what bearing and distance (a sketch of this computation follows the list). It would also let us use the Google Glass camera, unused in the current prototype, to help identify landmarks or provide additional cues that allow the blind to "see" with the device.
  • Impose a schema for organizing landmarks; currently all landmarks are treated the same way. The ability to modify or augment landmarks, and to sort or rank them using machine learning or crowdsourcing, would add a layer of intelligence to help the blind navigate. Enriching the landmark database with other data sources such as Google Maps, Yelp, or Foursquare could also prove valuable.
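The positional-awareness idea in the second item above (bearing and distance from the user to a landmark) reduces to standard spherical formulas. A minimal sketch, with illustrative names; the distance half could reuse the haversine_m() helper from the earlier annotation sketch:

```python
import math

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (0 = north, 90 = east) from the user toward a landmark."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

# Combined with a head-orientation reading from Glass, this could drive
# prompts like "Sather Gate is 80 meters away, slightly to your left."
```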

Beyond Google Glass, the possibilities for applications that enhance blind navigation are wide open. Bluetooth beacon technology is becoming popular for more precise landmark identification and navigation. Wearables such as smartwatches and bone conduction headsets could also supplant Google Glass, and a different combination of technologies may prove a better fit for this problem.

 

TEAM GLASS WITH ONE OF OUR USERS