Wayfinder Evaluation Experiment Design


We identified the following tasks for the user to complete as part of the test:


We planned to conduct both cross-product and within-product comparisons:



If we had had sufficient time, we would have wanted to test users with varying degrees of blindness to ensure the product was applicable to the whole blind population. Our preferred test group size would be 15-20 people with varied ages, genders, and occupations. We would use a within-subjects design so that the same users complete multiple tasks, because we did not have access to a large enough pool of users to conduct a between-subjects experiment without bias. 

Given the time and resource constraints, however, we planned to test with three users. These users graciously volunteered to evaluate our functional prototype:

Wayfinder Evaluation User Feedback

Users identified several improvement areas in our prototype:

  • One user suggested we add more aural markers to help users keep track of their location in the application. One example was to explicitly mark the ends of a menu or list, either with an end-of-list audio notification or with a rotating menu that cycles back to the first item once the end is reached.
  • All three users thought the sideways swipe gesture was intuitive for browsing through a list, but the up and down swipes were less intuitive and inconsistent, due to limitations of the prototyping environment. We therefore needed to make sure our tutorial used plain language to explain that an up swipe returns to the previous menu location, and a down swipe exits the current session and returns to the top of the app menu. 
  • Two of the three volunteers had trouble hearing the voice menu in a relatively noisy outdoor environment, a limitation imposed by the volume of the Google Glass bone conduction speakers.
  • One user expressed confusion with the menu title "Retrieve Nearby Landmarks". The word "nearby" did not give him a clear sense of how close the landmarks would be, or in what order they would appear. This gave us an opportunity to further refine the design of this functionality in future prototypes.
  • One user pointed out that a richer database that incorporated other navigation resources could enhance the effectiveness of the application. Such resources could include Google Maps street names and FourSquare/Yelp business names.
  • Two users expressed a desire to edit and organize landmarks more effectively. We should support editing and deleting landmarks, as well as a multi-layer notification schema based on the importance of each landmark. 

OpenBAS User Research Finding Summary


  • Managers do not want fine-grained control of the system and prefer automation. (see: participatory design)
  • Managers like the idea of having a building automation system (this depends on how well the system delivers, but also on managers’ attitudes toward automation in terms of professional values and job security). (see: interviews, usability studies)
  • Managers currently spend little time on building management activities (this doesn’t mean that additional management activities that save energy or increase comfort should not be done). (see: interviews, diary study)
  • Managers do not want to increase the amount of time spent on building management. (see: interviews)
  • Managers prefer performing building management activities on physical interfaces rather than the current web interface. (see: interviews, diary study)
  • Managers think the current web interface has many features they do not need. (see: interviews, usability studies, card sorting)
  • Among the values that openBAS provides, office managers consider convenience more important than energy saving, while higher management considers energy saving more important than convenience. (see: interviews, usability studies)

Overall market fit

openBAS provides two values (see: expert interviews)

  • Convenience: Automates building management through smart devices, so office managers do not need to manage HVAC and lighting manually (most desirable), or can manage them in a centralized setting (secondarily desirable). 
  • Energy saving: Auto adjusts HVAC and lighting settings to energy efficient modes based on demand to save energy. 

Small-to-medium-size offices currently do not want openBAS because (see: expert interviews)

  • Not enough smart devices to connect to openBAS (expert interviews)
  • Not enough incentive to upgrade to smart devices in the short term
    • Convenience: openBAS does not significantly increase convenience because current management is already easy enough
    • Financial: energy cost savings do not justify device upgrade costs

Small-to-medium-size offices will eventually want openBAS, but it may take a long time (see: expert interviews)

  • Building code will eventually require buildings to install smart devices, but this will take a long time
  • The technology will help save energy and make management more convenient, but it may take a long time before the technology becomes reliable and affordable. 

qdo Background Concepts

Task, Job & Worker

  • A task is a Linux command.
  • A number of tasks are grouped into a queue (see Stephen’s example).
  • A job is a queue submitted as a batch job.
  • Workers are the processes (“process” may not be the correct terminology) that can run simultaneously within a job. 

To illustrate with an example:

We can type the following qdo command on Edison: “qdo launch (2 jobs, 4 workers, 96 cores)”. 
What then happens on Edison is that the 2 jobs are submitted simultaneously through the Edison command “qsub [jobname]”; the 96 cores are divided among the 4 workers, so each worker gets 24 cores. Each worker independently grabs a task from the pending list of the two jobs and performs “qdo do” on it, using the 24 cores (i.e., one node) it has access to. Once a worker finishes a task, it goes back to the pending list of the queue and grabs another task. This repeats until the queue is complete. 
The reason we submit two jobs to run simultaneously here, instead of one bigger job that combines the two, is that, among other technical considerations, shorter jobs requiring less walltime wait less in the NERSC queue. 
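The worker/core arithmetic above can be illustrated with a minimal Python sketch. This is not qdo itself, just a toy simulation under stated assumptions: workers are run round-robin rather than concurrently, and the `run_queue` function and its task names are hypothetical.

```python
from collections import deque

def run_queue(tasks, num_workers=4, total_cores=96):
    """Toy simulation of qdo-style workers draining a shared pending list.

    Each worker gets an equal share of the cores (96 / 4 = 24, i.e. one
    node) and repeatedly grabs the next pending task until the queue is
    empty.  Real qdo workers run concurrently; this sketch cycles through
    them round-robin for clarity.
    """
    cores_per_worker = total_cores // num_workers   # 96 // 4 = 24
    pending = deque(tasks)
    completed = []
    while pending:
        for worker in range(num_workers):
            if not pending:
                break
            task = pending.popleft()   # worker grabs a task (like "qdo do")
            completed.append((worker, task, cores_per_worker))
    return completed

log = run_queue([f"task{i}" for i in range(6)])
```

With 6 tasks and 4 workers, the first four tasks go to workers 0-3, and the remaining two are picked up by whichever workers free up first, each task always running on a 24-core share.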

What Happens When A Task Fails?

If a task fails, the rest of the queue still finishes executing. To figure out which task failed, run “qdo tasks queuename”, which lists all the task commands in the given queue along with their statuses. 
Then you can run “qdo retry queuename” to put all failed tasks in the queue back into pending.

Some Terminology Distinctions

  • qdo retry: puts all failed tasks back into pending
  • qdo recover: if a job crashes for system reasons, puts the tasks that were running back into pending
  • qdo rerun: puts all tasks in a given queue back into pending. You can add flags to specify a subset of tasks to rerun; e.g., “qdo rerun exitcode=0” would put only the succeeded tasks back into pending.

  • Waiting: tasks whose dependencies have not yet run
  • Pending: tasks that have cleared all dependencies and are waiting for execution
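The distinctions between retry, recover, and rerun can be sketched as state transitions. This is an illustrative Python model, not qdo’s actual implementation; the task records are hypothetical dicts with "state" and "exitcode" fields.

```python
# Illustrative sketch of qdo's retry / recover / rerun semantics as
# described above.  Each task is a dict like:
#   {"name": ..., "state": ..., "exitcode": ...}

def retry(queue):
    """qdo retry: put all failed tasks back into pending."""
    for task in queue:
        if task["state"] == "failed":
            task["state"] = "pending"

def recover(queue):
    """qdo recover: after a job crash, put running tasks back into pending."""
    for task in queue:
        if task["state"] == "running":
            task["state"] = "pending"

def rerun(queue, exitcode=None):
    """qdo rerun: put tasks back into pending; an exitcode filter
    (e.g. exitcode=0, the "succeeded" code) selects only that subset."""
    for task in queue:
        if exitcode is None or task.get("exitcode") == exitcode:
            task["state"] = "pending"

queue = [
    {"name": "a", "state": "failed",    "exitcode": 1},
    {"name": "b", "state": "running",   "exitcode": None},
    {"name": "c", "state": "succeeded", "exitcode": 0},
]
retry(queue)   # only task "a" returns to pending
```

Calling recover on the same queue would then move "b" back to pending, and rerun with exitcode=0 would put the succeeded task "c" back as well.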

Feature Prioritization Research

Question 1: Verify whether a dashboard is needed


  • How often do you monitor your queues?
  • How frequently should information on the UI be updated?
  • What’s the biggest value of the GUI to you, compared to command line?
  • How often do tasks fail?
  • What do you do when tasks fail?
  • Do you launch qdo add, and qdo launch on separate occasions, or together?
  • How many queues and how many tasks do you usually have?

Question 2 (if Question 1 validates the dashboard)



  1. Here are the potential items to display (perhaps good as a graphical card sorting activity; if a certain job status does not trigger action, it should not be on the dashboard). How often would you access the above information (w/o clicks, w/ one click, w/ two clicks, w/ three clicks)? Remember: the fewer clicks an item requires, the more cluttered the screen will be.
  2. Once items have been sorted, ask (especially for the w/o click items):
    • Describe a situation in which this information would lead you to do something, OR give me an example of the actual data that would appear and the action you would take in response. 
  3. Once items have been pruned in (2), ask about the number-related items:
    • What useful comparisons (targets, standards, past data, etc.) would allow you to see these items of information in meaningful context?
  4. How would you group these items to organize the information on the dashboard?



a.    task status in a queue

1)    Task list (list of individual tasks)
2)    Task status (list of the statuses of individual tasks, without task detail)
3)    Failed tasks (list of failed tasks)
4)    Completed tasks (list of completed tasks)
5)    Waiting/pending/running/rerunning (currently not feasible in the backend; ask Stephen if this would be useful) tasks (a list of tasks in each of these statuses)

b.    Queue status

1)    Queue status (the number of tasks in the queue that are waiting, pending, running, succeeded, or failed) 
2)    Queue failures (a simple warning that there is at least one task that failed in the queue)
3)    Queue completion (notification that a queue is completed)

c.    General stats (* marks the number items that need comparison context)

1)    *Number of workers for a queue
2)    *Queue running time list
3)    *Queue waiting time list
4)    *Number of tasks in a queue
5)    Queue status (running/paused/resumed/deleted (a list of each))
6)    Queue status (launched/unlaunched (a list of each))

d.    Any trends/history you want to see?



The card sorting activity may have suggested which actions are needed.
Rank the importance of these actions:

a.    Retry/rerun/recover
b.    Pause/Resume
c.    Delete
d.    View
e.    Edit (currently not feasible in backend)
f.    Other actions

Fill out the action verb with a subject and an object, e.g., “qdo reruns the queue.”

Platform Justifications


Using a mobile app instead of a desktop app is critical to making a good solution.

  • A mobile app would allow people to help out on the go: wherever they are.
  • It also expands the number of users who could have the app, especially in developing countries, where many people don’t have a desktop but do have a smartphone.
  • Using a smartphone also allows the good Samaritan to be guided directly to the person in distress.
  • Additionally, the phone can give updated information as the good Samaritan is in transit.

Given all these factors, we can see that a smartphone app is a better fit for this problem.


Using a smartwatch instead of a smartphone provides a few key improvements.

  • It allows for quicker response time. The few seconds it takes to pull out a smartphone can make the difference in saving a life. EMS timing is critical, and the app is trying to shave minutes and seconds off current EMS response times, so the speed of a smartwatch response is worth it.
  • Using a smartwatch lowers the likelihood of a responder missing the notification. Since a watch is almost always on your wrist, notifications are more likely to reach their intended targets; with a phone, people sometimes don’t feel or hear them in their pockets or purses.
  • The app allows the good Samaritan to have their hands free as they’re updated with information. Having to hold a phone in order to read or hear instructions can limit their ability to help out in a situation.

With all these factors under consideration, we think a smartwatch app is appropriate in this situation. There would certainly be a phone app as well, because smartwatch adoption is low at this time, but for those who have a smartwatch, the seconds saved add up and could save many lives.