Evaluation Toolbox for Aviation Technical Publications
National Institute for Aviation Research &
Wichita State University
Summary: This article describes the Evaluation Toolbox (Chaparro et al., 2004), an aid for understanding the process of evaluating the usability of aviation maintenance documentation, from the initial development stage through the final pre-publication stage. The toolbox provides techniques to help technical writers better understand their users and to evaluate their documentation more effectively and efficiently.
The goal of an aviation technical writer is to produce maintenance documentation that will be clearly understood by all users; in other words, to make the documentation usable. To ensure this, the usability of the documentation must be evaluated with diverse groups of users throughout the development process. While there are many definitions of usability, in the context of product design the most widely recognized is the ISO 9241 standard, which defines usability as the quality of use of a human-machine interface.
The usability of an interface is a measure of the effectiveness, efficiency and satisfaction with which technicians can achieve specified goals in a particular environment with that interface.
Effectiveness – Does the product enable the technician to accomplish their intentions, tasks, and goals?
Efficiency – Can the technician complete their tasks with a minimum of effort and without errors?
Satisfaction – Does the technician feel good about their ability to accomplish their goals using the product?
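As a loose illustration (not part of the toolbox itself), the three measures above can be summarized from evaluation-session data. In the sketch below, the session records, field layout, and rating scale are all hypothetical, chosen only to show one plausible way the measures map onto numbers:

```python
# Hypothetical summary of usability-evaluation sessions along the three
# dimensions above. All records and field choices are invented for
# illustration; they are not taken from the Evaluation Toolbox.

# Each record: (task completed?, minutes taken, error count, satisfaction 1-5)
sessions = [
    (True, 12.0, 0, 4),
    (True, 15.5, 1, 3),
    (False, 20.0, 3, 2),
    (True, 10.0, 0, 5),
]

n = len(sessions)

# Effectiveness: share of tasks the technicians completed successfully.
effectiveness = sum(1 for done, _, _, _ in sessions if done) / n

# Efficiency: average time on task and average error count.
avg_time = sum(t for _, t, _, _ in sessions) / n
avg_errors = sum(e for _, _, e, _ in sessions) / n

# Satisfaction: mean rating on the (assumed) 1-5 preference scale.
satisfaction = sum(s for _, _, _, s in sessions) / n

print(f"effectiveness: {effectiveness:.0%}")  # 75%
print(f"avg time (min): {avg_time:.2f}")      # 14.38
print(f"avg errors: {avg_errors:.2f}")        # 1.00
print(f"satisfaction: {satisfaction:.2f}/5")  # 3.50
```

In practice an evaluation would define the thresholds for "acceptable" on each measure before testing; the point here is only that each dimension yields a distinct, comparable number.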
Although most of the usability literature to date has come from the field of human-computer interaction, the aircraft maintenance manual can also be viewed as the technician's interface with aircraft assembly, service, and repair. In the same way that usability influences the effectiveness, efficiency, and satisfaction of any product design, it can benefit the user of aviation technical publications. In aircraft maintenance, where a high value is placed on productivity and a high cost is associated with human error, usability is critical.
The importance of usability can be illustrated by the consequences of a lack of usability. A lack of usability in the product's design results in:
"Workarounds" – When the product is difficult to use or does not match the technician's mental model of the system, technicians invent "workarounds" to compensate for the inadequacies of the procedure.
Low usage levels – Technicians use the written procedure as little as possible.
Higher error rates – Technicians must rely on recalling similar experiences or on best guesses when they lack previous experience.
Dissatisfaction – Frustration levels are higher when the technician finds the manual difficult to use.
In contrast, incorporating usability in a manual's design results in:
Consistency – The documentation will be reliable, allowing the technician to know what is meant and what to expect.
Supporting the technician's intentions and goals – When technicians know that the maintenance documentation will help them achieve their goals, they will use the manual.
Satisfaction – Higher approval of the manual lowers the technician's frustration level and gives the user a sense of control via their ability to accomplish the task.
The key to involving users in the manual development process is to take an iterative or cyclical approach. An iterative process is one in which the procedure is evaluated, corrections are made, the procedure is tested again, further corrections are made, and so on. User feedback is gathered early and often by applying evaluative methods at each stage of the process, and this feedback drives the development of the manual's procedures. Even if some important errors are not found during one evaluation, the next evaluative cycle offers another opportunity to identify the problem. Each iteration is an opportunity to bring in real users and evaluate different aspects of the evolving procedure. User responses drive the writer's revisions and continued improvement.
Usability Evaluation Methods
Many usability evaluation methods have been developed to assess product usefulness and ease of use. These methods, which were originally developed in the area of software usability, fall into two basic categories – evaluation of the product's usability by the product's developers and usability professionals (referred to as expert reviews) and evaluation by the end users of the product (referred to as user testing).
Each type of usability evaluation method has unique advantages and functions within the overall development process. These methods also collect different types of data, which can be classified as:
- Performance or preference – Performance data is collected when the user "performs" a task, and includes measures such as error rates and time to complete a task. Preference data is collected when the user reports their thoughts, feelings, and preferences, and includes measures such as participant rankings and answers to questions. Depending on the goals of the study, both performance and preference data can be used either quantitatively or qualitatively.
- Objective or subjective – Objective data records what the user actually does, e.g., time to complete a task. Subjective data is collected from the user's interpretation of what they see, hear, or do.
- Quantitative or qualitative – Quantitative data can be counted directly; qualitative data must be coded before it can be used quantitatively. For example, the number of tasks completed correctly or incorrectly is quantitative, whereas qualitative data is analyzed by the type of response, such as positive/negative or satisfied/not satisfied.
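To make the last distinction concrete, the sketch below codes free-text preference responses into quantitative counts. The responses and the simple keyword rule are invented for illustration; real coding schemes are defined and validated by the evaluation team, not by a fixed word list:

```python
# Hypothetical coding of qualitative preference responses into quantitative
# counts. The responses and the keyword rule are invented examples.

responses = [
    "The warnings were easy to find",
    "I was confused by step 4",
    "Clear diagrams, easy to follow",
    "The torque values were hard to locate",
]

# Assumed cue words signalling a negative response (illustrative only).
NEGATIVE_CUES = ("confused", "hard", "difficult", "unclear")

def code_response(text: str) -> str:
    """Code a free-text response as 'negative' or 'positive'."""
    lowered = text.lower()
    return "negative" if any(cue in lowered for cue in NEGATIVE_CUES) else "positive"

# Tally the coded categories: the counts are now quantitative data.
counts = {"positive": 0, "negative": 0}
for r in responses:
    counts[code_response(r)] += 1

print(counts)  # {'positive': 2, 'negative': 2}
```

Once coded this way, qualitative responses can be compared across evaluation cycles just like task-completion counts.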
The Evaluation Toolbox for Aviation Technical Writers was designed to give a comprehensive overview of the usability evaluation methods best suited to this domain. The methods are described using the Practical Review System (PRS) framework developed by Rhodes (2003). The toolbox also includes templates, guidelines, and forms that can be adapted to an organization's individual project needs, providing a "turnkey" way to incorporate usability evaluation into the organization's technical writing process. Cognitive processes and Norman's Action Cycle are also discussed in terms of writing for aviation maintenance. The toolbox takes a practical and usable approach to assist experienced technical communicators in reviewing and evaluating the usability of technical documentation.
Chaparro, A., Rogers, B., Hamblin, C., & Chaparro, B. (2004). Evaluation Toolbox for Aviation Technical Writers (Technical Report). Washington, DC: Federal Aviation Administration.
Rhodes, J. S. (2003). A proposal for evaluating usability testing methods: The practical review system (PRS). http://webword.com/moving/prs.html