The Role of Usability Research in Designing

Children’s Computer Products

Libby Hanna, Ph.D.

IMG Usability

Microsoft Corporation

Kirsten Risden, Ph.D.

IMG Usability

Microsoft Corporation

Mary Czerwinski, Ph.D.

Microsoft Research

Microsoft Corporation

Kristin J. Alexander, Ph.D.

Hardware Ergonomics and Usability

Microsoft Corporation

 

Introduction

Usability research with children has often been considered either too difficult to carry out with unruly subjects, or not necessary for an audience that is satisfied with gratuitous animations and funny noises. In addition, traditional measures of usability such as productivity indices and speed and efficiency of task completion are not generally appropriate to use for children’s products. However, our research at Microsoft indicates that the usability of a product is closely related to children’s enjoyment of it. Therefore we have worked hard to develop sound methodologies for usability testing with children. In this chapter, we describe the methods we use during various stages of product development, design guidelines that have resulted from our research and useful practices for working with product teams and upper management that we have learned along the way.

Microsoft usability engineers have been working on children’s products for many years, but it was only recently that usability was formally incorporated as a standard practice during product design. It might be useful to begin this chapter by examining the evolution of children’s usability research at Microsoft. At the outset, the Microsoft Kids product teams felt strongly that children’s engagement was more important than, or at least equally important to, usability. In fact, some product teams were abandoning usability work during the development cycle because it appeared too difficult to evaluate product ideas for "fun." When usability engineers attempted to operationally define engagement for research purposes, it was clear that a stable and consistent definition was not available.

One of the Kids usability staff's most productive research efforts was taking on this problem. Through literature review, surveys, and response tracking over product generations, the usability engineers were able to define at least some of the components of a fun product (Risden, Hanna, & Kanerva, 1997). Factor analyses of children's responses to questions assessing liking and usability of computer software revealed dimensions of engagement such as "familiarity," "control," and "challenge" that fit with the research and theoretical discussions of others (e.g., Lepper, 1988; Malone, 1980; Whalen & Csikszentmihalyi, 1991). Most importantly, this research demonstrated that ease of use is a critical determinant of engagement, and as such is key to any children's product that is to be a success. The research also helped product teams value the background and skills that usability engineers could bring to new and difficult issues in the design of children's computer products, and it gave us a powerful way to broaden our argument for why usability work is important during Kids product design.

However, it was not always obvious that the Kids usability group was making progress in establishing a precedent for user-centered design. As with other teams, battles were often lost due to schedule or budget, and these losses were very disappointing given how much effort the team was exerting to get successful products out the door. We knew progress had occurred, however, when one of the usability staff was elevated to a "team lead" role and allowed to attend upper-level staffing meetings. Here the usability lead was able to fight strongly for product redesign, and even schedule changes, when feedback from children demonstrated a need for it. Eventually, all usability issues observed in the laboratory were integrated into a database and given the same kind of high priority treatment in the company as other software "bugs."

At about the same time the Kids usability engineers were researching the dimensions of engagement, departments within Microsoft were reorganizing into smaller, functionally organized teams. Usability decentralized, and the Kids usability engineers moved next door to their teammates on the product teams. Usability engineers now reported to a divisional usability manager while also having a dotted-line reporting relationship to program managers. This arrangement ensured that usability engineers were rewarded not only for their usability expertise, but also for how well they worked with their teams and toward their teams' goals. Over time, it became clear that this dual reporting structure elevated the status of the usability engineers within the Kids product teams. The close proximity to teams is now something no usability engineer would give up willingly.

The usability engineering staff that works on children's products is uniquely qualified to advocate for usability engineering. As developmental and educational psychologists, the staff have adopted an energetic, rigorous approach to product design and research, and have earned respect for their contribution to the product design cycle.

In the rest of this chapter, we describe the range of methods we use during the course of product development. Then we discuss some of the child-technology interaction guidelines that have emerged from our work on specific Microsoft children’s products. Finally, we suggest ways to work with product teams as a usability engineer. All the expertise in the world will not guarantee success if the usability engineer does not know how to communicate effectively with his or her team. It is our sincere hope that other children’s usability engineers will benefit from the lessons learned by the Microsoft Kids usability team.

Methods of Usability Research with Children

Usability Research Processes

When we conduct usability research, we follow three basic processes common to the field of human-computer interaction (e.g., Dumas & Redish, 1993; Nielsen, 1993; Rubin, 1994). The first is to analyze the user—to understand the user’s skills, knowledge, and expectations. For children’s products, we begin with a targeted age range and supply information about children within that range. One common age range for children’s products is 3- to 6-year-olds. We help the team understand both 3-year-olds and 6-year-olds and how they interact with computer programs. A 3-year-old may not understand certain words like "select," whereas a 6-year-old may resent a "babyish" character on screen. Individual differences in temperament and attention span also have to be taken into account. The product needs to accommodate children who click madly around a screen as well as those who sit back and wait to be told what to do.

The second process is to analyze tasks—to understand the user activities that a product is intended to support. When analyzing usage by children, we look at the goals of the product and the goals of children. The goal of the product may be to teach the alphabet, but children will probably not play with the product because they want to learn the alphabet. A child’s goal may be to explore and find out what happens, or to win a game.

The third process is to design a product in iterative phases based on the analysis of users and their tasks. This puts the previous two steps into the actual design process. Iterative design is put into practice by testing an idea, revising it according to feedback from the data, and testing the revised idea.

Usability Research Techniques

Table 1 lists the techniques we have used in our usability research and when each method is most useful in the product development cycle. The paragraphs that follow provide details and examples of how each technique has been implemented in our work.

Table 1. Techniques for usability research with children.

Research technique            Applicable stage of product development
Expert reviews                Throughout
Site visits                   Concept; preliminary design
Survey construction           Concept; preliminary design; beta testing
Card-sorting tasks            Concept; preliminary design
Paper-prototype tests         Preliminary design
Iterative laboratory tests    Prototyping; developing
Longitudinal tests            Beta testing; final products

Expert reviews. At the very beginning of the design process, expert reviews routinely provide quick checks on design to catch obvious problems. We look at design specifications or storyboards and use child development milestones as well as common usability guidelines to check for violations. For example, a game for 4-year-old children that requires reading is not age-appropriate. A design that provides a simple yellow circle on the screen for help functionality violates the usability guideline of "recognition over recall" (Nielsen, 1993). The button needs an image that gives more information about its function, so that a child can immediately recognize its purpose instead of having to memorize it through repeated use. Issues and recommendations can be communicated to the team in a written format or in informal meetings such as "spec bashes," and can also be summarized into general guidelines that serve as a resource for other teams. When usability engineers sit on a team and participate in design meetings, expert reviews can be incorporated continuously in the design process.

Site visits. Site visits provide information about children’s use of products in context. Site visits for word-processing or spreadsheet programs often focus on looking at the work environment and how to increase productivity or efficiency within that environment. For children’s products, we tend to focus on how children use products over time, examining what makes a product retain appeal and re-playability, or how quickly children master certain types of interactivity. Our goal is to increase the chance that children will choose our products over many other options in their free time.

In homes, we have observed children using competitive products in order to gather information about the relative importance of features. For example, when we began designing an educational title that incorporated a unique help system, we observed children at home turning off the help systems in competitive products because the help systems were disrupting game play. This observation helped us formulate a design for a less intrusive help system.

We conduct site visits at schools and day care centers to gather group data. This is especially useful for exploring children’s preferences and other qualitative reactions to design ideas. We have interviewed children at schools about Internet use and their current interests in commercial products. We have taken character drawings to day care centers and gathered preference data (using a paired-comparison technique, where children are shown all possible pairs of characters and asked to choose which they like better in each pair).
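The paired-comparison tally can be summarized with a short script. The character names and responses below are hypothetical; the technique simply counts how often each item wins across all the pairs shown.

```python
def paired_comparison_ranking(items, choices):
    """Rank items by how often each was chosen across all pairs.

    items:   list of item names shown to children
    choices: list of ((a, b), winner) records, where (a, b) is the
             pair shown and winner is the one the child preferred
    """
    wins = {item: 0 for item in items}
    for (a, b), winner in choices:
        assert winner in (a, b), "winner must be one of the pair shown"
        wins[winner] += 1
    # Sort by descending win count to get a preference ranking
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Hypothetical data: three character sketches, two children,
# each shown all three possible pairs
characters = ["robot", "dragon", "wizard"]
responses = [
    (("robot", "dragon"), "dragon"),
    (("robot", "wizard"), "robot"),
    (("dragon", "wizard"), "dragon"),
    (("robot", "dragon"), "dragon"),
    (("robot", "wizard"), "wizard"),
    (("dragon", "wizard"), "dragon"),
]
print(paired_comparison_ranking(characters, responses))  # dragon ranks first
```

With larger samples the win counts can feed a formal model such as Bradley-Terry scaling, but a simple tally is often enough to guide a design decision.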

Survey construction. In some cases, we have had to adapt typical survey methods to be age-appropriate. For example, when asking children between about 5 and 10 years of age to rate attributes of computer products, we developed a vertical scale with a smiling face at the top and a frowning face at the bottom (Risden et al., 1997), shown in Figure 1.

Figure 1. A scale for asking children to rate software on attributes of usability and engagement (Risden et al., 1997).


Researchers can read questionnaire items out loud and then ask children to draw a line across the scale to mark "how much" of something is true of what they are rating. We find that children are able to respond more reliably to a pictorial representation such as this with its meaningful anchors (smiling and sad faces) and concepts of more and less (vertical rather than horizontal presentation of the scale) than to Likert-type scales.
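To score such a scale, the researcher can measure how far up the line the child's mark falls and convert that distance into a number. The 1-10 scoring range and the measurements below are illustrative assumptions, not the values used in the original study.

```python
def scale_score(mark_height_mm, scale_height_mm=100.0, bottom=1.0, top=10.0):
    """Convert the measured height of a child's mark on the vertical
    smiley-face scale into a numeric rating.

    mark_height_mm:  distance of the mark from the bottom of the scale
    scale_height_mm: total length of the printed scale
    Returns a rating between `bottom` (frowning face) and `top`
    (smiling face), linearly interpolated; marks are clamped to the scale.
    """
    fraction = max(0.0, min(1.0, mark_height_mm / scale_height_mm))
    return bottom + fraction * (top - bottom)

print(scale_score(75.0))  # a mark three-quarters of the way up -> 7.75
```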

When children under 5 years of age are participating in laboratory tests, we often ask parents to rate features for both their own liking and their children’s liking. These ratings can be compared to observations of children’s behavior to validate our reports on what the children find appealing. We also create questionnaires for parents to answer specific research questions. For example, before revising a parent-report feature in an educational product, the team wanted to know how much parents actually used and benefited from the feature. We developed a questionnaire that asked about the feature in the midst of questions about many other features. This helped avoid focusing too much attention on the feature of interest.

Card-sorting tasks. Another research technique that can contribute to early product concept design is card sorting. Children as young as eight years of age can be asked to sort cards, containing pictures or words, into spontaneous or predetermined categories. When software groups features, tools, or activities into pre-assigned categories that children must navigate to access, this procedure can be used to check how well the pre-assigned categories fit children’s expectations.

This technique was used during the design phase for the revision of Creative Writer (Microsoft, 1994), a word-processing program for children. Figure 2 is a screen shot from the original program, where numerous icons for different categories of tools are displayed across the top of the screen (in the toolbar for a new document). These include such categories as font format tools, clip art, page format tools, editing tools, etc.

Figure 2. The "new document" screen in the first version of Creative Writer (© 1994 Microsoft Corporation), showing icons for different categories of tools across the top.

In one of the usability studies done for the redesign, children were asked to sort cards containing the names of word-processing tools. The results from a cluster analysis helped the team re-group the tools into four main categories—writing tools, page tools, picture tools, and idea tools—which are represented in four large icons across the top of the screen in Creative Writer 2 (Microsoft, 1996), shown in Figure 3.

Figure 3. The "new document" screen in Creative Writer 2 (©1996 Microsoft Corporation), showing the reorganization of tool categories after card-sorting studies with children.
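The cluster analysis behind such a card sort can be sketched as follows: tally how often each pair of cards lands in the same group, convert those counts to distances, and repeatedly merge the closest clusters. This is a simplified single-linkage version written from scratch; the tool names are illustrative, and the analysis actually performed for Creative Writer 2 may have used different software and linkage rules.

```python
def card_sort_clusters(sorts, n_clusters):
    """Single-linkage agglomerative clustering of card-sort data.

    sorts: one entry per child, where a sort is a list of groups
           and a group is a list of card names
    Returns `n_clusters` clusters (sets of card names).
    """
    cards = sorted({card for sort in sorts for group in sort for card in group})
    # Co-occurrence: how many children put each pair in the same group
    together = {}
    for sort in sorts:
        for group in sort:
            for i, a in enumerate(group):
                for b in group[i + 1:]:
                    key = frozenset((a, b))
                    together[key] = together.get(key, 0) + 1

    def distance(a, b):
        # Distance = fraction of children who did NOT group the pair
        return 1.0 - together.get(frozenset((a, b)), 0) / len(sorts)

    clusters = [{card} for card in cards]
    while len(clusters) > n_clusters:
        # Merge the two clusters with the smallest single-linkage distance
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(distance(a, b)
                               for a in clusters[ij[0]] for b in clusters[ij[1]]),
        )
        clusters[i] |= clusters.pop(j)
    return clusters

# Illustrative sorts of six word-processing tools by three children
sorts = [
    [["font", "bold"], ["clip art", "stickers"], ["margins", "page size"]],
    [["font", "bold", "margins"], ["clip art", "stickers"], ["page size"]],
    [["font", "bold"], ["clip art", "stickers", "page size"], ["margins"]],
]
print(card_sort_clusters(sorts, 3))
```

Single linkage is the simplest agglomerative rule; on real card-sort data, average linkage often produces more balanced groups and is worth comparing.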

Paper prototype tests. Even very preliminary functional designs can be tested with children by using paper materials. Screen shots, sketches, or storyboards can be put together in a notebook. Children can "click" on things by pointing to them with their finger. As they click on navigational elements, the researcher can turn the pages to take them to the appropriate place in the program. Active elements can be simulated by cutting out the pieces and allowing children to manipulate them. Once children are interacting with the design, errors and observed confusion can directly predict errors children will make when using the future computer product.

For example, preliminary designs for the science exploration title The Magic School Bus Explores in the Age of Dinosaurs (Microsoft, 1996) were tested using drawings of the front of the bus at various locations (shown in Figures 4-6).

Figure 4. A drawing of the initial "front of the bus" screen used in a paper prototype test for The Magic School Bus Explores in the Age of Dinosaurs (©1996 Microsoft Corporation).

Figure 5. A drawing of the destination map that appears in the front of the bus after clicking the steering wheel (The Magic School Bus Explores in the Age of Dinosaurs, ©1996 Microsoft Corporation).

Figure 6. A drawing of the front of the bus as it appears after clicking Cretaceous Mongolia in the destination map (The Magic School Bus Explores in the Age of Dinosaurs, ©1996 Microsoft Corporation).

As children navigated to the various locations, it was easy to assess how well they could remember the required sequence to "drive" the bus. When they first entered the bus, they saw an image of the school through the bus window to show them they were in the present (Figure 4). Then they had to click on the steering wheel to bring up a "destination map" in the window (Figure 5). After clicking on one of the dots on the map, an image of the resulting location in time would appear in the window (Figure 6). In the test, children had trouble remembering the particular step to bring up a destination map, so the final design included alternate ways to access the map.

Team members may discount results from paper tests, feeling that the additional audio and animation of the computer product will eliminate children’s confusion. However, these very preliminary stages are the only time at which radical redesign is still possible. With a little forethought and creativity, a precise and rigorous simulation of interactivity will usually convince teams that the paper test is an adequate measure. And it offers the opportunity to observe children interacting with the design almost from the start.

Iterative laboratory tests. Children can participate in traditional laboratory usability tests with only minor adjustments in procedure (see Hanna, Risden, & Alexander, 1997, for guidelines on adapting these tests for children). We use a typical laboratory set-up, with a one-way mirror into an observation area for team members, and videotaping equipment to record behavior and computer screen captures. Children participate one-on-one with the researcher. During these tests we can quickly uncover problems by observing children maneuvering through products on their own. Children are very accustomed to asking for and getting help from others when they use new products. However, because our goal is to find out what children can do with the product without intervention, we try to turn the situation around. Instead of helping them progress with the product as it is intended to be used, we try to find out what they want to do with the product.

As products begin to be built, usability is assessed in an iterative research process. A feature is tested, revised, then tested again to make sure that children understand its components. The "peek-a-boo" game for the animatronic doll ActiMates Barney (Microsoft, 1997) was tested repeatedly to ensure that the sensors inside the doll were sensitive enough to respond consistently to small children’s attempts to cover the doll’s eyes. The early learning title My Personal Tutor Preschool Workshop (Microsoft, 1997) was tested on a monthly basis to make sure that young children could understand audio instructions, recognize and use navigational icons, find hotspots, and complete problems in activities. For example, the icons for returning to the main screen and for changing levels went through several versions, until testing showed consistent success with using the host character pointing backwards for Return and using a scale with dots representing the 1-3 levels of the game for Levels. The final versions are shown in the screen shot of a sample activity from the product in Figure 7 (also shown in the color plates).

Figure 7. A screen shot of the Patterns activity in My Personal Tutor Preschool Workshop (©1997 Microsoft Corporation), showing final Return and Levels icons (on the bottom of the screen) after iterative testing.

Laboratory testing can offer a quick check on product appeal, especially for gathering information on how to avoid negative reactions. While children may not display much positive emotion in the laboratory, any sign of negative emotion deserves attention. Observing signs of disengagement like rocking, sighing, or turning away from the product gives a picture of how much something may or may not appeal to children. For example, when testing ActiMates Barney, several children were observed ending their play when the doll wouldn’t respond to initiations of the peek-a-boo game. ActiMates Barney was presented as a play partner for children, so naturally children expected him to respond to their actions, just like a real partner would. This issue was resolved by making all of the doll’s routines interruptible whenever children initiated the peek-a-boo game. With this change, peek-a-boo became their favorite activity. A picture of a child playing with ActiMates Barney is shown in Figure 8 (in the color plates).

Longitudinal tests. Longitudinal research on children’s products often evaluates the effectiveness of educational content or techniques (e.g., Kulik & Kulik, 1991; Lieberman, 1997). When a product is completed or nearly completed, we can conduct similar tests under industry time constraints by bringing children into the laboratory for repeat visits over 2 to 3 weeks. We plan for a total amount of time that approximates the average amount of time children may use the product over 2 to 3 months. Pre-tests and post-tests offer quick checks on learning, and comparisons of how quickly children can navigate through products during repeat visits give additional insight into usability.

We have put beta versions of products in homes and surveyed parents over several days or weeks to assess younger children’s progress through games and the discoverability of key features. For example, a team designed a product that constrained children to following a set progression of activities. Putting the product in the home and gathering responses from parents about children’s frustration with the lack of freedom was crucial feedback to get the team to change to a less restrictive design. (Although the team had observed this frustration in earlier lab testing, they felt strongly that children would accept the constraint over time. Home testing persuaded them otherwise.) Survey construction work for gathering this information included both qualitative questionnaires and scale construction, involving the use of statistical techniques like factor analysis and reliability assessments.
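The reliability assessment mentioned above is commonly done with Cronbach's alpha, which checks whether a scale's items vary together across respondents. A minimal computation is sketched below; the ratings are invented for illustration, and the team's actual procedure may have differed.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a k-item scale.

    item_scores: list of k lists, each holding one item's scores
                 across the same n respondents
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        # Sample variance (n - 1 denominator)
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical ratings from five children on a three-item "fun" scale
items = [
    [7, 9, 5, 8, 6],
    [6, 9, 4, 8, 7],
    [7, 8, 5, 9, 6],
]
print(round(cronbach_alpha(items), 2))  # -> 0.95
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency for a research scale.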

Designers may approach a product for children with the attitude "I know kids" or "I have kids, so I can design for kids," and they may come up with appealing and age-appropriate graphics and humor. But how to design a game that children can easily figure out how to play may still elude them. Putting the product in the proper context for children and looking at it through their eyes is the crucial task. Observing even a few children using the product during early product design will catch a substantial number of issues that are difficult to predict ahead of time. Evaluating products with children at the end of the product development cycle provides invaluable information for future directions.

User Interface Design Guidelines for Children

From our experiences observing children interacting with both successful and unsuccessful UI designs, and the work of others both in academia and the computer industry (e.g., Druin & Solomon, 1996; Haugland & Shade, 1990; Henninger, 1994; Robertson, 1994; Wright, Shade, Thouvenelle, & Davidson, 1989), we have come up with some guidelines to use when designing multimedia environments for children. This is not by any means a complete list. Rather, the guidelines included here reflect helpful principles that have emerged from our work. Table 2 contains the general areas and specific guidelines for each that are discussed in more detail in the following paragraphs.

Table 2. UI guidelines for children’s computer products.

Areas of product design

Guidelines for design

Activities

  • Design activities to be inherently interesting and challenging so children will want to do them for their own sake.
  • Design activities to allow for expanding complexity and support children as they move from one level to the next in use of the product.
  • Design supportive reward structures that take children’s developmental level and context of use into account.

Instructions

  • Present instructions in an age-appropriate format.
  • Design instructions to be easy to comprehend and remember.
  • Onscreen character interventions should be supportive rather than distracting.
  • Allow children to control access to instructional information.

Screen design

  • Design icons to be visually meaningful to children.
  • Use cursor design to help communicate functionality.
  • Use rollover audio, animation, and highlighting to indicate where to find functionality.

Activity Design

The best software, like the best play materials, should provide a tool that allows children to explore the world creatively, using their imaginations to manipulate and assimilate knowledge about the world around them. A successful design gives children control of the computer environment and allows them to set the pace of the interaction.

Design activities to be inherently interesting and challenging so children will want to do them for their own sake. The best interactivity models real-world play scenarios that children are most interested in (e.g., for preschoolers, dress-up and fantasy role-playing, construction play, drawing and coloring, action figure and doll play, etc.) and uses intuitive, logical, and familiar procedures for accomplishing activities. Each step should make sense to children so that they can easily remember what to do. Any activity that requires children to signal when they are done should use a logical sequence, such as pulling the chain on a train whistle or opening a gate to let something through.

Design activities to allow for expanding complexity and support children as they move from one level to the next in use of the product. Activities should begin with single-step interactivity, so children do not have to remember several steps in order to complete a problem. As children gain mastery of the activity, steps can slowly be added to increase the challenge and complexity. Support children in mastering the activity by supplying feedback that helps them learn new information. In structured activities where children are asked to supply the correct answer, give feedback for wrong choices to redirect children and teach them the concept (e.g., "That’s blue, I need red"). Activities should never jump levels of difficulty without warning or sufficient practice in the preceding level.

Design supportive reward structures that take children’s developmental level and context of use into account. The best method for motivating children to stick with a computer program may be designing intrinsically-rewarding activities, in which mastering a challenging problem is rewarding in and of itself (Lepper, 1988; Malone, 1980). However, many educational programs make use of extrinsic rewards as well, for example to encourage children to try an activity they find less enjoyable. The following guidelines pertain to the design of extrinsic rewards. These can be as simple as congratulatory audio and animations that play when a child has successfully completed a problem, or as complicated as intricate point systems that accumulate to offer access to new games or prizes.

Rewards should be given consistently even when children repeat problems or activity levels they have done before. Children may fail at harder levels and will need to be able to re-experience the same success at the easier level to gain confidence in moving forward. Children should never be punished (by the absence of a reward) for repeating activities. Reward structures designed to motivate children to continue will need to address young children’s problems with delayed gratification and self-monitoring. Older children (6 years and up) can be highly motivated by point systems and obtaining "high scores." However, younger children are often unable to track their own progress toward end goals unless they are given frequent reminders and intermittent rewards. Finally, humor in rewards should take into account the intellectual level of children in the target age range, offering silliness and incongruity that they can understand (e.g., McGhee, 1971). Some humor directed at adults is appropriate, as many parents accompany their young children on the computer, and anything that makes the experience enjoyable for both children and adults is commendable. But adult-directed humor should never detract from children’s engagement by interrupting progress or confusing children.

Instructions

Present instructions in an age-appropriate format. For example, avoid on-screen text when designing products for young children, and add a feature that enables children of all ages to have any on-screen text read aloud. Children may be beginning readers through second grade, and even older children are not accustomed to reading on computer screens. They usually will not read text unless they absolutely have to.

Design instructions to be easy to comprehend and remember. The language should be clear and simple without use of concepts children have not yet learned. For example, don’t reference left and right with young children. Use onscreen characters that speak instructions. Children pay more attention to characters than to audio alone, and they can click the characters to hear instructions repeated. Add highlighting or animation of objects that are being referred to in instructions to help direct attention.

Onscreen character interventions should be supportive rather than distracting. Make sure the onscreen character’s comments are appropriately timed in relation to onscreen content. Prime children for events that are about to happen, comment on events that are in the process of occurring, or reflect on just-completed events. If there is more than one onscreen character, they should not talk over each other. Multiple characters should complement, rather than compete with or copy, one another. Finally, characters should not animate or talk constantly—such behavior will distract children’s attention away from important content or their own accomplishments. (See Reeves & Nass, 1996, for further discussion of the importance of characters and social conventions such as politeness in computer products.)

Allow children to control access to instructional information. Always allow children to terminate animations and interrupt audio with mouse clicks. Children may assume something is broken if their mouse clicks do not do anything. They do not have the patience to sit through lengthy instructions, and will not be able to absorb much of the information at any one time. Instructions can be repeated in the form of feedback for incorrect or irrelevant choices, or can be accessed by clicking help characters. This feature works well for speeding up game play.

Screen design

Design icons to be visually meaningful to children. The best icons for children are easily recognizable and familiar, representing items in their everyday world. For example, use doors for going "outside," and stop signs for stopping activities. Design icons and accompanying hotspots to be large to accommodate young children’s developing cursor control. A common rule is to make icons at least the size of a quarter. Make icons look "clickable" by using three-dimensional imagery. Use a Return button separately from a Quit button. When both are on the screen simultaneously, children tend to choose the Quit button to exit activities and then accidentally exit the program.
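The quarter-sized rule can be translated into pixels for a given display. A US quarter is 24.26 mm in diameter (a published coin specification); the 96-dpi display below is an assumption for illustration.

```python
import math

def min_icon_pixels(target_diameter_mm=24.26, screen_dpi=96.0):
    """Minimum icon size in pixels for a physical target diameter.

    A US quarter is 24.26 mm across; at a given dots-per-inch,
    pixels = mm / 25.4 * dpi, rounded up to a whole pixel.
    """
    return math.ceil(target_diameter_mm / 25.4 * screen_dpi)

print(min_icon_pixels())  # on an assumed 96-dpi display -> 92 pixels
```

On denser screens the pixel count scales proportionally, so the check is worth repeating for each target display.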

Use cursor design to help communicate functionality. The cursor optimally has three states: a resting state, a "hot" state when rolling over an active element, and a "wait" state during screen transitions. The wait cursor should use a symbol that children associate with time passing, such as the traditional hourglass, clock, or fingers counting down (make sure these are big enough to be identifiable for children). The cursor also may change into the active element when using sticky-drag-and-drop (when the element sticks to the cursor with the first click and then is placed with the second click) instead of click-and-drag interactivity. Make sure that the cursor is designed with a clearly defined point so children can tell how to activate hotspots. If a wait cursor is not used, find another way to communicate what the computer is doing during transitions. Audio can tell children that things are being set up. When children have not interacted with the computer for a lengthy period of time, signify that the program is waiting with some mild animation or audio (e.g., toe-tapping, humming).

Use rollover audio, animation, and highlighting to indicate where to find functionality. Hotspots on the screen can highlight or animate to indicate to children what is clickable and what is not. Navigational elements can animate on rollover to show children they have their cursor in the right place. Navigational elements can also have rollover audio that tells children what the function is (e.g., "quit"). Add a 0.1-0.5 second delay before rollover audio plays so children can use it deliberately. Otherwise they tend to hear the audio only after their cursor has already moved elsewhere, and they fail to make the connection.
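The rollover-audio delay amounts to a simple dwell-time check; in this sketch the 0.3-second threshold is one illustrative choice from the 0.1-0.5 second range above, and timestamps are assumed to come from a toolkit timer in seconds:

```python
# Sketch of the rollover-audio delay: audio plays only once the cursor
# has rested on the same hotspot past a threshold, so accidental passes
# stay silent. The 0.3 s value is illustrative, within the 0.1-0.5 s range.

ROLLOVER_DELAY_S = 0.3

def should_play_rollover_audio(enter_time, now, still_on_hotspot):
    """True once the cursor has dwelt on the hotspot long enough."""
    return still_on_hotspot and (now - enter_time) >= ROLLOVER_DELAY_S
```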

Working with Development Teams

Sound research methods and guidelines based on empirical observation are critical tools for making good usability recommendations to teams. However, even the very best of recommendations will not ensure usable software if a development team is unwilling or unable to incorporate them into the product at the right time. This section of the chapter discusses what we have learned about integrating with teams at Microsoft and suggests guidelines that can help to ensure that a usability engineer’s expertise really benefits the product.

There are several roles that a usability engineer may take on in working with a team. These roles typically grow and evolve over time. Initially a developmental psychologist in this position may contribute primarily through the collection of data and through the application of the developmental literature to design issues. Team members may even view a usability engineer as having a sort of administrative role in setting up studies to allow the team to have access to target users. As the engineer’s position within the team matures, however, he or she can take the lead in mapping out long-term research strategies. The usability engineer may also be a record keeper and disseminator of knowledge gained about users in current and previous design cycles. As roles and responsibilities become more varied, the engineer’s influence becomes stronger and more pervasive throughout the design cycle. The engineer may be asked to settle debates objectively, to find the most cost effective ways of getting usability engineering into a design, to take a hand in design, and so on.

Table 3 lists the general guidelines for working with product teams. The sections below describe several practices and approaches we have found to be effective in incorporating usability engineering into products and promoting the growth of a usability engineer’s role within a team. These guidelines can be used by usability engineers themselves or by team members who want to facilitate the integration of usability into their teams.

Table 3. Guidelines for working with teams.

General guideline: Be responsive to team needs

Implications:

  • Know what the product goals, technology limitations, and development schedules are and align your usability engineering work with them.
  • Develop effective working relationships with key decision-makers.
  • Be aware of how usability work impacts the team’s schedule and find ways to make usability fit into that timeframe rather than the other way around.
  • Be flexible and creative in aligning your work with team goals.
  • Use a variety of methods for getting the team to think deeply about usability issues.

General guideline: Carve a unique niche for usability engineering on the team

Implications:

  • Always speak from data, including your own data, that of other usability engineers, and research findings in the developmental and educational literature.
  • Avoid taking sides in political issues.

General guideline: Handle conflict constructively

Implications:

  • Try not to take things personally.
  • Seek a positive solution for everyone.
  • Whenever possible try to resolve conflicts without escalating them to higher levels.

Be responsive to team needs.

Most products have well-articulated goals. These goals usually fit into a certain time frame, overall business strategy, and set of technological constraints. Understanding and constantly monitoring the team’s goals, and the context in which those goals arise, is imperative for delivering the right recommendations at the right time to the right people. The key to doing this successfully is working directly with the team (rather than through an intermediary) and seeing oneself, and being seen, as a partner rather than a regulator. The guidelines below suggest ways of developing a partnering role.

Know what the product goals, technology limitations, and development schedules are and align your usability engineering work with them. Figure out, from reading specs, product visions, and schedules, and from attending meetings, what the critical usability work is and when it needs to be done. Listen hard and ask the right questions so as to gain a deep rather than a surface-level understanding of where the team is trying to go with the features it proposes. These practices, more than any others, can put you in a position to help the team realize its vision, which is quite different from merely being in a position to make the team aware of usability issues associated with a particular feature.

Develop effective working relationships with key decision-makers. Figure out who the critical decision-makers are and work with them. Note that a person’s job title will not always tell you whether he or she is a key decision-maker; you will generally have to rely on observations of team dynamics to determine this. Once you identify the key decision-makers, make sure they review study proposals, come to usability tests, and attend all results meetings.

Be aware of how usability work impacts the team’s schedule and find ways to make usability fit into that timeframe rather than the other way around. For example, take on the job of creating study materials (e.g., screen shots, prototypes, etc.) when designers and developers on the team are working on their own deadlines. Get study results to teams as quickly as possible (within two days is best), and conduct results meetings as efficiently as possible.

Be flexible and creative in aligning your work with team goals. Create practices around the unique circumstances of the team rather than adopting a one-size-fits-all routine. Be ready to revise plans if the situation calls for it: check another part of the design, for example, if a study is scheduled but the prototype will not be ready. Don’t panic if a certain prescribed approach does not or cannot work given the way a particular team functions. Always keep in mind that being flexible does not make you wishy-washy; rather, it means you differentiate between what is and is not important and are ready to take any route that leads to what is important.

Use a variety of methods for getting the team to think deeply about usability issues. Phrase your findings and recommendations in ways that the team can quickly understand. Suggest ways to resolve usability problems but watch for and encourage team members to think of even better ways to address the issue. Creating drawings that reflect the way suggestions could be implemented in design often helps spur constructive discussion and debate. Provide varied and ample opportunities for team members to have contact with users.

Carve a unique niche for usability engineering on the team.

As the roles of a usability engineer multiply, it is important to have a basic framework that ties these roles together. This is important for the team because it gives them a context within which to interpret the engineer’s input to the design. It is important to the engineer as well because it gives a consistent set of assumptions to fall back on for providing that input. The guidelines below suggest practices that have been important in helping us to carve our niches within teams.

Always speak from data, including your own data, that of other usability engineers, and research findings in the developmental and educational literature. Be prepared to back up your claims: show additional data, explain how a finding maps onto or replicates previous results, or reason from established principles. Whatever method you use, take the time to line up this support for your findings and recommendations. Do not be afraid to acknowledge when you do not have data on an issue and, if appropriate, suggest ways of getting some.

Avoid taking sides in political issues. This can hurt your standing on the team in at least two ways. First, you never know how the makeup of a team will change. The person you were against one day may be the same person you need to develop an alliance with the next. Second, showing a strong partisan bias will bring your overall objectivity into question. A better approach is to focus on finding an objective way of dealing with issues that pertain directly to your work and the usability of the product. Acknowledge that your role is to present the issues and not to determine the business case for addressing them. You are an unbiased party that provides the data to help the team make decisions.

Handle conflict constructively.

Conflict is bound to arise in any situation where groups with different backgrounds and values work together dynamically. This conflict may have to do with misunderstandings or disagreements about roles, procedures, or product goals. Even in the most professionally-run settings conflict can arise due to mismatches in personal style. Although conflict is usually considered something to be avoided, dealing directly with the issues that give rise to conflict can be a very constructive process, and one which could provide the usability engineer with opportunities for even greater integration with the team. The guidelines below suggest ways of handling conflict and potential conflicts constructively.

Try not to take things personally. Make sure you react out of concern for the product rather than concern for ownership. In some cases it may be wise to ask whether the reason for the conflict lies in your own working style or ability. The answer may give you valuable information, but make sure you are ready to hear it and use it constructively. Take time to think through how to present your concerns objectively, and ask questions that help you understand others’ perspectives on why the issue came up.

Seek a positive solution for everyone. Work out a mutually agreed upon way of proceeding. Avoid digging in your heels about a particular situation. Try to negotiate rather than maintain a position and prepare some constructive solutions to suggest.

Whenever possible try to resolve conflicts without escalating them to higher levels. Escalation usually adds fuel to the fire by making everyone feel threatened, and it makes the situation even more difficult. At the same time, you need to know when to walk away from a no-win situation or seek help from a supervisor.

In summary, usability engineers can help ensure that a product will benefit from their training, expertise, and research findings by nurturing the development of their position and reputation within a team. Although there is no set formula for doing this, we have found that working proactively to be responsive to team needs, to carve a unique and respected niche within the team, and to deal constructively with conflict when it arises have been effective for us.

It is important to note that the whole notion of usability engineers nurturing the development of their position on a team implies that they work on a particular product or set of products for an extended amount of time. This can influence work practices at several levels. At an organizational level, institutions that use queuing methods to distribute usability resources (e.g., engineers are assigned to isolated studies for whatever work requests come in) may want to consider changing to models that dedicate engineers to a specific product line. At an individual level, engineers facing a re-organization should try, if possible, to move to products where they have had good working relationships with key team members in the past. The foundation that engineers have established with these people will carry over to each new product, making them much more effective than they would have been if they had to start from scratch with a completely new team.

Conclusion

This chapter has presented the history and work practices of Kids Usability at Microsoft, and some lessons learned from our experiences. Establishing a strong and effective program of usability research has meant gaining a foothold in the corporate structure by providing professional expertise in a variety of ways and working well within product teams. We have found it invaluable to have a solid foundation in child development, research techniques, and business production cycles. Knowing how to gather the essential information is not enough; it is also crucial to know how and when to present it so it can be used. As we continue our research program, we will accumulate further guidelines to strengthen our contributions at the start of product design. We will refine usability research techniques to enable us to gather user data from children for all aspects of product design, including conceptual approaches to content development as well as specifics of navigation and game play. And we will strive to maintain children’s usability research as a viable and valuable business practice. We hope that others can use the information we have provided here to aid in their own development and evaluation of computer products for children.

References

Creative writer [Computer software]. (1994). Redmond, WA: Microsoft Corporation.

Creative writer 2 [Computer software]. (1996). Redmond, WA: Microsoft Corporation.

Druin, A., & Solomon, C. (1996). Designing multimedia environments for children. New York: John Wiley.

Dumas, J. S., & Redish, J. C. (1993). A practical guide to usability testing. Norwood, NJ: Ablex.

Hanna, L., Risden, K., & Alexander, K. J. (1997). Guidelines for usability testing with children. Interactions, 4(5), 9-14.

Haugland, S. W., & Shade, D. D. (1990). Developmental evaluations of software for young children. Albany, NY: Delmar.

Henninger, M. L. (1994). Software for the early childhood classroom: What should it look like? Journal of Computing in Childhood Education, 5, 167-175.

Kulik, C. C., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7, 75-94.

Lepper, M. R. (1988). Motivational considerations in the study of instruction. Cognition and Instruction, 5, 289-309.

Lieberman, D. A. (1997). Interactive video games for health promotion: Effects on knowledge, self-efficacy, social support, and health. In R. L. Street, Jr., W. R. Gold, & T. Manning (Eds.), Health promotion and interactive technology: Theoretical applications and future directions. Mahwah, NJ: Lawrence Erlbaum Associates.

The magic school bus explores in the age of the dinosaurs [Computer software]. (1996). Redmond, WA: Microsoft Corporation.

Malone, T. (1980). What makes things fun to learn? A study of intrinsically motivating computer games. Cognitive and Instructional Sciences Series (Research Report No. CIS-7 SSL-80-11). Palo Alto, CA: Xerox Palo Alto Research Center.

McGhee, P. E. (1971). Cognitive development and children’s comprehension of humor. Child Development, 42, 123-138.

My personal tutor preschool workshop [Computer software]. (1997). Redmond, WA: Microsoft Corporation.

Nielsen, J. (1993). Usability engineering. Chestnut Hill, MA: Academic Press.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.

Risden, K., Hanna, L., & Kanerva, A. (1997, April). Dimensions of intrinsic motivation in children’s favorite computer activities. Poster session presented at the meeting of the Society for Research in Child Development, Washington, DC.

Robertson, J. W. (1994). Usability and children’s software: A user-centered design methodology. Journal of Computing in Childhood Education, 5, 257-271.

Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New York: John Wiley.

Whalen, S., & Csikszentmihalyi, M. (1991). Putting flow into educational practice. Chicago, IL: University of Chicago. (ERIC Document Reproduction Service No. PS 019 952)

Wright, J., Shade, D. D., Thouvenelle, S., & Davidson, J. (1989). New directions in software development for young children. Journal of Computing in Childhood Education, 1, 45-57.

Figures for Color Plates Section

Figure 7. A screen shot of the Patterns activity in My Personal Tutor Preschool Workshop (©1997 Microsoft Corporation).

Figure 8. A child playing with ActiMates Barney (©1997 Microsoft Corporation).