
Usability Evaluation Essay

Usability Evaluation

Concept of Usability Evaluation

Heuristic Method

Issues in Usability Evaluation

Heuristic Evaluation Dimensions

The Evaluator

User Interfaces

Usability Problem Formats

Heuristic Evaluation Process

Inspection Phase

Identifying Usability Problems

Usability Problem Preparation Phase

Aggregation Phase

Procedure of Evaluation

Participants

The Static Web Interface

Observing and Quickly Visiting the Interface

Elaborating (Problems) and Revisiting (Interface and Materials)

Navigating the Interface

Annotating the Interface

Usability Evaluation

As part of the Web development process, Web developers are confronted with evaluating the usability of Web interfaces (i.e., Web sites and applications). Typically, a combination of manual methods and automatic tools is used for effective Web site evaluation; e.g., manual inspection is needed to supplement the results of automatic validation tools (Rowan 2000). However, Web projects are highly affected by their fast-paced life cycles, leaving little room for full evaluations. Other major factors contributing to this situation are the low budgets allocated to testing and the limited availability of usability experts.

Web developers need effective and inexpensive approaches to Web usability evaluation. Available automatic Web usability evaluation tools such as LIFT Online and LIFT Onsite (UsableNet 2002) and WebXACT (WatchFire 2007) have proven useful in finding syntactic problems. These include consistency problems, broken links, missing links back to the home page, and missing alternative descriptions of images (the ALT attribute in HTML), among others (Brajnik 2000). Problems of a semantic and pragmatic nature, however, are left out by automatic evaluation tools and need to be handled by other means. Farenc and collaborators (Farenc et al. 1996) explored the limitations of automatic usability evaluation tools. In analyzing 230 rules for ERGOVAL, their automatic usability evaluation tool for Windows systems, they found that at most 78% of the rules could be automated "whatever the implemented methods are." The remaining 22% require input from humans to provide information and resolve semantic and pragmatic conflicts.
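To make this concrete, here is a minimal sketch (Python, standard library only) of the kind of syntactic check such tools automate: flagging images with missing or empty ALT text and collecting links whose targets a broken-link checker would later verify. The sample HTML and the class name are illustrative assumptions, not taken from LIFT or WebXACT.

```python
# Minimal sketch of a syntactic usability check, assuming a simple
# HTML page as input. Flags <img> tags with missing/empty ALT text
# and gathers hrefs for a later broken-link verification pass.
from html.parser import HTMLParser


class SyntacticChecker(HTMLParser):
    """Collects syntactic usability problems while parsing HTML."""

    def __init__(self):
        super().__init__()
        self.problems = []  # human-readable problem reports
        self.links = []     # link targets to verify later

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.problems.append(
                f"Image '{attrs.get('src', '(no src)')}' has no ALT text")
        elif tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])


page = """<html><body>
<img src="logo.png">
<a href="/home">Home</a>
</body></html>"""

checker = SyntacticChecker()
checker.feed(page)
print(checker.problems)  # ["Image 'logo.png' has no ALT text"]
print(checker.links)     # ['/home']
```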

Usability problems that are not handled by automatic evaluation tools can be addressed with semi-automatic and manual approaches. In semi-automatic approaches, the identification of usability problems starts with an analysis of source files and is completed with human intervention to provide information, make decisions, or confirm problems. Three manual methods are typically used to find usability problems in user interfaces (Preece, 2002): a) usability testing, in which testers observe users performing tasks and report usability problems based on their observations; b) questionnaires and interviews, in which users are asked about their experience using a system, missing features, and overall satisfaction, among other matters; and c) inspection methods, in which experts examine user interfaces and report usability problems based on their judgment and expertise. The current paper reports a usability evaluation conducted by the author.
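The snippet below sketches the semi-automatic pattern just described, under the assumption of a simple confirm-or-reject workflow: candidate problems come from an automated analysis of source files, and a human resolves what the tool cannot decide. The candidate list and prompt loop are illustrative, not modeled on any particular tool.

```python
# Semi-automatic evaluation sketch: automatically derived candidate
# problems are shown to a human evaluator, who confirms or rejects
# each one. The candidates here are hypothetical.
candidates = [
    "Link text 'click here' is not descriptive",
    "Form field 'q' has no visible label",
]

confirmed = []
for candidate in candidates:
    answer = input(f"Real problem? [y/n] {candidate} ")
    if answer.strip().lower() == "y":
        confirmed.append(candidate)

print(f"{len(confirmed)} of {len(candidates)} candidates confirmed")
```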

2. Methodology

The first step was to characterize the inspection process in Heuristic Evaluation to understand it better and come up with different ways to support it. A user study in the laboratory was conducted to understand how evaluators apply Heuristic Evaluation on Web interfaces. The output of this step is a rough characterization of the process and tool requirements.

Tool requirements were identified from the literature, the study findings, and experience. Evaluators in the study were found to spend their time observing, annotating, and navigating the interface, as well as elaborating usability problems. Tools for inspection are proposed based on these activities.

Literature Review

Concept of Usability Evaluation

The concept of usability was defined in the field of human-computer interaction (HCI) in terms of the relationship between humans and computers. The International Organization for Standardization (ISO) proposed two definitions of usability, in ISO 9241 and ISO 9126. ISO 9241 defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-11, 1998). In ISO 9126, usability compliance is one of five product quality categories, in addition to understandability, learnability, operability, and attractiveness (ISO/IEC 9126, 2001). Usability depends on the interaction between user and task in a defined environment (Abran, Khelifi, Suryn, & Seffah, 2003; Bennett, 1984). Therefore, ISO 9126 defines usability as "the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions" (ISO/IEC 9126, 2001). While this definition focuses on ease of use, ISO 9241 uses the term "quality in use" to describe usability more broadly (Abran et al., 2003; Bevan, 2001) (Figure 1-1). "Quality in use" is defined as "the capability of the software product to enable specified users to achieve specified goals with effectiveness, productivity, safety, and satisfaction...

Thus, usability is evaluated through the interaction of user, system, and task in a specified setting (Bennett, 1984). The socio-technical perspective also indicates that the technical features of health IT interact with the social features of a healthcare work environment (Ash et al., 2007; Reddy, Pratt, Dourish, & Shabot, 2003). The meaning of usability should therefore comprise four major components: user, tool, task, and environment (Figure 1-2) (Bennett, 1984).
Heuristic Method

This section discusses Heuristic Evaluation in detail. It supplements other major Heuristic Evaluation surveys (Cox 1998; Dykstra 1993; Woolrych 2001). Its focus, however, is on the Heuristic Evaluation process and tool support. The reader will find software requirements highlighted throughout the section.

Heuristic Evaluation is an inspection method proposed by Nielsen and Molich (1990). It follows the "discount" philosophy, in which simplified versions of traditional methods are employed (e.g., discount usability testing not requiring elaborate laboratory setups). It consists of having a small number of evaluators independently examine a user interface in search of usability problems. Evaluators then collaborate to aggregate all usability problems. During interface inspection, evaluators use a set of usability principles, known as "heuristics," as a guide to focus on common problem areas in user interfaces. An example of such a heuristic is "Help users recognize, diagnose, and recover from errors" (Nielsen 2005b). Interface features that violate the heuristics are reported as usability problems.

Only a couple of tools have been developed to assist evaluators in Heuristic Evaluation. Problem aggregation has been supported (Cox 1998). The intent was not to automate the aggregation process, but rather to support evaluators in the manual work of problem aggregation. This includes identifying unique problems, discarding duplicates, and merging descriptions using affinity diagrams (Snyder 2003). There has also been some effort toward semi-automating problem identification in Heuristic Evaluation, though it is a formal, application-dependent approach: Loer and Harrison (2000) developed a system that queries a model checker to search for potential usability problems in user interfaces.

Heuristic Evaluation is a simple method used to discover usability problems in user interfaces. It consists of having a small set of evaluators individually examine a user interface and judge its compliance with recognized usability principles called "heuristics." The evaluators' lists of potential usability problems are then aggregated into a single usability report. The report is presented to members of the development team, who agree on usability problem fixes and priorities. Figure 1 depicts the overall Heuristic Evaluation process.

Figure 1-Heuristic Evaluation Overview
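As one illustration of how an evaluator's findings might be recorded during inspection, the sketch below defines a minimal problem record with the fields the process above implies: a description, the violated heuristic, a location, and a severity slot filled in later during aggregation. The field names are assumptions, not a standard problem format.

```python
# A minimal, hypothetical record for one usability problem found
# during a Heuristic Evaluation inspection.
from dataclasses import dataclass


@dataclass
class UsabilityProblem:
    description: str   # what the evaluator observed
    heuristic: str     # which heuristic the feature violates
    location: str      # where in the interface it occurs
    severity: int = 0  # 0-4, assigned later when lists are aggregated


problem = UsabilityProblem(
    description="Error page gives no hint about how to recover",
    heuristic="Help users recognize, diagnose, and recover from errors",
    location="/checkout/error",
)
print(problem)
```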

Nielsen offers recommendations for conducting a Heuristic Evaluation (Nielsen 2005a, 1994a). A typical Heuristic Evaluation session lasts two hours. The evaluation can start with two passes of the user interface: a first pass to get a general idea of the design and overall interaction, and a second pass in which evaluators focus on particular parts. The heuristics are meant to help identify usability problems: with them in mind, evaluators carefully examine an interface and report interface features noticed to violate them.

The output of a Heuristic Evaluation is a list of potential usability problems. The lists generated by all evaluators are aggregated: evaluators meet to identify duplicates, combine problem descriptions, suggest solutions, and possibly rate problem severity so fixes can be prioritized. Nielsen recommends using a 0-4 severity rating scale (Nielsen 1995b).

Table 1-Nielsen's Severity Rating Scale (Nielsen 1995b)

"0 = I don't agree that this is a usability problem at all"

"1 = Cosmetic problem only: need not be fixed unless extra time is available on project"

"2 = Minor usability problem: fixing this should be given low priority"

"3 = Major usability problem: important to fix, so should be given high priority"

"4 = Usability catastrophe: imperative to fix this before product can be released"

Several Heuristic Evaluation dimensions can be identified from the description above: the heuristics that are used to guide the inspection, evaluators performing the inspection, the user interface that is being evaluated, and the process that is followed. These are discussed immediately below.

Issues in Usability Evaluation

The first idea of a tool for Heuristic Evaluation looked like a combination of a logging tool to keep track of usability problems, and…

References

Bailey, R.W., Allan, R.W., and Raiello, P. (1992). Usability Testing vs. Heuristic Evaluation: A Head-to-Head Comparison. In Proceedings of the Human Factors Society 36th Annual Meeting. Human Factors Society, Santa Monica, CA, 409-413.

Bevan, N., Barnum, C., Cockton, G., Nielsen, J., Spool, J., and Wixon, D. (2003). The "Magic Number 5": Is it Enough for Web Testing? In CHI'03 Extended Abstracts on Human Factors in Computing Systems, Cockton, G., and Korhonen, P., Eds. ACM, New York, NY, 698-699.

Brajnik, G. (2000). Automatic Web Usability Evaluation: What Needs to be Done? In Proceedings of the 6th Conference on Human Factors and the Web, Austin, Texas. http://www.tri.sbc.com/hfweb/brajnik/hfweb-brajnik.html.

Chaudhary, A. (2008). Video Annotation Tools. Master's thesis, Department of Computer Science, Texas A&M University, College Station, TX.
Instone, K. (1997). Site Usability Heuristics. http://instone.org/heuristics/.
Nielsen, J. (2005b). Ten Usability Heuristics. http://www.useit.com/papers/heuristic/heuristic_list.html.
Usablenet. (2002). LIFT Online and LIFT Onsite. http://www.usablenet.com/.
Watchfire. (2007). WebXACT. http://webxact.watchfire.com/.
