'''Usability testing''' is a technique used in [[user-centered design|user-centered]] [[interaction design]] to evaluate a product by testing it on users. This can be seen as an irreplaceable [[usability]] practice, since it gives direct input on how real users use the system.<ref>Nielsen, J. (1994). Usability Engineering, Academic Press Inc, p 165</ref> This is in contrast with [[usability inspection]] methods where experts use different methods to evaluate a user interface without involving users.
 
Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are [[food]]s, consumer products, [[web design|web sites]] or web applications, [[user interface|computer interfaces]], documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general [[human-computer interaction]] studies attempt to formulate universal principles.
 
==Goals of usability testing==
 
Usability testing aims to uncover problems in the design of a product, discover opportunities to improve it, and learn about the behavior and preferences of the target users. As noted above, it does so by measuring how well representative users can complete representative tasks with the product, typically in terms of how effectively, how quickly, and how satisfactorily they do so.
 
==What usability testing is not==
Simply gathering opinions on an object or document is [[market research]] or [[qualitative research]] rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product.<ref>http://jerz.setonhill.edu/design/usability/intro.htm</ref> However, qualitative research and usability testing are often used in combination, to better understand users' motivations and perceptions in addition to their actions.
 
Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to ''use'' something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the parts and materials, they are asked to put the toy together. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.
 
==Methods==
Setting up a usability test involves carefully creating a [[scenario]], or realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes. Several other test instruments such as scripted instructions, [[paper prototypes]], and pre- and post-test questionnaires are also used to gather feedback on the product being tested. For example, to test the attachment function of an [[e-mail]] program, a scenario would describe a situation where a person needs to send an e-mail attachment, and ask him or her to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can identify problem areas as well as what people like. Techniques popularly used to gather data during a usability test include [[think aloud protocol]], [[Co-discovery Learning]] and [[eye tracking]].
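
As an illustration only, the skeleton of such a session can be sketched in a few lines of code; the structure and names below are hypothetical, not taken from any particular testing tool.

<syntaxhighlight lang="python">
# Hypothetical sketch of recording scenario-based test observations;
# all names and fields here are illustrative, not from a real tool.
from dataclasses import dataclass, field
import time

@dataclass
class TaskResult:
    task: str                  # e.g. "Send an e-mail with an attachment"
    completed: bool            # did the participant finish the task?
    seconds: float             # time spent on the task
    notes: list = field(default_factory=list)  # observer notes

def run_task(description, perform):
    """Time one task; perform() returns True if the participant completed it."""
    start = time.monotonic()
    done = perform()
    return TaskResult(description, done, time.monotonic() - start)

result = run_task("Send an e-mail with an attachment", lambda: True)
print(result.task, result.completed, round(result.seconds, 3))
</syntaxhighlight>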
 
===Hallway testing===
 
'''Hallway testing''' (or '''hall intercept testing''') is a general [[method]] of usability testing. Rather than using an in-house, trained group of testers, just five to six [[random]] people are brought in to test the product or service. The name of the technique refers to the fact that the testers should be random people who pass by in the hallway.<ref name="useit">{{cite web|url=http://www.useit.com/alertbox/20000319.html|title=Usability Testing with 5 Users (Jakob Nielsen's Alertbox)|publisher=useit.com|date=2000-03-13}}; references {{cite conference |url=http://dl.acm.org/citation.cfm?id=169166&CFID=159890676&CFTOKEN=16006386 |title=A mathematical model of the finding of usability problems |author=Jakob Nielsen, Thomas K. Landauer |date=April 1993 |booktitle=Proceedings of ACM INTERCHI'93 Conference (Amsterdam, The Netherlands, 24–29 April 1993)}}</ref>
 
Hallway testing is particularly effective in the early stages of a new design when the designers are looking for "brick walls," problems so serious that users simply cannot advance.  Anyone of normal intelligence other than designers and engineers can be used at this point.  (Both designers and engineers immediately turn from being test subjects into being "expert reviewers." They are often too close to the project, so they already know how to accomplish the task, thereby missing ambiguities and false paths.)
 
===Remote usability testing===
 
In a scenario where usability evaluators, developers and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user’s other tasks and technology, can be either synchronous or asynchronous. Synchronous usability testing methodologies involve video conferencing or employ remote application-sharing tools such as WebEx. The former involves real-time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately.<ref>{{cite journal |doi=10.1145/1240624.1240838 |chapter=What happened to remote usability testing? |title=Proceedings of the SIGCHI conference on Human factors in computing systems  - CHI '07 |year=2007 |last1=Andreasen |first1=Morten Sieker |last2=Nielsen |first2=Henrik Villemann |last3=Schrøder |first3=Simon Ormholt |last4=Stage |first4=Jan |isbn=9781595935939 |page=1405}}</ref>
 
Asynchronous methodologies include automatic collection of the user’s click streams, user logs of critical incidents that occur while interacting with the application, and subjective feedback on the interface by users.<ref>{{cite journal|doi=10.1145/971258.971264|title=Remote possibilities?|year=2004|last1=Dray|first1=Susan|last2=Siegel|first2=David|journal=Interactions|volume=11|issue=2|page=10}}</ref> Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform records clicks and task times. For many large companies, this helps reveal why visitors behave as they do on a website or mobile site. This style of testing also provides an opportunity to segment feedback by demographic, attitudinal and behavioural type. The tests are carried out in the user’s own environment (rather than in a lab), which helps further simulate real-life use. This approach also provides a vehicle to easily and quickly solicit feedback from users in remote areas, with lower organisational overheads.
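
A minimal sketch of the kind of interaction record such a platform might store is shown below; the field names and event types are assumptions for illustration, not any vendor's actual format.

<syntaxhighlight lang="python">
# Illustrative click-stream logger for asynchronous remote testing;
# the record layout below is an assumption, not a real platform's format.
import json
import sys
import time

def log_event(stream, participant, task, event, target=None):
    """Append one interaction event as a line of JSON."""
    stream.write(json.dumps({
        "participant": participant,
        "task": task,
        "event": event,      # e.g. "task_start", "click", "task_end"
        "target": target,    # e.g. the UI element that was clicked
        "timestamp": time.time(),
    }) + "\n")

log_event(sys.stdout, "p01", "send-attachment", "task_start")
log_event(sys.stdout, "p01", "send-attachment", "click", "attach-button")
log_event(sys.stdout, "p01", "send-attachment", "task_end")
# Task time is derived offline as timestamp(task_end) - timestamp(task_start).
</syntaxhighlight>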
 
Numerous tools are available to address the needs of both these approaches. WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test.<ref>http://www.boxesandarrows.com/view/remote_online_usability_testing_why_how_and_when_to_use_it</ref> However, synchronous remote testing may lack the immediacy and sense of “presence” desired to support a collaborative testing process. Moreover, managing inter-personal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants in their native environments.<ref>{{cite journal|last=Dray|first=Susan|coauthors=Siegel, David|title=Remote possibilities?: international usability testing at a distance|journal=Interactions|date=March 2004|volume=11|pages=10–17|doi=10.1145/971258.971264|issue=2}}</ref> One of the newer methods developed for conducting a synchronous remote usability test is the use of virtual worlds.<ref>{{cite journal|last=Chalil Madathil|first=Kapil|coauthors=Joel S. Greenstein|title=Synchronous remote usability testing: a new approach facilitated by virtual worlds|journal=Proceedings of the 2011 annual conference on Human factors in computing systems|date=May 2011|series=CHI '11|pages=2225–2234|doi=10.1145/1978942.1979267|isbn=9781450302289}}</ref>
 
===Expert review===
 
Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product.
 
A [[Heuristic evaluation]] or '''Usability Audit''' is an evaluation of an interface by one or more Human Factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on 10 usability heuristics originally defined by Jakob Nielsen in 1994.<ref>{{cite web|title=Heuristic Evaluation|url=http://www.usabilityfirst.com/usability-methods/heuristic-evaluation/|publisher=Usability First|accessdate=April 9, 2013}}</ref>
 
Nielsen’s Usability Heuristics, which have continued to evolve in response to user research and new devices, include:
* Visibility of System Status
* Match Between System and the Real World
* User Control and Freedom
* Consistency and Standards
* Error Prevention
* Recognition Rather Than Recall
* Flexibility and Efficiency of Use
* Aesthetic and Minimalist Design
* Help Users Recognize, Diagnose, and Recover from Errors
* Help and Documentation
 
===Automated expert review===
 
Similar to expert reviews, '''automated expert reviews''' provide usability testing through the use of programs given rules for good design and heuristics. Though an automated review might not provide as much detail and insight as a review by people, it can be finished more quickly and more consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the Artificial Intelligence community.
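
For illustration, the sketch below shows the flavor of such mechanical checks, using two toy rules loosely inspired by common usability heuristics; real tools apply far larger and more sophisticated rule sets.

<syntaxhighlight lang="python">
# Toy automated review: two mechanical rules loosely inspired by
# usability heuristics. Real tools use far more elaborate rule sets.
from html.parser import HTMLParser

class HeuristicChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.findings.append("image missing alt text")
        if tag == "a" and not attrs.get("href"):
            self.findings.append("link without a destination")

checker = HeuristicChecker()
checker.feed('<img src="logo.png"><a>orphan link</a>')
print(checker.findings)  # ['image missing alt text', 'link without a destination']
</syntaxhighlight>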
 
===A/B testing===
 
In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design), which aims to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might impact a user's behavior. Version A might be the currently used version, while Version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements like copy text, layouts, images and colors. Multivariate testing or bucket testing is similar to A/B testing, but tests more than two different versions at the same time.
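
A minimal sketch of the mechanics, with made-up numbers: users are deterministically bucketed into version A or B, and the two conversion rates are then compared, here with a two-proportion z-test (one common choice among several).

<syntaxhighlight lang="python">
# Minimal A/B assignment and evaluation sketch; all numbers are made up.
import hashlib
import math

def assign_variant(user_id):
    """Deterministically assign a user to A or B by hashing their id."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "A" if digest % 2 == 0 else "B"

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for conversion counts conv_* out of n_* users."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

print(assign_variant("user-42"))      # stable bucket for this user
print(z_score(120, 1000, 150, 1000))  # ~1.96; |z| > 1.96 is significant at ~5%
</syntaxhighlight>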
 
==How many users to test?==
 
In the early 1990s, [[Jakob Nielsen (usability consultant)|Jakob Nielsen]], at that time a researcher at [[Sun Microsystems]], popularized the concept of using numerous small usability tests—typically with only five test subjects each—at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford."<ref name="useit" /> Nielsen subsequently published his research and coined the term [[heuristic evaluation]].
 
The claim that "five users are enough" was later described by a mathematical model,<ref>Virzi, R.A., Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough? Human Factors, 1992. 34(4): p. 457-468.</ref> which states, for the proportion of uncovered problems U,
 
<math>U = 1-(1-p)^n</math>
 
where p is the probability of one subject identifying a specific problem and n the number of subjects (or test sessions). As n grows, the curve rises steeply at first and then flattens, asymptotically approaching the discovery of all existing problems (see figure below).
 
[[Image:Virzis Formula.PNG]]
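
As a worked example, Nielsen and Landauer report an average per-user discovery rate of about 31% across their projects;<ref name="useit" /> with five users the model then predicts

<math>U = 1-(1-0.31)^5 \approx 0.84,</math>

i.e. roughly 85% of the problems, which is the basis of the "five users" recommendation.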
 
In later research, Nielsen's claim has repeatedly been questioned with both [[empirical]] evidence<ref>http://citeseer.ist.psu.edu/spool01testing.html</ref> and more advanced [[mathematical model]]s.<ref>Caulton, D.A., Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 2001. 20(1): p. 1-7</ref> Two key challenges to this assertion are:
# Since usability is tied to the specific set of users, such a small sample size is unlikely to be representative of the total population, so the data from the test is more likely to reflect the sample group than the population it is meant to represent.
# Not every usability problem is equally easy to detect. Intractable problems slow down the overall process, and under these circumstances progress is much shallower than the Nielsen/Landauer formula predicts (see the sketch after this list).<ref>Schmettow, Heterogeneity in the Usability Evaluation Process. In: M. England, D. & Beale, R. (ed.),  Proceedings of the HCI 2008, British Computing Society, 2008, 1, 89-98</ref>
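
A minimal sketch of the second objection, with made-up discovery rates: because <math>1-(1-p)^n</math> is concave in p, a mix of easy and hard problems whose rates average the same p yields fewer discoveries than the single-rate formula predicts.

<syntaxhighlight lang="python">
# Heterogeneity sketch: the per-problem discovery rates are made up.
# A mix of easy and hard problems with average rate 0.31 is found less
# often after five sessions than the single-rate model predicts.
n = 5
p_avg = 0.31
print(1 - (1 - p_avg) ** n)  # homogeneous model: ~0.84

rates = [0.61, 0.60, 0.30, 0.02, 0.02]  # five problems, mean rate 0.31
print(sum(1 - (1 - p) ** n for p in rates) / len(rates))  # ~0.60
</syntaxhighlight>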
 
It is worth noting that Nielsen does not advocate stopping after a single  test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice per week during the entire development cycle, using three to five test subjects per round, and with the results delivered within 24 hours to the designers.  The number of users actually tested over the course of the project can thus easily reach 50 to 100 people.
 
In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers will recruit test subjects across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly.<ref>{{cite web|url=http://www.asktog.com/columns/000maxscrns.html|author=Bruce Tognazzini|title=Maximizing Windows}}</ref> Later on, as the design smooths out, users should be recruited from the target population.
 
When the method is applied to a sufficient number of people over the course of a project, the objections raised above are addressed: the sample size ceases to be small, and usability problems that arise only with occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again, because they are immediately eliminated, while the parts that appear successful are tested over and over. While the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.
 
==See also==
{{portal|Software Testing}}
* [[ISO 9241]]
* [[Software testing]]
* [[Educational technology]]
* [[Universal usability]]
* [[Commercial eye tracking]]
* [[Don't Make Me Think]]
* [[Software performance testing]]
* [[System Usability Scale|System Usability Scale (SUS)]]
* [[Test method]]
* [[Tree testing (information architecture)|Tree testing]]
* [[RITE Method]]
* [[Component-Based Usability Testing]]
* [[Crowdsource testing]]
* [[Usability goals]]
* [[Heuristic evaluation]]
* [[Diary studies in user research|Diary studies]]
 
==References==
<references />
 
==External links==
* [http://www.usability.gov/ Usability.gov]
* [http://www.measuringusability.com/blog/five-history.php A Brief History of the Magic Number 5 in Usability Testing]
 
{{Product testing}}
 
{{DEFAULTSORT:Usability Testing}}
[[Category:Usability]]
[[Category:Software testing]]
[[Category:Educational technology]]
[[Category:Evaluation methods]]
[[Category:Tests]]
[[Category:Product testing]]
