Evaluation involves techniques to assess application functionality, verify the impact of the interface on users, and identify intermittent problems that might emerge. Evaluation can be performed either as formative evaluation (which takes place during design) or summative evaluation (which takes place after the product has been developed). There are typically two broad categories of evaluation methods that can be used at different stages of product development—user testing and usability inspection methods. It is to be noted that all evaluations should be conducted with the final users of the application and not with other stakeholders such as developers or website owners. Let us study the two evaluation techniques—

  1. User testing: It involves observing and analyzing user performance on a certain set of tasks, collecting empirical data, and thereafter improving the application. Typical areas measured are task time, number of errors, user opinions, user satisfaction, etc. Steps for robust usability testing include:
    • Defining testing goals: The objectives of testing should be clearly defined, whether they pertain to broader functionality testing or just to checking the landing page content.
    • Choosing sample for testing: The sample chosen should exhaustively cover different population types in terms of user experience, age, frequency of use, etc.
    • Selecting tasks and scenarios: The tasks for testing should be representative of the actual tasks performed on the application.
    • Defining measurement parameters: Measures deployed should be a mix of subjective (effectiveness, user satisfaction) and objective ones (task time, errors).
    • Preparing experimental environment: The environment should include equipment like computer/video, test audience, supporting material, etc.
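Once a test session is run, the measurement parameters above can be aggregated very simply. The following is a minimal sketch; the data structure and field names (`task_time`, `errors`, `satisfaction`) are illustrative assumptions, not prescribed by any standard.

```python
from statistics import mean

# Hypothetical per-participant results from one usability test task:
# completion time in seconds (objective), error count (objective),
# and a 1-5 satisfaction rating (subjective).
sessions = [
    {"participant": "P1", "task_time": 74.2, "errors": 1, "satisfaction": 4},
    {"participant": "P2", "task_time": 102.5, "errors": 3, "satisfaction": 2},
    {"participant": "P3", "task_time": 58.9, "errors": 0, "satisfaction": 5},
]

def summarize(sessions):
    """Aggregate objective (time, errors) and subjective (satisfaction) measures."""
    return {
        "mean_task_time": mean(s["task_time"] for s in sessions),
        "mean_errors": mean(s["errors"] for s in sessions),
        "mean_satisfaction": mean(s["satisfaction"] for s in sessions),
    }

summary = summarize(sessions)
print(summary)
```

In practice such summaries are compared against predefined targets (e.g., "90% of users complete the task in under two minutes") set during the goal-definition step.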
  2. Inspection methods: These involve predicting usability problems that would typically surface later, during user testing. Key methods include:
    • Heuristic evaluation: It involves having a set of experts analyze the application against a list of recognized usability principles. Each evaluator goes through the interface at least twice, first to get a feel for the interaction and then to focus on specific objectives and functionality, evaluating them against the list of heuristics.
    • Cognitive walkthrough: The users' problem-solving process is simulated, that is, what the users will do in specific situations and why they would do so.
    • Web usage analysis: It refers to analyzing user browsing patterns through access data recorded in the web server log as users visit application pages.
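Web usage analysis can begin with something as simple as counting successful page requests per path in the server log. The sketch below assumes logs in the Common Log Format; the sample lines are fabricated for illustration.

```python
import re
from collections import Counter

# Regex for the Common Log Format:
# host ident authuser [timestamp] "method path protocol" status bytes
LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

sample_log = [
    '10.0.0.1 - - [12/Mar/2024:10:01:44 +0000] "GET /home HTTP/1.1" 200 512',
    '10.0.0.2 - - [12/Mar/2024:10:02:10 +0000] "GET /checkout HTTP/1.1" 200 734',
    '10.0.0.1 - - [12/Mar/2024:10:02:31 +0000] "GET /home HTTP/1.1" 200 512',
]

def page_hits(lines):
    """Count successful (2xx) requests per path: a crude browsing-pattern summary."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("status").startswith("2"):
            hits[m.group("path")] += 1
    return hits

hits = page_hits(sample_log)
print(hits.most_common())
```

Richer analyses (session reconstruction, navigation paths, drop-off points) build on the same log data by grouping requests per host and time window.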

Apart from the methods shared above, designers also deploy automatic tools to support their evaluation in important areas like accessibility analysis, usability analysis, and web usage analysis. These tools handle the most repetitive evaluation tasks effectively, without the time and investment required by user testing and inspection methods.
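As an illustration of what such automatic tools do, one common accessibility check is flagging images that lack alternative text. This is a minimal sketch of that single check using Python's standard-library HTML parser, not a substitute for a full accessibility tool.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count <img> tags without an alt attribute: one basic accessibility check."""

    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Logo"><img src="banner.png"></p>')
print(checker.missing_alt)  # the second image lacks alt text
```

Real tools run dozens of such checks (contrast ratios, form labels, heading structure) across every page automatically, which is exactly the repetitive work that is costly to do by hand.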

