Human Computer Interaction
HCI lecturer 1.pdf
# Course introduction and structure
This section provides an overview of the Human-Computer Interaction (HCI) course, detailing its modules, recommended resources, coursework, reading materials, and tutorial format, while also drawing a distinction between this course and a more practically oriented one [7](#page=7) [8](#page=8) [9](#page=9).
### 1.1 Course overview and comparison
The Human-Computer Interaction course is presented alongside an alternative, more practically applied class, to highlight its specific focus and approach [8](#page=8).
#### 1.1.1 Human-Computer Interaction (HCI) course
* **Emphasis:** Focuses on the theoretical grounding of HCI and working with users [8](#page=8).
* **Content:** More theoretical with some practical application [8](#page=8).
* **Prerequisites:** No programming knowledge is assumed [8](#page=8).
* **Assessment:** 100% coursework [8](#page=8).
#### 1.1.2 Practically applied class
* **Emphasis:** How to build and test a user interface [8](#page=8).
* **Content:** Practical and applied [8](#page=8).
* **Prerequisites:** Programming experience is assumed [8](#page=8).
* **Assessment:** 30% coursework and 70% exam [8](#page=8).
### 1.2 Course modules
The course is structured into three main modules [10](#page=10):
* Design requirements gathering [10](#page=10).
* Designing an interface [10](#page=10).
* Evaluating an interface [10](#page=10).
### 1.3 Recommended books
Two key books are recommended for the course [11](#page=11):
* A quick guide to common methodologies [11](#page=11).
* A practical guide to building and testing usable interfaces [11](#page=11).
### 1.4 Coursework assignments
There are two main coursework assignments (CW) [12](#page=12):
* **CW1: Prototype a smart refrigerator app**
* Groups of size two [12](#page=12).
* Students must decide on the tasks the app will support [12](#page=12).
* A functional prototype is to be created using Processing [12](#page=12).
* **CW2: Evaluate an app**
* Groups of size two [12](#page=12).
* Each group is randomly assigned another group’s prototype from CW1 [12](#page=12).
* The assignment involves evaluating the usability of the assigned prototype [12](#page=12).
### 1.5 Readings
Reading materials are categorized into three types [13](#page=13):
* **Short readings:**
* Typically two pages per methodology [13](#page=13).
* Expected to be known for the exam [13](#page=13).
* Should take less than ten minutes to read [13](#page=13).
* **Long readings:**
* Provide comprehensive coverage of topics [13](#page=13).
* Clarify slide material [13](#page=13).
* **Supplemental readings:**
* Offer additional information for interested students [13](#page=13).
### 1.6 Tutorials
Tutorials commence in the third week of the course [14](#page=14).
* **Focus:** Hands-on application of methodologies [14](#page=14).
* **Content:** Includes working through sample exam questions [14](#page=14).
> **Tip:** Pay close attention to the short readings, as they are explicitly mentioned as likely exam content [13](#page=13).
>
> **Tip:** The tutorials offer a valuable opportunity to practice applying course concepts and prepare for the exam by working through sample questions [14](#page=14).
* * *
# The design process in HCI
The design process in HCI outlines a systematic approach to creating user-centered interactive systems.
## 2. The design process in HCI
The design process in Human-Computer Interaction (HCI) is a structured, iterative approach to developing interactive systems. It typically involves several distinct phases, from initial conception to post-launch monitoring. Methodologies and techniques can be applied at various stages to support the design efforts [17](#page=17) [18](#page=18) [20](#page=20).
### 2.1 Core phases of the design process
While specific models may vary, a common understanding of the design process includes the following key stages:
#### 2.1.1 Planning, scoping, and definition
This initial phase focuses on understanding what needs to be achieved and defining the project's scope. The central question is: "What do we want to do?". This involves identifying user needs and system requirements [17](#page=17) [18](#page=18) [19](#page=19).
#### 2.1.2 Exploration, synthesis, and design implications
Following the definition phase, this stage explores the feasibility and implications of potential solutions. It asks: "Would it work? Would it solve the problem?". This involves analyzing the current state versus the desired state and can include methods like scenario creation and task analysis [17](#page=17) [19](#page=19).
#### 2.1.3 Concept generation
This phase involves creating tangible representations of design ideas to test and refine them. The core activity here is to "create a prototype and try it out" [17](#page=17).
#### 2.1.4 Evaluation, refinement, and production
Once concepts are generated, they are built, tested, and improved. This stage is about building the system, testing its usability and effectiveness, and fixing identified issues. Methods such as heuristic evaluation and architectural design are relevant here [17](#page=17) [19](#page=19).
#### 2.1.5 Launch and monitor
The final stage involves releasing the system to users and continuously observing its performance in the real world. Ongoing review and updates are crucial to ensure the system remains effective and relevant. This phase also includes providing documentation and help resources for users [17](#page=17) [19](#page=19).
### 2.2 Alternative process models
Some models present a slightly different breakdown of the design process, often emphasizing a cyclical or iterative nature. One such model includes:
* **What is wanted/needed:** Similar to planning and definition.
* **Analysis:** Understanding the problem space, user needs, and existing solutions.
* **Design:** Developing the conceptual and detailed design of the system.
* **Prototyping:** Creating early versions of the system for testing.
* **Implement and deploy:** Building and releasing the final product [18](#page=18).
This model also visually integrates specific design activities and methods within these phases, such as task analysis during analysis, precise specifications and dialog notations during design, and documentation and help during implementation [19](#page=19).
> **Tip:** The Universal Methods of Design often lists which design phases a particular method can be used in, aiding in the selection of appropriate tools at each step [17](#page=17) [20](#page=20).
* * *
# User skill levels and technical proficiency
This section outlines the varying degrees of technical skill possessed by average computer users, categorizing them by their ability to perform tasks [3](#page=3).
### 3.1 Understanding user skill levels
Each computer skill level can be defined by the types of tasks that individuals at that level can successfully accomplish. Understanding these varying proficiencies is crucial for designing effective user interfaces and experiences [5](#page=5).
### 3.2 Defining skill levels
The document categorizes user skill levels as follows, with descriptions and examples for each:
#### 3.2.1 Below Level 1 (14% of adults)
Users at this level can complete only well-defined tasks that involve a single function on a generic interface and very few steps [5](#page=5).
> **Example:** "Delete this email message" in an email application [5](#page=5).
#### 3.2.2 Level 1 (29% of adults)
This level involves widely available programs, requiring minimal navigation, a few steps, and a limited number of operations [5](#page=5).
> **Example:** "Find all emails from John Smith" [5](#page=5).
#### 3.2.3 Level 2 (26% of adults)
Users at Level 2 can utilize both generic and specific applications. They can navigate across pages, perform multiple steps, and may need to define their own goals. Unexpected outcomes or impasses are more likely to occur at this level [5](#page=5).
> **Example:** "Find a sustainability-related document that was sent to you by John Smith in October last year." [5](#page=5).
#### 3.2.4 Level 3 (5% of adults)
This advanced level also involves generic and specific applications, navigation across pages, and multiple steps. Users at Level 3 often need to define their own goals, and the tasks place high demands on monitoring progress. Unexpected outcomes or impasses are even more likely to occur than at Level 2 [5](#page=5).
> **Example:** "You want to know what percentage of the emails sent by John Smith last month were about sustainability." [5](#page=5).
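The Level 3 example illustrates why such tasks are demanding: the user must define the goal themselves and decompose it into several filtering and classification operations. A minimal sketch of that decomposition over a hypothetical mailbox (the records and topic-matching rule are invented for illustration):

```python
# Hypothetical mailbox records: (sender, month, subject)
emails = [
    ("John Smith", "2024-05", "Sustainability report draft"),
    ("John Smith", "2024-05", "Lunch on Friday?"),
    ("John Smith", "2024-05", "Sustainability targets"),
    ("Jane Doe",   "2024-05", "Sustainability survey"),
]

# The Level 3 task decomposed: filter by sender and month,
# classify each email by topic, then compute the percentage.
last_month = [subject for sender, month, subject in emails
              if sender == "John Smith" and month == "2024-05"]
about_topic = [s for s in last_month if "sustainability" in s.lower()]
percentage = 100 * len(about_topic) / len(last_month)
print(round(percentage, 1))  # 66.7
```

Each intermediate list corresponds to one of the "multiple steps" the skill-level description mentions; a user who cannot plan and monitor this chain of steps cannot complete the task.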
### 3.3 Implications for design
Understanding these skill levels is fundamental for designing interfaces that are accessible and usable for the broadest range of users, from those with minimal technical abilities to those with advanced capabilities. Designing for the lowest common denominator often means alienating more proficient users, while designing solely for advanced users can exclude novices. Therefore, a nuanced approach considering these distinct skill levels is essential for successful human-computer interaction design [3](#page=3) [5](#page=5).
* * *
# Case study: Redesigning app permission screens
This case study details a Master's project focused on redesigning Android app permission screens to enhance user comprehension of permission usage in context [21](#page=21).
### 4.1 Problem definition and context
The core problem identified is that users often do not understand the implications of app permissions, so they cannot make informed decisions about which permissions to grant. A related sub-problem is that most users lack sufficient knowledge about permissions to actively worry about them. The goal was to create a new permission screen that leverages static analysis tool output to clarify the context in which permissions are utilized by an app [24](#page=24) [25](#page=25).
### 4.2 Static analysis tools
Static analysis tools are employed to break down an application into a control flow diagram. This process can reveal how an app uses permissions, illustrating specific actions like opening internet connections or accessing location data, which might be used to send this information elsewhere. An example shows how a "requestLocationUpdates" permission might be linked to an "openConnection" call, suggesting location data could be sent over the internet [23](#page=23).
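The flow the tool uncovers can be thought of as a reachability question over the app's call graph: can execution starting at the location API reach a network call? The sketch below is a minimal illustration, not the actual tool's output; the call graph and the intermediate method names (`onLocationChanged`, `uploadData`) are invented, with only `requestLocationUpdates` and `openConnection` taken from the example in the slides.

```python
from collections import deque

# Hypothetical call graph extracted by a static analysis tool:
# each method maps to the methods it may call.
call_graph = {
    "onCreate": ["requestLocationUpdates"],
    "requestLocationUpdates": ["onLocationChanged"],
    "onLocationChanged": ["uploadData"],
    "uploadData": ["openConnection"],
}

def reaches(graph, source, target):
    """Return True if `target` is reachable from `source` via calls."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Flag the suspicious flow: location access feeding a network call.
print(reaches(call_graph, "requestLocationUpdates", "openConnection"))  # True
```

Real static analysers track data flow, not just call reachability, but the same graph-search idea underlies both.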
### 4.3 User study: Affinity diagram
To address the problem of user understanding, an affinity diagram study was conducted with Computer Security MSc students [26](#page=26).
#### 4.3.1 Study protocol
The protocol involved several steps:
1. Pre-printing a list of Android permissions and their associated contexts [28](#page=28).
2. Participants brainstorming answers to specific questions onto sticky notes. These questions included [28](#page=28):
* Name three permissions [28](#page=28).
* List app behaviors that you are not comfortable with [28](#page=28).
* Describe situations that would cause a permission to be used [28](#page=28).
3. Placing all sticky notes on a wall for an affinity diagram exercise [28](#page=28).
4. Encouraging participants to design a hierarchy for the information [28](#page=28).
5. Discussing the outcomes with participants as a group [28](#page=28).
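In data terms, the sorting in steps 3 and 4 amounts to collecting labelled notes into groups. A minimal sketch with hypothetical note texts and group labels (in the real study the groups emerged from the participants' sorting, not from a predefined list):

```python
# Hypothetical sticky notes from step 2, each carrying the group
# label participants gave it while sorting on the wall.
notes = [
    ("Camera", "Sensitive input"),
    ("Microphone", "Sensitive input"),
    ("App uploads contacts", "Uncomfortable behaviours"),
    ("Location used for ads", "Uncomfortable behaviours"),
    ("Pressing 'share' sends location", "Usage situations"),
]

def build_affinity_groups(labelled_notes):
    """Collect notes into their affinity groups, preserving order."""
    groups = {}
    for note, group in labelled_notes:
        groups.setdefault(group, []).append(note)
    return groups

for group, members in build_affinity_groups(notes).items():
    print(f"{group}: {members}")
```

The resulting grouping is the raw material for the hierarchy participants design in step 4.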
#### 4.3.2 Affinity diagram process
The sticky notes were then sorted by participants into logical groups, with the possibility of adding new notes during this process. This sorting and grouping continued across the entire wall of notes [30](#page=30) [32](#page=32).
#### 4.3.3 Outcomes of the affinity diagram study
The study revealed several key user concerns and insights regarding permission usage [35](#page=35):
* Users are concerned about what happens "with my permission" [35](#page=35).
* The context of permission access is important, differentiating between button presses, general app opening, and background activity [35](#page=35).
* The timing of permission access (when the permission is accessed) is crucial [35](#page=35).
* Users are concerned about the purpose of permission usage, particularly related to ads and uploading private data like contacts and device IDs [35](#page=35).
* Sensitive permissions related to input/output were highlighted [35](#page=35).
* Several permissions were identified as confusing [35](#page=35).
Key permissions and contexts identified included Camera, Location, Microphone, Accounts and device info, Contacts, SMS, Internet (often combined with reading data), Calendar, Settings, Ads, Sounds, Notifications, and screen space usage, particularly in the background [34](#page=34).
### 4.4 Redesigned interface and A/B testing
Based on the study outcomes, a new interface was designed to present permissions within the context of when they can be used. This included categorizations like "Button push required," "Only when app is open," and "Anytime in the background (Ad software)". Two interfaces were created to conduct an A/B test. The testing aimed to evaluate user understanding by asking questions like "Which of the following can this app do?" [37](#page=37) [39](#page=39) [40](#page=40).
### 4.5 Results of the user testing
The results indicated significant user misinterpretation of standard permission screens. Specifically, 27 percent of people believed they understood the screen but were incorrect, and another 13 percent were uncertain about its true meaning. This highlights a substantial gap in user comprehension that the redesigned interfaces aimed to bridge [42](#page=42) [43](#page=43).
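Whether a difference between two interfaces in an A/B test like this is meaningful is commonly checked with a two-proportion z-test. The sketch below computes the standard z-statistic on hypothetical counts; the actual per-interface sample sizes and correct-answer counts are not given in the excerpts.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic for comparing comprehension rates
    between interface A and interface B (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 60/100 answered correctly on the standard
# screen vs 80/100 on the redesigned screen.
z = two_proportion_z(60, 100, 80, 100)
print(round(z, 2))  # -3.09
```

A |z| above roughly 1.96 corresponds to significance at the 5% level, so a gap this large on these hypothetical numbers would count as a real difference in comprehension.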
> **Tip:** The affinity diagram method is effective for uncovering user-centric concerns and categorizing complex information by grouping qualitative data from participants. **Example:** The study demonstrated how abstract permissions like "Internet" become more understandable when linked to concrete actions like "uploading private data" or serving "Ads" [35](#page=35).
* * *
# Case study: Evaluating an email encryption plugin
This section details a Master's project focused on evaluating the usability of an email encryption plugin named Mailvelope, exploring its challenges and employing usability testing methodologies [44](#page=44).
### 5.1 Project background and goals
The project's core objective was to assess the usability of a new email encryption plugin released by Google, called Mailvelope. The evaluation aimed to understand if the plugin was user-friendly. The research acknowledged existing knowledge regarding user preferences for email and the reasons why email encryption has historically faced adoption challenges [45](#page=45) [48](#page=48).
### 5.2 Challenges with email encryption
Historically, email encryption has struggled with user adoption. A seminal study by Whitten and Tygar, titled "Why Johnny Can’t Encrypt," highlighted significant usability issues with PGP 5.0. In this study, only 4 out of 12 Carnegie Mellon computer scientists could successfully send an encrypted email using PGP 5.0 within 90 minutes. The researchers identified several critical errors and points of confusion, including [49](#page=49):
* Accidentally sending emails without encryption [49](#page=49).
* Confusion surrounding the key management system [49](#page=49).
* Users eventually giving up due to complexity [49](#page=49).
These findings underscore the inherent difficulties users face when attempting to encrypt emails, suggesting that technical features alone are insufficient if the user experience is not intuitive [49](#page=49).
### 5.3 Evaluation methodologies
To address the usability of Mailvelope, two primary methodologies were employed: Cognitive Walkthrough and Think-Aloud Study.
#### 5.3.1 Cognitive walkthrough
The Cognitive Walkthrough is a usability inspection method where evaluators step through a task, asking a series of questions at each step to identify potential user difficulties. For this study, a specific scenario was defined: a user who has already installed the Mailvelope plugin and wishes to send an encrypted email to another person [51](#page=51) [52](#page=52).
The walkthrough involved analyzing the steps a user would take, such as opening the Mailvelope plugin by clicking its icon and subsequently clicking an "Options" button. For each step, four critical questions were asked [51](#page=51) [52](#page=52):
1. Will users try to achieve the outcome of performing this action [51](#page=51) [52](#page=52)?
2. Will users see the correct element (e.g., button) for the action [51](#page=51) [52](#page=52)?
3. Once users find the element, will they recognize that interacting with it will produce the desired effect [51](#page=51) [52](#page=52)?
4. After the action is performed, will users understand the feedback provided, enabling them to proceed confidently to the next step [51](#page=51) [52](#page=52)?
The Cognitive Walkthrough phase was designed to identify expected areas where users might encounter problems [53](#page=53).
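The walkthrough record is essentially every task step crossed with the four questions. A small sketch of that checklist structure, with the step names taken from the scenario above and the question wording paraphrased:

```python
# The four cognitive-walkthrough questions, paraphrased from the slides.
QUESTIONS = [
    "Will users try to achieve the outcome of this action?",
    "Will users see the correct element for the action?",
    "Will users recognise that the element produces the desired effect?",
    "Will users understand the feedback after the action?",
]

# Task steps for the Mailvelope scenario.
steps = ["Click the Mailvelope icon", "Click the 'Options' button"]

def walkthrough_checklist(task_steps):
    """Pair every task step with each of the four questions,
    producing the record sheet an evaluator fills in."""
    return [(step, q) for step in task_steps for q in QUESTIONS]

for step, question in walkthrough_checklist(steps):
    print(f"[{step}] {question}")
```

Any step-question pair answered "no" is a predicted failure point to probe later in the Think-Aloud study.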
#### 5.3.2 Think-aloud study
Following the Cognitive Walkthrough, a Think-Aloud study was set up to validate the potential failure points identified. This method involves observing actual users as they perform tasks while vocalizing their thoughts, feelings, and decisions. The aim was to determine if real users would encounter the same difficulties predicted by the Cognitive Walkthrough [53](#page=53).
One of the tasks assigned to participants in this study was to "Write an encrypted email". The subsequent pages likely contain further details, results, or analyses from this study, though specific content from those pages was not detailed in the provided excerpts [54](#page=54) [56](#page=56) [57](#page=57) [59](#page=59).
> **Tip:** The combination of a Cognitive Walkthrough (expert evaluation) and a Think-Aloud Study (user-based evaluation) provides a robust approach to identifying and confirming usability issues. The former predicts problems, while the latter verifies them with real user behavior.
* * *
# Requirements gathering and problem definition
This topic focuses on the crucial initial stage of understanding and defining the problem to be solved in a Human-Computer Interaction (HCI) project.
### 6.1 Understanding the problem
A successful HCI project begins with a thorough understanding of the problem it aims to solve. It is essential to distinguish between features and requirements, as confusing them can lead to ill-defined projects [62](#page=62) [63](#page=63).
#### 6.1.1 Features vs. Requirements
* **Features** are specific functionalities or elements of a proposed solution, often presented as a list of "what to build". For example, "Build an app for new UG1 and MSc students that lists course locations" describes features rather than underlying needs [63](#page=63).
* **Requirements** are goals that the system must accomplish. They focus on the tasks users need or want to achieve, how they currently perform those tasks, their dissatisfactions with current methods, and their desired future state [64](#page=64).
> **Tip:** Shift feature requests to focus on understanding the underlying requirements. Ask "what tasks do users need to accomplish?", "how are they currently doing them?", "what do they dislike?", and "what would they like to be doing?" [64](#page=64).
#### 6.1.2 Example: Project Matching System
Consider a project to replace an existing UG4/MSc project matching system. An initial brief might suggest features like an "app" and "automatic matching". However, a deeper inquiry reveals the underlying requirements [63](#page=63):
* **Problem:** The current manual system, relying on spreadsheets and text files, is unsustainable due to rising student numbers and is time-consuming for the coordinator [67](#page=67).
* **User Needs (Students/Supervisors):**
* Students struggle to choose projects, often flocking to popular ones, leading to disappointment when they cannot secure their desired project [67](#page=67).
* Supervisors find meeting with every student a burden due to increasing numbers, yet these meetings are important for assessing student suitability and project understanding [68](#page=68).
* Supervisors and students may not always communicate project selections effectively, leading to last-minute issues and the need for re-matching [69](#page=69).
* **Desired Outcomes:**
* Automate the matching process to handle increased student numbers [67](#page=67).
* Provide clarity on project popularity to guide student choices [67](#page=67).
* Reduce the burden of manual matching and re-matching [69](#page=69).
* **Current System Strengths:** The current system is perceived as "simple" and "easy to use" as it's a straightforward list of projects and supervisors, allowing students to email supervisors directly if interested [69](#page=69).
> **Tip:** Always question the initial proposed features and dig deeper to understand the fundamental problems and user needs they are intended to address. The system must be usable for the tasks it is meant to support [71](#page=71).
### 6.2 Requirements gathering methods
Various methods can be employed to gather design requirements. The choice of method depends on the specific context and the type of information needed [81](#page=81).
#### 6.2.1 Interviews
* **Interviews with users:** Directly asking potential users about their needs, experiences, and pain points.
* **Interviews with experts:** Consulting individuals with specialized knowledge in the domain.
#### 6.2.2 Contextual Inquiries
Observing users in their natural environment while they perform their tasks.
#### 6.2.3 Surveys
* **Retrospective surveys:** Asking people about past events using a survey format [85](#page=85).
* **When to use:** For critical, memorable, or impactful recent events (e.g., describing a negative software update experience, or where you had dinner last night) [85](#page=85).
* **When not to use:** When events are difficult to recall accurately (e.g., "How many times did you cross a road last month?") [85](#page=85).
* **Example:** Surveying new students about their recent experiences or current students about recent questions they have asked [86](#page=86).
#### 6.2.4 Focus Groups
Gathering a small group of users to discuss a specific topic or product.
#### 6.2.5 Reading Background Literature
Reviewing existing research, reports, and documentation related to the problem domain.
#### 6.2.6 Diary Studies
Asking participants to record events as they happen [82](#page=82).
* **When to use:**
* For rare events that cannot be easily observed [82](#page=82).
* For events that are easily forgotten [82](#page=82).
* When the actual frequency of actions is important (e.g., tracking water intake) [82](#page=82).
* **Why not to use:** The act of tracking behavior can cause participants to change their behavior [82](#page=82).
* **Example:** Asking someone at an information desk to record questions received during a busy period like Welcome Week. The resulting data can include FAQs about bank accounts, campus maps, transportation, accommodation, health services, and more [83](#page=83) [84](#page=84).
#### 6.2.7 AEIOU (Activity, Environment, Interaction, Object, User)
A framework often used in ethnographic research to structure observations.
#### 6.2.8 Artifact Analysis
Examining "things" that people create or leave behind to understand a problem [87](#page=87).
* **When to use:**
* In physical spaces where workflows generate meaningful artifacts [87](#page=87).
* When tasks involve artifact creation (e.g., using Microsoft Word) [87](#page=87).
* When interactions generate artifacts (e.g., emails, social media posts) [87](#page=87).
* **Why not to use:** When no meaningful artifacts exist or when other methods are faster for gathering information [87](#page=87).
* **Example:** Sorting student posts into categories like Admissions, Social, Studying, Lectures, Accommodation, Transportation, Finance, and Visiting/International student issues can reveal common concerns and information needs [90](#page=90).
> **Tip:** Artifact analysis can be particularly insightful for understanding the context and practices of users by examining the tangible byproducts of their activities.
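A first pass at sorting artifacts such as posts can even be scripted with keyword rules before manual refinement. The sketch below uses invented keyword lists and posts; in practice, artifact analysis would rely on careful manual coding rather than this naive matching.

```python
from collections import Counter

# Hypothetical keyword rules for a first-pass sort of student posts
# into some of the categories from the lecture example.
RULES = {
    "Finance": ["bank", "fee", "loan"],
    "Transportation": ["bus", "train", "parking"],
    "Accommodation": ["flat", "room", "rent"],
}

def categorise(post):
    """Assign a post to the first category whose keywords it mentions."""
    text = post.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "Other"

posts = [
    "How do I open a bank account?",
    "Which stop does the train leave from?",
    "Still looking for a room to rent",
    "When do lectures start?",
]
counts = Counter(categorise(p) for p in posts)
print(counts)
```

The category counts give a quick picture of where student concerns cluster, which can then be checked against a manual sort.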
### 6.3 Defining the problem statement
After gathering information, the next step is to clearly define the problem. A well-defined problem statement is essential for guiding the design process and ensuring the final solution effectively addresses user needs. It should articulate what tasks the system needs to support and for whom [71](#page=71).
* * *
# Glossary
| Term | Definition |
|------|------------|
| Human-Computer Interaction (HCI) | An interdisciplinary field that focuses on the design of computer technology and, in particular, the interaction between humans and computers. It aims to make computers more useful, usable, and accessible. |
| User Interface (UI) | The means by which a user interacts with a computer or software application. This includes visual elements, controls, and feedback mechanisms. |
| Usability | A measure of how easily a user can learn and use a system to achieve their goals effectively, efficiently, and with satisfaction. |
| Technical Skill Levels | Describes the varying degrees of proficiency users possess when interacting with computers and technology, ranging from basic operation to advanced manipulation. |
| Control Flow Diagram | A graphical representation that shows the sequence of operations or instructions executed by a computer program, illustrating decision points and paths. |
| Static Analysis | A method of analyzing a computer program by examining its code without executing it, often used to detect errors or understand program structure. |
| Permission Screen | An interface in a software application that informs users about the access an app requests to their device's data or features and allows them to grant or deny that access. |
| Affinity Diagram | A method used to organize ideas and data into natural groupings based on perceived relationships, often used in the early stages of design to understand user concerns. |
| Cognitive Walkthrough | A usability inspection method where evaluators step through a task, simulating a user, and ask a series of questions at each step to identify potential usability problems. |
| Email Encryption Plugin | A software add-on for email clients that enables users to encrypt and decrypt email messages, enhancing security and privacy. |
| Requirements Gathering | The process of collecting information from stakeholders and users to define what a system or product needs to do to meet their needs and solve their problems. |
| Features | Specific functionalities or characteristics of a system that are proposed as solutions, distinct from the underlying needs or goals they are intended to address. |
| Diary Study | A research method where participants record their experiences, thoughts, or actions over a period of time, typically as events occur. |
| Retrospective Survey | A survey method that asks participants to recall and report on past events or behaviors, often used for memorable or critical experiences. |
| Artifact Analysis | A research technique that involves examining objects or traces left behind by users to understand their behavior, habits, or context of use. |
| Contextual Inquiry | An ethnographic research method where researchers observe and interview users in their natural environment to understand their tasks, needs, and workflows. |
| Heuristics | A set of general usability principles or rules of thumb used for evaluating the usability of a user interface. |