COMP322 · Handbook
COMP322 · Spring semester 2025/26

The interactive handbook for human–computer interaction.

What follows is the material we have covered together so far, written up in one place so you can revisit it before a quiz, recall a method during the project, or just remind yourself of an idea that came up in passing. The notes go deeper than what fits in a fifty-minute session: where we touched a concept briefly in class, the handbook unpacks it with examples, sketches, and short walkthroughs.

Expect concrete rather than abstract. There are case studies on the Therac-25 radiation overdoses and the everyday Norman door, worked evaluations of apps you already use - WhatsApp, Instagram, and Talabat among them - all six of Norman’s principles set against Nielsen’s ten heuristics, and a full walk through the user-centred design loop your project follows. Hover the dotted terms for Arabic translations. Click any sketch or expandable block to dive deeper.

Instructor: Hisham Ihshaish
Institution: Birzeit University
Textbook: Rogers, Sharp & Preece (2024)
How to read: Top to bottom, or jump in

How this handbook is structured

Five movements, in deliberate order. Each one earns the next: you cannot evaluate before you understand the mind; you cannot design before you have principles; you cannot run a project before you have methods. Skip ahead if you must, but the order rewards reading.

Part one
Foundations

Why HCI matters - Therac-25 and the Norman door. ISO usability goals. Six cognitive concepts every designer should know. Social and emotional interaction, with the cultural angle for Arabic users.

Part two
Principles

Norman’s three models and the gulfs of execution and evaluation. His six design principles with concrete examples for each. Nielsen’s ten heuristics with respect-vs-violation cases. Hick’s and Fitts’s laws. Eight dark patterns to recognise.

Part three
Process & methods

The UCD loop with an animated walkthrough. Five interaction types. Eight interface families. A methods reference table. The full heuristic-evaluation method, severity ratings with a practice tool, interview technique, usability testing, and accessibility (POUR).

Part four
Practice

Worked heuristic evaluations on WhatsApp, Instagram and Talabat with clickable, animated mockups. Lina Khoury - a fully evidenced persona, a scenario, and seven measurable requirements. The team project, phase by phase, with a clear note on how it differs from the individual assignment.

Part five
Reference

Fourteen pitfalls that have cost marks in past projects. A bilingual glossary of the core vocabulary. A full reference list to cite in your reports.

Full contents

Part one · Foundations
i. Why this course exists

Bad design kills. Good design disappears.

Before any technique, any heuristic, any method, you should know why human–computer interaction (التفاعل بين الإنسان والحاسوب) - the discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use, and the study of major phenomena surrounding them (ACM SIGCHI definition) - exists as a field. The honest answer: because when designers ignore how humans actually think, perceive, and act, real people get hurt. Sometimes by frustration. Sometimes by far worse.

Case study · 1985–1987

The Therac-25: when a software interface killed people.

Between June 1985 and January 1987, a computer-controlled radiation-therapy machine called the Therac-25 gave at least six patients massive radiation overdoses - sometimes more than a hundred times the prescribed amount. Several died. Several more were left with permanent injuries.

The root cause was not a single bug. It was a constellation of interface failures. Operators could edit the radiation mode using a cursor-up shortcut faster than the machine could re-validate the inputs - a classic race condition. When something went wrong, the screen showed only “Malfunction 54”: a code with no plain-language explanation, no clear consequence. Operators, trusting the software, pressed proceed and delivered the lethal dose.

Earlier Therac models had hardware interlocks that physically prevented the unsafe state. The Therac-25 removed them, on the assumption that software could do the job. The same software bug had existed in the Therac-20 for years, harmlessly, because hardware caught it. When hardware was gone, the bug became fatal.

A modern software interface is not just a layer on top of a system. In safety-critical contexts, it is the safety system.
↗ Wikipedia: Therac-25
Case study · everyday

The Norman door: why design failures are everywhere.

You have used one this week. A glass door with a vertical handle on both sides. You pulled. It said push. You felt stupid. You were not stupid. The door was badly designed.

Don Norman called these Norman doors: any door whose visible features (the handle) suggest the wrong action (pull) instead of the correct one (push). The handle is an affordance (الإيحاء) for pulling - a property of an object that suggests how it can be used: a handle affords pulling, a flat plate affords pushing. Putting a handle on a push door is a design lie.

Norman’s observation was simple but radical: when people fail to use everyday objects correctly, it is almost never the user’s fault. It is the designer’s. Every Norman door blames a person for a designer’s mistake.

Software is full of Norman doors. The save button that does nothing visible. The form that resets when you press back. Every one of them blames a user for a design failure.
↗ Watch: The Norman Door (Vox · 99% Invisible · 4 min)

So what is HCI, exactly?

HCI is the study, design, and evaluation of interactive systems for human use. It draws on computer science (what we can build), cognitive psychology (how people think), design (how things should look and feel), and ergonomics (how bodies and minds tire). The field formed in the 1980s when computers escaped from specialists’ labs and met everyone else.

Its operational target is usability (قابلية الاستخدام) - the extent to which a system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO 9241-11): not whether a system can be used, but whether it can be used well, by the people it’s for, in the situations they’re actually in. The newer concept of user experience (UX, تجربة المستخدم) - a person’s perceptions and responses resulting from the use or anticipated use of a product, system, or service, including emotion, belief, preference, perception, comfort, and behaviour (ISO 9241-210) - extends this to include emotional, aesthetic, and social qualities that “usability” alone cannot capture.

The six measurable goals of usability

ISO 9241-11 (2018) defines usability through effectiveness, efficiency, and satisfaction; the usability literature extends these to six concrete attributes, listed below. You will use them in Phase 1 to write requirements and again in Phase 3 to measure whether your design met them.

Effectiveness

Can users complete the task at all? Did they get to the right outcome, accurately and completely?

Measure: task completion rate, error rate.
Efficiency

How much effort, time, and resources does the task take? Speed matters once correctness is established.

Measure: time on task, number of clicks or steps.
Satisfaction

How does using the system feel? Frustration is itself a usability cost.

Measure: SUS (System Usability Scale), Likert questions, qualitative quotes.
Learnability

How quickly can a new user become productive? Cliff or curve?

Measure: time-to-first-success, errors-on-first-use.
Memorability

If a user returns after a week, do they remember how it works, or start from zero?

Measure: performance on second session vs first.
Errors / safety

How often do users make mistakes, how serious are they, and how easily can they recover?

Measure: error frequency, severity, recovery time.
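Satisfaction is the one goal above that comes with a standard off-the-shelf instrument: the System Usability Scale. Its scoring is simple enough to sketch - a minimal illustration of Brooke’s standard scheme (odd items contribute score minus 1, even items 5 minus score, total scaled by 2.5):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses (1 = strongly disagree ... 5 = strongly agree)."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:      # items 1, 3, 5, 7, 9: positively worded
            total += r - 1
        else:               # items 2, 4, 6, 8, 10: negatively worded
            total += 5 - r
    return total * 2.5

# A neutral participant (all 3s) lands exactly in the middle:
print(sus_score([3] * 10))   # → 50.0
```

A score around 68 is usually quoted as the benchmark average; you will interpret your own Phase 3 numbers against that.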
Did you know
The elevator close-door button is often a placebo.

In many lifts in the United States and Europe, the close-door button is wired to do nothing. Disability legislation requires the doors to remain open long enough for a wheelchair user to enter, and overriding that with a button could be unsafe. The button stays on the panel because users expect a sense of control. It is a deliberate violation of visibility of system status - and we accept it.

ii. The user’s mind

Cognitive aspects: how people think, and what that means for design.

To design well, you need a working model of the mind on the other side of the screen. Cognitive psychology gives us six concepts that show up over and over in interface design: attention, perception, memory, learning, mental models, and cognitive load. Get these wrong and your interface fights the user’s brain.

Attention

Attention is selective and finite. People can’t process everything on a screen at once; they sample. The visual hierarchy of an interface decides what gets sampled and what gets missed.

Designers exploit attention with size, colour, motion, and isolation. They protect attention by removing competition: notifications grouped, ads boxed off, primary actions emphasised.

WhatsApp: the unread badge on a chat is bright, contrasting, top-right - designed to grab attention in the first 200 milliseconds of looking at the list. The chat list itself is monochrome to keep the badges the only thing that “pops”.
Design lesson: if everything shouts, nothing is heard. Decide what deserves attention; subdue everything else.
Perception

How we group and interpret what we see, governed by Gestalt principles: proximity, similarity, closure, continuity, common region. Things that look related are treated as related, even before reading.

These are not optional. The eye applies them automatically. Designers either work with them or fight them.

Instagram’s grid: three columns of evenly-spaced thumbnails. Proximity tells the eye these belong to one feed; similarity (square shape) reinforces it. The same photos in random sizes and gaps would feel chaotic without changing a pixel of content.
Design lesson: spacing and alignment carry meaning. Items grouped by proximity say “these go together” before the user reads a word.
Memory

Working memory (الذاكرة العاملة) - the system that holds and manipulates information for short periods; its capacity is famously about 7 ± 2 items (Miller, 1956), now thought to be closer to 4 ± 1 chunks - holds only a few chunks at a time, briefly. Long-term memory is huge but unreliable: we recognise far better than we recall.

Good interfaces make memory the system’s job, not the user’s. Don’t make people remember the order number from screen 3 to type on screen 5; show it.

Google Maps: recently visited places appear as one-tap suggestions. Your home address autocompletes after one letter. The system remembers; you don’t have to.
Design lesson: if the system already knows it, show it. Asking users to retain information across screens is the friction that closes accounts.
Learning

People form rules from patterns. The same control should always do the same thing. Break the rule and the user must re-learn. Most apps you use today taught you their language in 30 seconds.

Two phases: initial learning (first use) and relearning (after a break). Both matter; designers often optimise the first and forget the second.

iOS swipe-to-delete: learned once in Mail, transfers to Messages, Notes, Reminders, and dozens of third-party apps. Consistency turns learning into reuse. WhatsApp uses the same gesture - reinforcing it across an entire ecosystem.
Design lesson: reuse existing conventions instead of inventing new ones. A novel gesture is a tax on every user.
Mental models

A mental model (النموذج الذهني) - an internal representation of how a system works, which a user constructs to predict its behaviour; often inaccurate, but always influential - is the user’s working theory of how a system behaves. Designers have one model; users build their own from what they see. When they don’t match, the user is “confused”.

The fix isn’t a tutorial. It’s a clearer interface that shows its model.

Email folders vs Gmail labels: users imagine emails sit in folders like paper files. In Gmail, labels are tags; an email can have many. The mental-model mismatch is why some users find Gmail unintuitive even after years of use.
Design lesson: if your mental model is unusual, your interface needs to teach it explicitly - or you lose users to a competitor whose model is the one users already have.
Cognitive load

The total mental effort a task demands. Heavy load means slower, more error-prone behaviour. Every label to read, decision to make, and place to remember adds load. Good design strips it down.

Three flavours: intrinsic (the task itself), extraneous (caused by bad design), germane (effort spent learning). The first you can’t reduce. The second you must. The third you should support.

Apple Pay: hold phone near reader, look at it, done. Three steps that used to take seven (find card, swipe, sign, confirm, store receipt, etc.). Cognitive load reduced almost to zero.
Design lesson: measure how much thinking your interface forces. Then remove what isn’t the actual task.

Recognition is cheaper than recall

A foundational result from cognitive psychology, with massive implications for interface design. Recognition (seeing something and knowing what it is) is psychologically much faster and more accurate than recall (retrieving something from memory unprompted). This is why Nielsen’s sixth heuristic (“recognition rather than recall”) exists, and why dropdowns, search-with-suggestions, and recently-used lists are so common.

Examples everywhere: Google search shows recent searches the moment you tap the box. Instagram shows recently-tagged friends as suggested mentions. iPhone shows recently-used emojis at the front of the picker. Each one converts a recall task (“what did I search for yesterday?”) into a recognition task (“is one of these it? Yes, that one.”).

Fun aside
Why phone numbers are 7 digits - Miller’s 1956 paper.

In 1956, psychologist George Miller published The Magical Number Seven, Plus or Minus Two. He showed people’s short-term memory holds about seven items. Bell Labs had already been using seven-digit phone numbers for decades. The two facts converged into a famous design principle: chunk by sevens.

Modern research suggests the real number is closer to 4 ± 1, which is why credit cards now break their 16 digits into four groups of four. Every chunking decision in an interface is paying respect to working-memory limits.
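That chunking decision is easy to make concrete. A minimal sketch of how a card-number field might format its display value (function name and grouping size are illustrative, not any real payment form’s code):

```python
def chunk(digits, size=4):
    """Split a digit string into fixed-size groups, the way card
    interfaces display 16 digits as four groups of four."""
    digits = digits.replace(" ", "")
    return " ".join(digits[i:i + size] for i in range(0, len(digits), size))

print(chunk("4111111111111111"))   # → 4111 1111 1111 1111
```

Four groups of four respects the 4 ± 1 limit; one run of sixteen does not.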

iii. Beyond the individual

Interaction is social and emotional.

People don’t use software in a vacuum. They use it with other people, in front of other people, and they feel things while doing it. A design that is technically usable but emotionally cold, socially awkward, or culturally tone-deaf will lose to one that isn’t.

Social interaction: designing for groups

From the moment computers got networked, interaction became collaborative. Today most apps are social by default. Three concepts you should know:

Presence and awareness

The cues that tell you who else is here, what they’re doing, when they last appeared. Designers shape these intentionally.

WhatsApp: the “last seen” timestamp, the two blue ticks (read), the “typing…” indicator. Each is a deliberate awareness cue. Each was also controversial when introduced - users could now tell when they were being ignored.
Telepresence and shared space

The feeling of being together when physically apart. Video, voice, shared cursors, real-time editing.

Google Docs: coloured cursors with names floating where someone is editing. Figma: the same idea taken further - you can see your collaborator’s mouse trail as they move. You can almost feel them in the document with you.
Social norms in design

Conventions of polite behaviour the system encodes. Read receipts. Online indicators. Group-add permissions. These shape what feels rude vs normal.

Instagram DMs: read receipts can be turned off. The platform learned that mandatory “seen” created social anxiety. The choice is now the user’s - a deliberate design concession.

Emotional interaction: designing for feelings

Emotion is not the icing on the cake. It influences attention, memory, decision-making, and whether someone returns. Don Norman’s book Emotional Design (2004) breaks a design’s emotional impact into three levels.

Visceral

The first immediate reaction. Beauty, novelty, threat. Decided in milliseconds, before any thinking.

Apple unboxing: the product reveal is engineered for visceral delight. Soft paper, perfect-friction lid, satisfying weight. The product hasn’t been used yet - but you already love it.
Behavioural

The pleasure of using something well. Smooth, fast, predictable, satisfying. The level designers can most directly engineer.

Tinder swipe: the gesture, the spring, the card flipping. Behavioural pleasure makes the act of swiping itself rewarding - independent of the match. iPhone keyboard: the tiny haptic click on each key serves no information; it serves feeling.
Reflective

The story you tell yourself about the product later. Identity, status, meaning. The level brands compete most fiercely for.

Spotify Wrapped: a December tradition where the app reflects your year back at you. Behavioural in the moment, deeply reflective afterwards. Half the social-media internet shares it - for free.

The cultural angle

What feels warm in one culture feels presumptuous in another. Right-to-left layouts, address conventions, name fields that assume a Western format, calendar systems, gender pronouns - these are all design decisions, not defaults. For your project, consider Arabic users specifically.

None of this is exotic. It is the norm for the users your project will interview. Design that doesn’t take it seriously will be marked accordingly.

Part two · Principles
iv. Models and gulfs

Norman’s three models, and the two gulfs between them.

Before introducing his six principles, Don Norman gives us a deeper diagnostic framework: three models that exist whenever a person uses a system, and the two gaps between them that cause every usability problem you will ever encounter. If you understand this section, you can diagnose interface failures with surgical precision.

The three models

Whenever a system exists in the world, there are three different mental pictures of how it works:

Designer’s model
In the designer’s head

How the designer intends the system to work. Built from requirements, technical constraints, business goals, and personal taste.

The designer is rarely a typical user. Their model is informed by knowledge users don’t have.

System image
In the world - the actual interface

What the system actually shows: the screens, the labels, the buttons, the documentation. The only channel through which users learn how it works.

Users see the system image. They do not see the designer’s model.

User’s model
In the user’s head

What the user believes about how the system works, built entirely from the system image plus their previous experience.

Often wrong, always influential. Users act on this model, not on reality.

The job of design is to make sure the system image communicates the designer’s model clearly enough that the user’s model matches reality. When the user’s model and the designer’s model diverge, the system fails - even if the code is perfect. Most usability problems are not bugs. They are failures of the system image to convey the designer’s intentions.

The seven stages of action

Norman models any goal-directed action as seven stages: one of forming the goal, three of execution (acting on the world), and three of evaluation (sensing the result). This is what every user does, every time they touch your interface.

Stage 1
Form a goal

“I want to send Layla 200 shekels.”

Stage 2 · Execution
Plan the action

“I’ll use the bank app, find her contact, type the amount.”

Stage 3 · Execution
Specify the action sequence

“Open app → Send → Layla → 200 → confirm.”

Stage 4 · Execution
Perform the action

Tap, tap, tap, tap, confirm.

Stage 5 · Evaluation
Perceive the state

Did anything happen on screen?

Stage 6 · Evaluation
Interpret the perception

Was that confirmation, error, or unclear?

Stage 7 · Evaluation
Compare to goal

Is the money sent? Goal met or not?

The two gulfs

Between the user and the system there are two gaps where things go wrong. Norman called them the gulf of execution and the gulf of evaluation.

On one side sit the user’s goals, intentions, and interpretations; on the other, the system’s interface, state, behaviour, and output. Two gulfs separate them.
Gulf of execution - the gap between what the user wants to do and how the system says to do it. Closed by clear affordances, visible options, plain language, and good mapping. “I want to attach a file… where is the attach button?”
Gulf of evaluation - the gap between the system’s actual state and the user’s perception of it. Closed by feedback, status indicators, and clear error messages. “I clicked send. Did it actually send?”

Diagnosing failures with the gulfs

Almost every usability problem is a failure on one side or the other. This is the most useful diagnostic tool Norman gives us.

Source: Norman, D. A. (2013). The Design of Everyday Things, Revised and Expanded Edition, chapters 2 and 3.

v. Generative principles

Norman’s six principles: how to design.

Norman’s six principles are generative - they help you create. They describe universal rules at the point where human cognition meets the designed world. Norman developed them by analysing not just software but everyday objects: doorknobs, kettles, light switches. The principles transfer cleanly to digital interfaces because the underlying psychology is the same.

01

Visibility

الوضوح

The relevant parts of a system should be visible, communicating what the system can do and how to do it. Hidden options are unused options. If users can’t see it, they can’t use it.

Real-world example · Instagram Every primary action - like, comment, share, save, story, reel - is visible on the home screen as an icon or tab. There are no “secret menus”. New users discover features just by looking.
Compare with · Snapchat Snapchat famously hides most features behind swipes and undocumented gestures. Power users love it; new users feel lost. Visibility traded for “cool” - a tradeoff that costs them new users.
02

Feedback

التغذية الراجعة

After every action, the system should communicate the result clearly and without delay. Feedback can be visual (a flash, colour change), auditory (a click), or haptic (a vibration). If feedback is absent or delayed, users repeat the action - and trust drops.

Real-world example · WhatsApp message ticks One grey tick = sent to server. Two grey = delivered to recipient. Two blue = read. Three escalating layers of feedback in a 4×8 pixel detail. Without them, users would not know if a message arrived; with them, the social conversation flows.
The four feedback states, in sequence: pending (clock), sent (one grey tick), delivered (two grey ticks), read (two blue ticks).
Compare with · iPhone Face ID On unlock: a tiny lock icon transitions from closed to open with a haptic tap. Silent, instant, unmistakable. Three layers of feedback (visual, motion, haptic) in 200ms.
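The escalating tick states form a tiny forward-only state machine. A minimal sketch (names are illustrative, not WhatsApp’s actual protocol):

```python
from enum import Enum

class DeliveryState(Enum):
    PENDING = "clock"
    SENT = "one grey tick"
    DELIVERED = "two grey ticks"
    READ = "two blue ticks"

# States only ever move forward; the order encodes the feedback escalation.
ORDER = [DeliveryState.PENDING, DeliveryState.SENT,
         DeliveryState.DELIVERED, DeliveryState.READ]

def advance(state):
    """Move to the next delivery state; READ is terminal."""
    i = ORDER.index(state)
    return ORDER[min(i + 1, len(ORDER) - 1)]

s = DeliveryState.PENDING
while s is not DeliveryState.READ:
    s = advance(s)
    print(s.name)      # SENT, then DELIVERED, then READ
```

The design point is that the user never sees a state regress; each tick can only mean “at least this far”.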
03

Constraints

القيود

Limit what the user can do at any moment to prevent errors. Constraints can be physical, logical, semantic, or cultural. The fewer wrong actions are possible, the fewer wrong actions are taken.

Real-world example · Booking.com date picker After you select a check-in date, all earlier dates are disabled (greyed out). You cannot book a check-out before check-in. An entire class of errors is designed away by visible constraint.
The constraint, drawn: check-in selected on 12 May 2026; all earlier dates greyed out, later dates active.
Compare with · Airbnb min-nights warning Airbnb adds a soft constraint: if a host requires 3-night minimum, dates that would yield a 1- or 2-night stay grey out automatically when you start clicking. The user never sees an error message because the error is impossible to make.
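Both calendar constraints above reduce to a single predicate the date picker can evaluate per cell. A minimal sketch with illustrative names - not Booking.com’s or Airbnb’s actual code:

```python
from datetime import date, timedelta

def selectable_checkout(check_in, candidate, min_nights=1):
    """A date is offered as a check-out only if it is at least
    min_nights after the chosen check-in; everything else greys out."""
    return candidate >= check_in + timedelta(days=min_nights)

check_in = date(2026, 5, 12)
print(selectable_checkout(check_in, date(2026, 5, 11)))                 # → False (earlier date)
print(selectable_checkout(check_in, date(2026, 5, 14), min_nights=3))   # → False (2-night stay vs 3-night minimum)
print(selectable_checkout(check_in, date(2026, 5, 15), min_nights=3))   # → True
```

Because the predicate runs before rendering, the invalid choice never exists on screen; there is no error message to write.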
04

Mapping

المطابقة

The relationship between controls and effects should be intuitive. An up arrow moves things up; sliding a volume control right makes it louder. Bad mapping is when up means down, or when the layout of the buttons doesn’t match the layout of what they control.

Real-world example · iPhone volume buttons Top button up, bottom button down. The physical layout of the buttons maps directly to the conceptual direction of volume. You never need to read a label.
Compare with · stove-top controls Norman’s classic example: stoves with four burners arranged in a square but four knobs in a row. Which knob controls which burner? Without numbered diagrams, every user guesses. Bad mapping you live with for years.
05

Consistency

الاتساق

Similar things should look similar; similar actions should work similarly. Inconsistency forces re-learning every time, and breeds doubt about whether two things really are the same.

Real-world example · Material Design (Android) Floating action button (FAB) is always bottom-right, always circular, always for the screen’s primary action. Across thousands of apps, the same gesture works. Consistency at the platform level scales to billions of users.
Compare with · iOS Human Interface Guidelines Apple’s rules are even stricter: tab bars at the bottom, navigation bars at the top, modal dismiss in a specific corner. The cost of inconsistency: rejection from the App Store.
06

Affordance

الإيحاء

Visual properties of an object that suggest how to use it. A button looks pressable. A slider looks draggable. A flat plate on a door affords pushing; a vertical handle affords pulling. The famous concept Norman borrowed from psychologist J. J. Gibson.

Real-world example · Tinder cards Stacked rectangular cards with rounded corners and a subtle shadow afford swiping. You don’t need a tutorial. The shape and physics of the cards say “swipe me” before any text does.
Stacked cards · the swipe affordance: the top card can be swiped left or right, and the stack behind it hints there is more.
Compare with · the “flat design” problem Around 2013, iOS 7 removed shadows and gradients from buttons. Suddenly nothing looked clickable. New users tapped on plain text labels and shrugged. Affordance lost. iOS has since subtly restored visual cues - partial colour, mild shadows - to bring affordance back.
Norman backstory
The Norman door is named after the man who hated them.

Don Norman is a cognitive scientist who worked at Apple in the 1990s (he was Vice President of Advanced Technology) and co-founded the Nielsen Norman Group with Jakob Nielsen. He became famous after publishing The Design of Everyday Things (1988), where he argued that bad design was not user error but designer error.

He never set out to have terrible doors named after him. The internet did that. He has since said he both loves and hates the legacy: he wishes his books were better known than the doors, but if it makes people notice bad design, he’ll take it.

↗ Vox · The Norman Door (4 min)
vi. Evaluative heuristics

Nielsen’s ten heuristics: how to judge.

Where Norman’s principles help you create, Jakob Nielsen’s ten heuristics help you evaluate. They were derived empirically: Nielsen and Molich analysed 249 usability problems and looked for the rules that, if followed, would have prevented them. The result is the most widely used inspection tool in HCI - and the basis for your individual assignment and Phase 1.

Nielsen’s research with Tom Landauer (1993) showed that three to five expert evaluators catch about 75% of usability problems. This is why heuristic evaluation is the most cost-effective method when users are not yet available. Click each heuristic to expand.

1
Visibility of system status
وضوح حالة النظام

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. Users need to know whether their action worked, whether the system is busy, and where in a process they are.

Ask: If I do something, does the system tell me what happened? If something is loading, do I know it is loading? If a process has many steps, do I know which step I’m on?
How WhatsApp respects this

Sending a message: a clock icon (sending), one tick (sent), two ticks (delivered), two blue ticks (read). Four distinct states for one operation. If the network drops, a red exclamation mark appears with “tap to retry”.

How some banking apps violate this

Tap “Send” on a transfer. Spinner for 8 seconds. Returns to home dashboard with no confirmation. Did it go through? Should I try again? Without status feedback, the user invents stress that the system created.

2
Match between system and the real world
التطابق بين النظام والعالم الحقيقي

Speak the user’s language. Use words, phrases, and concepts familiar to the user, rather than internal jargon. Follow real-world conventions, making information appear in a natural and logical order.

Ask: Would a normal user - someone who has never seen the codebase - understand every label and every error without a manual?
How Gmail respects this

The compose button is a pencil. The trash is a bin. Folders look like folders. The interface borrows real-world metaphors so deeply that even a first-time user can guess what each icon does.

Classic violation

“Error 429: rate limit exceeded” shown to a user who has no idea what 429 means. Plain language version: “Too many requests. Please wait 30 seconds and try again.”

3
User control and freedom
تحكّم المستخدم وحرّيته

Users often choose system functions by mistake and need a clearly marked “emergency exit” to leave the unwanted state. Support undo and redo. Don’t trap people. Don’t make them confirm forever.

Ask: Can the user always back out of any state, without losing work and without consequence?
How Gmail respects this

After sending an email, a black bar appears with “Message sent · Undo · View message”. For 10 seconds, you can take it back. Gmail actually delays sending by that amount; a kindness implemented in code.

Classic violation

A multi-step checkout where pressing the browser back button loses everything. Users learn to never press back, then double-click submit instead, creating duplicate orders. The original sin compounds.
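The delayed-send trick behind Gmail’s Undo generalises to a simple pattern: commit after a grace period, cancel any time before it. A minimal, illustrative sketch (class and method names are invented for this example, not Gmail’s code):

```python
import time

class UndoableOutbox:
    """Delayed-commit pattern behind "Undo send": a message is held
    for a grace period during which it can still be cancelled."""

    def __init__(self, grace_seconds=10):
        self.grace = grace_seconds
        self.pending = {}        # message id -> (message, deadline)
        self.sent = []

    def send(self, msg_id, message, now=None):
        now = time.monotonic() if now is None else now
        self.pending[msg_id] = (message, now + self.grace)

    def undo(self, msg_id):
        """Returns True if the message was still cancellable."""
        return self.pending.pop(msg_id, None) is not None

    def flush(self, now=None):
        """Actually dispatch every message whose grace period expired."""
        now = time.monotonic() if now is None else now
        for msg_id, (message, deadline) in list(self.pending.items()):
            if now >= deadline:
                self.sent.append(message)
                del self.pending[msg_id]

outbox = UndoableOutbox(grace_seconds=10)
outbox.send("m1", "hello layla", now=0.0)
print(outbox.undo("m1"))     # → True: within the grace period, nothing sent
outbox.send("m2", "hello again", now=0.0)
outbox.flush(now=11.0)       # grace expired: actually dispatched
print(outbox.sent)           # → ['hello again']
```

The user sees an instant “sent”; the system quietly buys them an emergency exit.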

4
Consistency and standards
الاتساق والمعايير

Users should not have to wonder whether different words, situations, or actions mean the same thing. Two flavours: internal consistency (your app uses the same word for the same thing everywhere) and external consistency (your app follows the platform’s conventions: iOS apps look like iOS apps).

Ask: Does the same word always mean the same thing? Does my app feel like it belongs on the platform it’s on?
How Apple enforces this

The Human Interface Guidelines specify exact patterns for navigation, gestures, modals. Following them, your app feels “native”; ignoring them, it feels foreign - and risks rejection from the App Store.

Classic violation

One screen says “Sign Out”, another “Log Out”, a third “Exit Account”. All do the same thing. Users wonder if they’re different.

5
Error prevention
منع الأخطاء

Even better than good error messages is a careful design that prevents problems from occurring in the first place. Either eliminate error-prone conditions, or check for them and ask the user to confirm before they commit.

Ask: Has the system designed away mistakes a user could plausibly make?
How Apple Pay respects this

Before a transfer, you see the recipient’s name and the exact amount. You hold the phone next to the reader. You authenticate with Face ID. Three independent checks before money moves. The flow is faster than tapping a card, yet harder to do wrong.

Classic violation

A booking form accepts “31 February” without complaint. The error is only caught four screens later, after payment. The user must restart from scratch.
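Catching this is a one-line check at entry time. A minimal sketch: Python’s standard-library `datetime.date` already rejects impossible dates, so the form can refuse the input immediately.

```python
from datetime import date

def valid_date(day: int, month: int, year: int) -> bool:
    """Error prevention: reject an impossible date the moment it is
    entered, not four screens later."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False
```

`valid_date(31, 2, 2026)` is `False`, so “31 February” never reaches the payment step.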

6
Recognition rather than recall
التعرّف بدلاً من التذكّر

Minimise the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the interface to another. Recognition (seeing something and knowing it) is psychologically much cheaper than recall (retrieving something from memory unprompted).

Ask: Am I asking the user to remember things the system already knows?
How Google search respects this

The search box shows your recent searches as you focus it. You don’t need to remember what you looked up yesterday - Google offers it back. Suggestions appear after one letter. Recognition replaces recall.

Classic violation

On the payment screen, you can’t see your order details. You have to remember what you ordered to verify the price. The data exists; the interface just hides it.

7
Flexibility and efficiency of use
المرونة وكفاءة الاستخدام

Accelerators - often unseen by the novice - may speed up the interaction for the expert. Allow users to tailor frequent actions. Cater to both inexperienced and experienced users.

Ask: Can a regular user become faster over time? Are there shortcuts, defaults, saved preferences, customisations?
How Gmail respects this

Beginners use the visible buttons. Experts use keyboard shortcuts (J, K to navigate; E to archive; / to search). Same software, two interfaces, no mode switch needed.

Classic violation

A returning user re-types their address every order. The system has stored it 47 times. It just never offers to save it.

8
Aesthetic and minimalist design
التصميم الجمالي والبساطة

Interfaces should not contain information that is irrelevant or rarely needed. Every extra unit of information competes with the relevant units and diminishes their relative visibility. Note: this is not about being decorative - it’s about relevance.

Ask: Is everything on this screen earning its place? Is there visual noise drowning the important content?
How Google search respects this

The home page is a white page with a logo and a search box. Anything else would be noise. Compare to Yahoo’s home page in 2002 - a wall of text and links. Google won by removing.

Classic violation

A news site where ads, banners, sidebars, video pop-ups, newsletter prompts, and cookie banners together occupy more screen than the article. The user must fight to read.

9
Help users recognise, diagnose, and recover from errors
مساعدة المستخدمين على التعامل مع الأخطاء

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution. The three jobs of an error message: recognise something went wrong, diagnose what specifically, recover by acting on the suggestion.

Ask: When something goes wrong, does the user understand what happened, why, and how to fix it?
How Stripe’s payment errors do this

“Your card was declined. This is often because of insufficient funds, an expired card, or a fraud-prevention check by your bank. Try a different card, or contact your bank to authorise this purchase.” Diagnosis, three likely causes, two concrete recovery actions.

Classic violation

“Invalid input.” Which field? Which input? What would valid look like? The user is left to guess. Or the famous Therac-25: “Malfunction 54”.

10
Help and documentation
المساعدة والتوثيق

Even though it is best if the system can be used without documentation, it may be necessary to provide help. Such help should be easy to search, focused on the user’s task, list concrete steps, and not be too large.

Ask: When the user is stuck, is help available where they need it, in language they understand, focused on their actual question?
How Notion respects this

Inline tooltips on every feature. A help icon next to obscure controls. A searchable docs site. Most importantly: contextual help - typing “/” in a Notion page shows you what you can do right where you are.

Classic violation

The only help link goes to a 60-page generic FAQ. The user’s actual question (“why won’t my card add?”) is not in it. The Help button is theatre.

Source: Nielsen, J. (1994). Enhancing the explanatory power of usability heuristics. ACM CHI ’94, pp. 152–158. The exact wording follows the canonical Nielsen Norman Group articulation, with examples added.

vii.Two famous lists

Norman’s principles, Nielsen’s heuristics: different jobs.

Students confuse these constantly. They are not the same kind of thing. One helps you build; the other helps you judge.

Norman’s principles · generative

Used to create designs. Describe how human cognition meets the designed world.

  • Origin: Cognitive science & psychology. Don Norman, late 1980s.
  • Domain: All designed objects: doorknobs, kettles, websites, cockpits.
  • Use: Generative - they help you decide what to build.
  • Count: Six (visibility, feedback, constraints, mapping, consistency, affordance).
  • Example use: “The save button has no visible feedback when clicked. Let’s add a state change and a confirmation toast.”

Nielsen’s heuristics · evaluative

Used to judge existing designs. Empirically derived from real usability problems.

  • Origin: Empirical analysis of 249 usability problems. Nielsen, 1994.
  • Domain: Specifically interactive computing systems.
  • Use: Evaluative - they help you find what is broken.
  • Count: Ten (above).
  • Example use: “This page violates Heuristic 1 (visibility of system status). The user clicks submit and nothing visible happens.”

The takeaway: in your Phase 2 design rationale you will cite Norman when justifying choices. In your Phase 1 and Phase 3 evaluations you will cite Nielsen when judging existing or proposed interfaces. Many concepts overlap (consistency appears in both lists) but the verb is what matters: are you making something, or marking something?

viii.The laws

Two laws every designer should know by name.

Beyond principles and heuristics, two empirical laws from cognitive psychology have travelled into HCI and stayed there. They are predictive: given some inputs, they tell you what users will do. Use them when justifying design choices about menu length, button size, or navigation depth.

H

Hick’s Law (Hick-Hyman, 1952)

The time to make a decision grows logarithmically with the number of choices. Roughly: T = b · log2(n + 1), where n is the number of options.

Practical takeaway: doubling your menu items doesn’t double the decision time, but adding any items always slows decisions. When users hesitate, fewer choices help.

Example · iOS share sheet shows just 4–6 most-used apps at the top, with a “more” button to reveal the rest. Hick’s Law made operational: collapse the long tail.
Use it for: justifying primary navigation depth, dropdown length, button group sizes, recommendation list cuts.
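The formula turns directly into a quick estimator. A sketch, with b = 0.2 seconds per bit as an assumed constant - real values must be measured for a given interface:

```python
import math

def hick_decision_time(n_options: int, b: float = 0.2) -> float:
    """Hick's Law: estimated seconds to choose among n_options
    equally likely choices. b is an assumed empirical constant."""
    return b * math.log2(n_options + 1)
```

Doubling a menu from 4 to 8 items raises the estimate from about 0.46 s to about 0.63 s - slower, but nowhere near doubled.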
F

Fitts’s Law (1954)

The time to point at a target depends on its size and distance. Big, near targets are fast; small, far targets are slow. Bigger and closer = faster.

On mobile, this means thumb-reachable zones (the lower half of the screen) are golden. On desktop, screen edges and corners are effectively infinite targets - the cursor stops there, so their width is unbounded in Fitts terms. That is why macOS puts the menu bar at the very top.

Example · Material Design specifies a 48dp minimum touch target; Apple’s HIG specifies 44×44 points. Anything smaller forces precision the thumb can’t deliver. Fitts’s Law in practice.
Use it for: button sizing on mobile, primary action placement, designing for one-handed use.
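Fitts’s Law is usually written MT = a + b · log2(D/W + 1) (the Shannon formulation). A sketch with assumed device constants a and b - in practice you calibrate them per device and input method:

```python
import math

def fitts_time_ms(distance: float, width: float,
                  a: float = 50.0, b: float = 150.0) -> float:
    """Fitts's Law: estimated milliseconds to hit a target of size
    `width` at `distance` (same units, e.g. pixels)."""
    return a + b * math.log2(distance / width + 1)
```

A 48 px button 200 px away is hit faster than a 24 px button 400 px away - which is exactly why primary actions should be big and near the thumb.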

These two laws plus the ~4-chunk limit of working memory (Cowan’s revision of Miller’s famous 7±2) give you a quantitative toolkit. Most design intuitions can be back-justified with one of them. When evaluating a design, ask: are we asking users to hold too much in memory (Miller/Cowan)? Too many choices at once (Hick)? Targets too small or far (Fitts)?

ix.The shadow side

Dark patterns: principles abused.

Everything you have learned can be inverted to manipulate users instead of help them. The term dark patterns was coined by Harry Brignull in 2010. Recognising them is part of being a responsible designer - and increasingly part of being legally compliant, as European Union and California regulations now ban several of these explicitly.

Confirmshaming

The opt-out option is worded to make the user feel guilty. “No thanks, I don’t want to save money.”

Spotted in: Newsletter signup popups across e-commerce. Users learn to ignore them or, worse, click yes out of social pressure.

Roach motel

Easy to get into, hard to get out. Signing up takes 30 seconds; cancelling takes 30 minutes, three confirmations, and a phone call.

Spotted in: Streaming services, gym memberships, telecoms. The asymmetry is the point.

Pre-checked boxes

Marketing consent or paid extras checked by default. The user must uncheck to opt out - betting they won’t notice.

Spotted in: Booking flows, account creation, purchase confirmation. Now illegal under GDPR; companies still try.

Hidden costs

Final price reveals fees only at the last step. By that point, the user has invested too much effort to start over.

Spotted in: Airline bookings, ticket marketplaces. “Service fee + tax + processing + handling” can add 30%.

Disguised ads

Ads styled to look like content. The line between editorial and commercial blurs deliberately.

Spotted in: Search engine results, social feeds, news sites. Required to be labelled, often barely.

Forced continuity

Free trial that silently converts to paid subscription. No reminder before charging. The card on file makes friction one-sided.

Spotted in: Subscription services. EU now requires reminder emails before auto-renewal; the US is catching up.

Privacy zuckering

Trick users into sharing more than they meant to. Default to public, demand multiple confirmations to make private. Named for Facebook’s historical pattern.

Spotted in: Social platform onboarding, “import contacts” prompts.

Bait and switch

The user expects one thing to happen but a different action occurs. The classic Windows 10 “close this dialog” X that started the upgrade install.

Spotted in: Install dialogs, modal close buttons that confirm rather than dismiss.

Why study these in a design course? Because students will be asked, professionally, to implement them. Recognising the pattern means you can name the cost (regulatory risk, brand damage, user trust) and offer an alternative (the equivalent goal achieved through honesty). The interface you build will be measured by whether you used your skills for the user or against them.

Worth knowing
The European Union now legally bans several dark patterns by name.

The EU’s Digital Services Act (2022) and the Unfair Commercial Practices Directive specifically prohibit pre-ticked boxes for paid extras, hidden subscription auto-renewals, and confirmshaming on cookie consent. The US Federal Trade Commission has issued similar rules. Designers need to know dark patterns not only for ethical reasons but also for legal compliance.

Part threeProcess & methods
x.The process

User-centred design: a loop, not a line.

UCD is the process this entire course teaches. It is defined by ISO 9241-210:2019 as four iterative activities. Note the word iterative: you don’t do them once. You do them, find problems, do them again. Your project mirrors this loop directly: Phase 1 understand and specify, Phase 2 design, Phase 3 evaluate and revise.

[Sketch: the user-centred design loop, ISO 9241-210 - 01 understand context → 02 specify requirements → 03 produce designs → 04 evaluate against requirements, and around again]
01
Understand context of use

Who are the users? What are their goals? Where, when, with what device, under what constraints? Methods: interviews, observation, contextual inquiry.

02
Specify requirements

Translate context into specific, measurable requirements. Personas, scenarios, usability targets mapped to ISO 9241-11 goals.

03
Produce design solutions

Sketches, wireframes, prototypes. Low-fidelity first to test ideas cheaply; higher fidelity later when ideas survive.

04
Evaluate against requirements

Heuristic evaluation, usability testing with real users, think-aloud protocol, SUS scoring. Findings feed back into stage 1 or 2.

The loop continues until the requirements are met. In a commercial product, that means continuously, for the lifetime of the product. In your course project, it means three rounds: research, design, evaluate. You will close the loop visibly in your Phase 3 reflection.

The four guiding principles of UCD

ISO 9241-210 also gives us four principles that underpin the activities themselves:

Source: ISO 9241-210:2019, “Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems”.

xi.Interaction types

How users engage with systems.

Rogers, Sharp and Preece distinguish five fundamental interaction types - the verbs of human-computer interaction. A single product usually combines several. Knowing which type dominates helps you choose the right design idiom.

Instructing

The user tells the system what to do, command-style. Fast for experienced users; high precision; needs the user to know what to ask.

Examples: Terminal commands, keyboard shortcuts, “Hey Siri, set a 5-minute timer”.
Conversing

Dialogue back and forth. The system asks, the user answers, refines, corrects. Mental model: the system is an interlocutor.

Examples: Bank chatbots, ChatGPT, “OK Google, find a vegetarian restaurant nearby”.
Manipulating

Direct manipulation of objects on screen: drag, drop, resize, swipe. Mental model: things in the interface behave like physical things.

Examples: Figma canvas, drag-to-rearrange in iOS, Tinder swipe.
Exploring

Moving through a space - physical or virtual - to find things. Spatial memory carries information.

Examples: Google Maps, Wikipedia rabbit holes, walking through Apple Vision Pro spatial UI.
Responding

The system initiates; the user responds (or doesn’t). Notifications, suggestions, prompts. Power: gets attention. Risk: annoys.

Examples: Push notifications, Spotify Daylist suggestions, fitness tracker reminders.

Most modern apps blend types. WhatsApp conversing + responding + manipulating (image edits). Google Maps exploring + conversing (search) + responding (traffic alerts). When designing, name the dominant type for each task - and ask whether the interaction idiom you’ve chosen actually matches it.

xii.Interface families

The families of interfaces you’ll meet.

Not all interfaces are screens with buttons. The history of HCI is partly a history of new interface paradigms, each opening new use cases and demanding new design rules. Here are the major families, with the example you probably have in your pocket.

CLI · 1960s
Command-line

Type a command, get a response. Powerful, fast for experts, terrible for beginners. Discoverability is zero - you must already know the command.

Example: a developer terminal, git commands.

GUI · 1984
Graphical (WIMP)

Windows, Icons, Menus, Pointer. The desktop metaphor. Made computers accessible to non-experts.

Example: macOS, Windows, every desktop OS.

Mobile · 2007
Touchscreen mobile

Multi-touch, gestures, contextual. Apps replace files; gestures replace menus. The iPhone redefined the genre.

Example: any app on your phone right now.

Voice · 2014+
Voice user interfaces

Spoken commands, conversational responses. Hands-free, eyes-free. New design challenges: no screen to inspect, error recovery is harder.

Example: Siri, Alexa, Google Assistant.

XR · 2016+
Augmented & virtual reality

3D space, head-tracked, sometimes hand-tracked. Spatial computing breaks free from the rectangle. Still finding its native use cases.

Example: Apple Vision Pro, Meta Quest.

Wearable
Wearables & on-body

Watches, fitness bands, smart rings. Tiny screens or no screens. Information glance-able; interaction minimal.

Example: Apple Watch, Garmin, Oura ring.

Conversational
Chat interfaces

Text-based dialog with a system. Old (IRC bots) and now newly central with LLM chat. Mental model: a person you message.

Example: ChatGPT, Claude, customer-service chatbots.

Ambient
Ambient & tangible

Computing embedded in the environment or in physical objects. Ideally invisible; you interact through everyday things.

Example: smart-home lighting, Nest thermostat.

Each interface family demands a different toolkit. A voice interface cannot apply Heuristic 8 (aesthetic and minimalist design) literally - there is no screen to declutter. But the spirit applies: don’t make the user listen to options that aren’t relevant. Heuristics translate; interfaces evolve.

xiii.Methods reference

The methods, at a glance.

A compact reference for the main HCI methods you’ll encounter in this course. Pick the one that matches your stage, your question, and your resources. Several of these will appear later in the textbook chapters on data gathering and experimental research.

Method · When to use · What you get · Time cost
Heuristic evaluation Early or late, no users available, fast feedback on an existing design or prototype. List of usability violations with severity ratings. ~1 day per evaluator
Cognitive walkthrough Early design, focused on learnability for first-time users. Walk through specific tasks asking “will the user know what to do?” Step-by-step list of confusion points. ~half-day per task
Semi-structured interview Phase 1 of any project. Understanding goals, frustrations, current behaviours. Qualitative themes, quotes, persona material. 30–60 min per participant
Contextual inquiry Understanding real use, in real environments. Field observation plus questioning. Rich qualitative data on actual behaviour vs reported behaviour. 2–4 hours per session
Survey / questionnaire Many participants, standardised questions, statistical claims. Quantitative summaries; hard to get depth. ~5 min per participant; analysis longer
Usability test (think-aloud) Phase 3 of a project. Watching real users use a prototype, narrating their thoughts. Specific failure points, time on task, error counts. 45–60 min per participant
System Usability Scale (SUS) After a usability test. 10 standard questions producing a 0-100 score. Single comparable usability score. ~3 min per participant
A/B test Live product, comparing two variants on real users at scale. Statistical evidence one variant outperforms the other on a metric. days–weeks of live traffic
Diary study Behaviour over time; long-term feature use; emotional context. Longitudinal qualitative data; usage patterns. days–weeks per participant

A practical guideline: use heuristic evaluation first when no users are available; it’s the cheapest way to find the most problems. Then move to interviews to understand context, then to usability testing once you have a prototype. Surveys are most useful late, to validate findings at scale.

xiv.Heuristic evaluation

Heuristic evaluation: the method itself.

Knowing the heuristics is half the job. The other half is running the evaluation properly - the steps, who does it, how they aggregate, and how their findings are written up. Your individual assignment is one full evaluation; Phase 1 of the team project is another. Both expect the methodology below.

Why the method works: the Nielsen-Landauer curve

Nielsen and Landauer (1993) found that the proportion of usability problems detected grows with the number of evaluators along a saturating exponential curve: Found(i) = N(1 − (1 − λ)^i), where N is the total number of problems and λ (about 0.31 in their data) is the share a single evaluator finds. One evaluator catches roughly a third of the problems; three to five catch about 70–85%; ten catch over 95%. Past five, returns diminish sharply - additional evaluators mostly re-find issues the first few already reported.

The result has a sharp managerial implication: if you can afford five expert evaluations on your prototype, you can probably skip the first round of expensive user testing. That is why heuristic evaluation became the standard early-stage usability method.
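The Nielsen–Landauer model is commonly written Found(i) = N(1 − (1 − λ)^i). A quick sketch, taking λ = 0.31 as the assumed per-evaluator detection rate from their data:

```python
def proportion_found(evaluators: int, lam: float = 0.31) -> float:
    """Nielsen-Landauer model: expected share of all usability
    problems found by a given number of independent evaluators."""
    return 1 - (1 - lam) ** evaluators
```

`proportion_found(5)` is about 0.84, while going from five to ten evaluators adds only ~13 percentage points - diminishing returns in one line.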

The five steps

1
Each evaluator works alone

Independence is essential. Two evaluators discussing as they go contaminate each other’s findings. They will rediscover the same things, miss the same blind spots, and the “agreement” will be a single observation duplicated.

For your project: each team member evaluates the system alone first, takes their own notes, and only later compares with teammates.

2
Walk the system task by task

Don’t evaluate “in general”. Pick the realistic tasks a user would do (e.g. “register for a course”, “view my grades”, “reset my password”) and walk through each one screen by screen.

For each screen, hold each of Nielsen’s 10 heuristics in mind. Most won’t apply to most screens; that’s normal.

3
Document each violation completely

For each issue found, write five things:

  • Heuristic violated (one of the 10, by number and name)
  • Location (which screen, which element - with a screenshot)
  • Description of what is wrong, in concrete terms
  • Severity rating (0–4, see next section)
  • Proposed fix (specific and feasible, not “make it better”)
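A finding stays complete when its five fields are enforced by a small record type. A sketch for a team’s notes (the field names are our own, for illustration):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One heuristic-evaluation finding: the five required fields."""
    heuristic: int      # 1-10, Nielsen's numbering
    location: str       # screen and element (plus a screenshot ref)
    description: str    # what is wrong, in concrete terms
    severity: int       # 0-4 on Nielsen's scale
    proposed_fix: str   # specific and feasible

f = Finding(
    heuristic=1,
    location="Checkout screen, Submit button",
    description="No visible response for 3 s after click",
    severity=3,
    proposed_fix="Disable the button and show a spinner immediately",
)
```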
4
Aggregate across evaluators

Once everyone has finished alone, the team merges findings. Issues spotted by multiple evaluators get higher confidence. Disagreement on severity is normal - discuss until consensus, or take the median.

Your Phase 1 deliverable is one consolidated list, not three separate lists.

5
Identify dominant patterns

The deliverable expects a paragraph on patterns. Look across all your violations: do five of them all relate to feedback (Heuristic 1)? Then your dominant pattern is “the system rarely tells users what is happening”.

Patterns matter more than individual issues, because patterns guide redesign priorities for Phase 2.

A common mistake: treating heuristic evaluation as a brainstorm of complaints. It is not. Each finding must be tied to a specific heuristic with a specific reason. “The colours are bad” is not an evaluation finding. “Heuristic 8 violated - the page has 17 sidebar widgets competing with body content; visual noise drowns the primary task” is.

Source: Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of ACM INTERCHI ’93, pp. 206–213. And: Nielsen Norman Group, How to conduct a heuristic evaluation.

xv.Severity

Rating severity, properly.

Nielsen’s 0–4 scale is not a feeling. It is a function of three sub-factors: frequency (how often does the user hit it?), impact (how bad is it when they do?), and persistence (does it stay annoying or do users adapt?). A rating of 3 needs at least two of these high; a rating of 4 needs all three.

0
Not a problem

Disagree it’s a usability issue at all.

1
Cosmetic

Need not be fixed unless extra time available.

2
Minor

Fixing this should be given low priority.

3
Major

Important to fix; high priority.

4
Catastrophic

Imperative to fix before release.

Source: Nielsen, J. (1995). Severity ratings for usability problems. Nielsen Norman Group.

Practice tool

Pick a real-world violation and rate it. The tool computes the severity from your three sub-factor judgements and explains why.

Practice: rate this violation

A user tries to register for a course on Ritaj. After clicking “Register”, nothing visibly happens for 3 seconds. The user clicks again. The course is registered twice and the user must contact admin to fix it.
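The combination rule stated above (“a rating of 3 needs at least two of these high; a rating of 4 needs all three”) can be sketched as a function. The cutoff for “high” and the low-end mapping are our own illustrative assumptions:

```python
def severity(frequency: int, impact: int, persistence: int) -> int:
    """Nielsen severity (0-4) from three sub-factors, each rated
    0 (low) to 4 (high). Rule from the text: all three high -> 4;
    two high -> 3. The "high" cutoff (>= 3) and the low-end
    mapping are assumptions."""
    highs = sum(f >= 3 for f in (frequency, impact, persistence))
    if highs == 3:
        return 4  # catastrophic
    if highs == 2:
        return 3  # major
    if highs == 1:
        return 2  # minor
    if max(frequency, impact, persistence) > 0:
        return 1  # cosmetic
    return 0      # not a usability problem
```

Try rating the Ritaj scenario above with it: judge each sub-factor 0–4, then see where the rule lands.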

xvi.Interviewing users

Conducting user interviews well.

The Phase 1 deliverable expects two to four exploratory interviews per team. The quality of your personas, scenarios, and requirements depends entirely on the quality of those interviews. Bad interview, bad data, bad design. Here is the method.

Before the interview

The interview itself: focus on stories, not opinions

The single most important interviewing principle: ask about specific past experiences, not generic preferences. Opinions are what people think they should say. Stories are what they actually did.

Bad question: “Do you find Ritaj easy to use?” (yes, no, fine - useless)

Good question: “Walk me through the last time you registered for a course. Where were you, what device were you using, what happened?”

Specific stories surface real workarounds, real frustrations, and real moments of success. They also surface things the participant hadn’t thought to mention.

Probing techniques

After each substantive answer, follow up. The richest data is in the second and third answer, not the first.

After the interview

Sanity check
The most common interview mistake.

Asking “would you use a feature that…” questions. Users will say yes to almost anything hypothetical. The 1986 Sony Walkman study famously found that focus-group participants overwhelmingly chose yellow Walkmans. When offered a free Walkman on the way out, almost all picked black. People are bad at predicting their own behaviour. Ask about what they did, not what they would.

xvii.Usability testing & SUS

Watching real users use your prototype.

Phase 3 of your project is a usability test. Each team runs at least three participants through realistic tasks on your interactive prototype, with the participants narrating their thoughts as they go. Done well, it’s the most informative method we have. Done badly, it’s theatre.

The think-aloud protocol

Developed by Ericsson and Simon (1980s) and adapted for HCI. The participant performs the tasks while continuously verbalising their thoughts: “OK, I’m looking for the register button… I see something that says enrol, that’s probably it… clicking… nothing happened, why?”

The narration reveals the user’s mental model in real time. Where they hesitate, you have a usability problem. Where they say one thing but click another, you have a label-mismatch problem. Where they go silent, they are confused.

How to run a session

The System Usability Scale (SUS)

Developed by John Brooke in 1986 and now the most-used usability questionnaire in the world. Ten standard 5-point Likert items, alternating positive and negative phrasing, produce a single number from 0 to 100. The math: odd-numbered (positive) items contribute their response minus 1; even-numbered (negative) items contribute 5 minus their response; the sum of the ten contributions is multiplied by 2.5.
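Brooke’s standard scoring fits in a few lines. Odd items are positively worded, even items negatively:

```python
def sus_score(responses):
    """SUS: `responses` is the ten 1-5 Likert answers in
    questionnaire order. Odd items contribute (response - 1),
    even items (5 - response); the sum is scaled by 2.5 to 0-100."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly ten responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5
```

A participant who strongly agrees (5) with every positive item and strongly disagrees (1) with every negative one scores 100; answering neutral (3) throughout scores 50.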

Bangor, Kortum and Miller (2008) gave us the interpretation grades:

Always report sample size with SUS. “SUS = 78” means little. “SUS = 78 (n = 4 participants), interpretation: Good (Bangor et al., 2008)” is what your Phase 3 report needs.

Sources: Brooke, J. (1986). SUS: A “quick and dirty” usability scale. Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale.

xviii.Accessibility

Designing for everyone: accessibility and inclusive design.

About 16% of the world’s population - roughly 1.3 billion people - lives with significant disability (WHO, 2023). Designing only for the “average” user is not a neutral choice; it’s a choice to exclude. The Web Content Accessibility Guidelines (WCAG) 2.2 organise accessibility around four principles: POUR.

P
Perceivable

Users must be able to perceive the information, regardless of their senses. Includes captions, alt text, sufficient colour contrast.

In practice: minimum 4.5:1 contrast for body text; alt text on every image.
O
Operable

Users must be able to operate the interface, regardless of input method. Includes full keyboard navigation, large enough touch targets.

In practice: every action reachable by Tab key; 44×44 pixel minimum touch targets.
U
Understandable

Content and operation must be understandable. Plain language, predictable behaviour, helpful error messages.

In practice: clear labels in user’s language; errors explain how to fix.
R
Robust

Content must work with current and future user-agents, including assistive technologies (screen readers, switch controls).

In practice: semantic HTML; proper ARIA roles where needed.

Specifically for your project

The Phase 2 rubric requires accessibility to be addressed meaningfully - not as an afterthought. For a Birzeit student project, that means at minimum:

Keyboard navigation

Every interactive element reachable by Tab. Visible focus indicator. Logical tab order matching visual order.


Screen reader labels

Buttons need text or aria-label. Form fields need associated labels. Icons-only buttons fail unless labelled.

RTL for Arabic

Text aligns right; UI mirrors (back button on right, navigation reversed). Mix LTR-RTL carefully when bilingual.

Colour contrast (WCAG 2.2)

4.5:1 minimum for body text, 3:1 for large text and UI components. Use WebAIM’s contrast checker to verify.

The wider concept is universal design (التصميم الشامل): designing systems to be usable by all people to the greatest extent possible, without need for adaptation - a term coined by architect Ronald Mace. Design once, work for everyone. The famous example is the kerb cut - the slope between pavement and road, originally fought for by wheelchair users, that turned out to be useful to parents with strollers, travellers with suitcases, delivery workers with trolleys, and skateboarders. Designing for an edge case made the centre better.

Worth knowing
Captions weren’t made for the deaf - but they ended up being for everyone.

Closed captions were originally developed for deaf and hard-of-hearing viewers in the 1970s. Today, surveys find that over 80% of viewers who use captions are not deaf: they watch with sound off in public, in noisy environments, while learning a language, or because they understand better with text reinforcing speech. Designing for accessibility ended up serving the entire audience.

Part fourPractice
xix.Worked evaluations

Worked heuristic evaluations on apps you use every day.

Three short evaluations on familiar apps. The point is to see how the same heuristics surface in different shapes, and how each violation gets documented for a report. Switch between tabs; each evaluation walks one task.

System evaluated: WhatsApp (mobile)
Task walked: Sending a photo to a group chat
Note: Illustrative - based on common patterns
WhatsApp chat · photo upload

i · Photo bubble overlaid with a static spinner. No percentage, no estimated time, no “cancel” affordance. On a slow network the user has no idea whether to wait or retry.

ii · Send button shows no state. Compare with iOS Mail, where the send button briefly shows “Sending…” before transitioning to a success or failure state.

iii · The header gives no “active upload” indicator. If the user navigates away from this chat while uploading, they lose the only visible progress hint.

Use this sketch to walk through the violations below. Each one corresponds to a specific element of this screen.

i
H7 · Flexibility & efficiency
No way to schedule a message for later

A user wanting to wish a friend happy birthday at midnight has to either remember to do it then, or send it early. Competitor apps (Telegram, even iMessage on recent iOS) support scheduled messages. WhatsApp does not.

Fix: add long-press on send button → schedule. Standard pattern across messaging apps.
SEV 2
ii
H3 · User control & freedom
“Delete for everyone” window arbitrary and inconsistent across versions

The window in which a user can unsend a message has changed multiple times (originally 7 minutes, then 1 hour, then around 2 days). Long-time users don’t know the current limit. Once it expires, no recovery.

Fix: show countdown timer on long-press menu (“Delete for everyone - 47h 22m left”). Make the limit visible and predictable.
SEV 2
iii
H6 · Recognition not recall
Forwarded message context is lost

When a friend forwards you a long article from a group, the new message shows only “Forwarded” with no source. The user must remember (or ask) where it came from. Misinformation thrives in this gap.

Fix: show original sender (with permission) or original date. WhatsApp added “Forwarded many times” in 2019 - partial step in the right direction.
SEV 3
iv
H1 · Visibility of system status
Photo upload progress unclear in poor connections

On a slow network, a sent photo shows a clock icon but no percentage, no estimated time, and no obvious way to cancel mid-upload. The user is left guessing whether to wait or retry.

Fix: circular progress ring with percentage; tap to cancel. Standard pattern in iOS Mail, Gmail.
SEV 2

Notice that WhatsApp gets many things right - message ticks, voice messages with waveforms, end-to-end encryption indicators. A heuristic evaluation finds violations, but a balanced report acknowledges what works.

System evaluated · Instagram (mobile)
Task walked · Finding a saved post from three weeks ago
Note · Illustrative - based on common patterns
Instagram home feed · mixed content

i · Friend post, suggested account, sponsored ad - visually almost identical. Only thin labels distinguish them. Users are trained to ignore the labels.

ii · No top-level “Saved” entry. The bottom tab bar shows home, search, post, reels, profile. To reach saved posts: profile (5th tab) → menu icon → Saved → All. Four taps deep.

iii · Gestures on this screen are inconsistent with the “reels” tab and the “stories” circles at the top - three layouts, three interaction grammars.

Use this sketch to walk through the violations below. The visual confusion in the feed is itself the H8 violation.

i
H6 · Recognition not recall
Saved posts are buried four taps deep

Profile → menu → Saved → All posts. New users do not know they can save posts, much less how to find them again. The feature is invisible to recognition; you have to recall it exists.

Fix: add “Saved” as a top-level tab on the profile, alongside posts and reels. The capability already exists - only its visibility is the problem.
SEV 3
ii
H8 · Aesthetic & minimalist design
The home feed mixes posts, reels, ads, and suggestions confusingly

What used to be a chronological list of friends is now an algorithmic mosaic. Posts from people you follow are interspersed with reels, sponsored content, and “suggested for you”. The visual treatment doesn’t always distinguish them clearly.

Fix: stronger visual treatment for sponsored content; user-controllable balance of friends vs algorithmic content. Instagram has been criticised for this since 2016.
SEV 3
iii
H3 · User control & freedom
Cannot turn off “Suggested for you” permanently

You can “Snooze for 30 days”. After that it returns. There is no permanent off switch. The user’s choice is overridden every month.

Fix: add a permanent off switch in settings. Trust the user’s preference.
SEV 2
iv
H4 · Consistency
Stories vs Reels vs Posts use different gestures

Stories: tap to advance, swipe up for actions. Reels: swipe up to skip, double-tap to like. Posts: scroll, double-tap to like. Three formats, three interaction grammars. New users frequently mistap.

Fix: unify gestures where possible. If three formats need three behaviours, make the differences visible, not hidden.
SEV 2

Instagram is built and owned by Meta and has hundreds of UX researchers. Many of these “violations” are deliberate trade-offs against business goals (engagement, ad revenue). A good evaluator names the violation; understanding the trade-off comes next.

System evaluated · Talabat (food delivery, mobile)
Task walked · Ordering shawarma to be delivered to a Birzeit dorm
Note · Illustrative - based on common patterns
Talabat order tracking · stale ETA

i · The orange info card claims “3 minutes” but the small grey line below admits the ETA is 19 minutes old. The big number is wrong; the truth is in the small print.

ii · The driver pin is shown in a fixed position on the map even though the location data is stale. False precision. The pin should be greyed out or surrounded by an uncertainty halo when data is old.

iii · No way to refresh, no way to message the driver in-app. The only escape hatch is the “Call” button - which puts the driver on the phone while they’re riding a bike.

Use this sketch to walk through the violations below.

i
H1 · Visibility of system status
Driver location updates lag behind reality

The map shows the driver near your location, but the food has already been at your door for 4 minutes. The arrival estimate has been stuck at “3 minutes” for the last 20 minutes. Status is shown but not accurate, which is worse than nothing.

Fix: if location data is older than 60 seconds, show “last updated 2 min ago”. Honesty about uncertainty beats false certainty.
SEV 3
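The fix just described reduces to a small rule. A hypothetical sketch in Python (function name invented; the 60-second threshold and the wording come from the fix text above):

```python
def location_status(seconds_since_update: float) -> str:
    """Label driver-location freshness: trust data under 60 s,
    otherwise admit its age instead of showing a false live position."""
    if seconds_since_update <= 60:
        return "live"
    minutes = int(seconds_since_update // 60)
    return f"last updated {minutes} min ago"

print(location_status(45))    # live
print(location_status(1140))  # last updated 19 min ago
```

The point is honesty, not precision: past the threshold, the UI stops asserting a position and starts reporting its own uncertainty.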
ii
H5 · Error prevention
Wrong delivery address common because of vague map pins

Birzeit has many buildings without precise street addresses. Users drop a pin on a map; pins drift; drivers end up at the wrong gate. No system check warns the user that the pin location is ambiguous.

Fix: after pin drop, show a satellite view with a circle of likely buildings; ask user to confirm building. Offer free-text note for delivery instructions.
SEV 3
iii
H7 · Flexibility & efficiency
Frequent reorders require full re-customisation

If you order “chicken shawarma, no garlic, extra pickles, double bread” every Tuesday, you must select all four customisations every time. The app remembers past orders but not their customisations.

Fix: “Reorder” button on past orders that copies all customisations. Standard in Uber Eats, Deliveroo.
SEV 2
iv
H9 · Error recovery
“Restaurant unavailable” with no explanation or alternatives

Selecting a restaurant during peak hours sometimes shows “currently unavailable”. No explanation (closed? overwhelmed? out of stock?) and no suggestion (similar restaurants nearby? open in 20 minutes?).

Fix: show specific reason and concrete alternatives. “Closed until 6pm. Try these similar nearby: …”
SEV 2

Local services like Talabat face context-specific challenges (vague addresses, peak-hour load) that global apps often handle better. Local context is also where a Birzeit team can add the most value in a redesign.

xx.Knowing your users

Personas, scenarios, requirements: turning interviews into design targets.

The trio that turns raw interview transcripts into specific, testable design targets. A persona is who you’re designing for. A scenario is how they’d use the system. A requirement is the measurable thing your design must achieve. All three must be grounded in evidence, not invention.

Why personas exist

“Design for our students” is not a design target. There is no “the student”. There is a first-year who’s never registered for a course, a third-year power-user who wants shortcuts, a commuter who only uses the system on her phone in 15-minute corridor windows, an international student whose first language isn’t Arabic. Each has different goals, different constraints, different frustrations.

Personas force specificity. When you’re deciding whether to add a feature, the question stops being “would users want this?” (always yes, in a vague way) and becomes “would Lina use this?” (specific, answerable, often no).

Crucially: personas are evidence-based, not invented. The phrase “let’s say she’s called Sara and she’s 20” is the wrong starting point. Personas come from interview data. Each field cites the participants who support it.

A. The persona

Built from interviews P1, P2, P3, P4. Every claim cites a participant code.

LK

Lina Khoury

Second-year CS student who registers on her phone between classes

Demographics

19, second-year BSc CS, off-campus, 45-min commute from Ramallah (P1)

Tech literacy

Android user, prefers Arabic UI when offered, comfortable but not a power user (P1, P2)

Goals

Register without timing conflicts; finish in under 10 minutes; get advisor confirmation in the same flow (P1, P2, P4)

Frustrations

System times out on slow Wi-Fi; timetable hard to read on a phone; vague “course full” messages with no alternatives (P1, P2, P3)

Context of use

Mid-morning, between classes, on her phone; sometimes asks her cousin via WhatsApp when stuck (P1, P2)

Devices

Primary: Android phone (small screen). Secondary: friend’s laptop in the library when at home (P1)

“I just want to know quickly if a course is full or not. Then I can decide.” - P1, anonymised

B. A scenario for Lina

A scenario is a story, not a feature list.

Scenario 1 · Adding an elective between classes

It’s 10:45am on a Tuesday and Lina has 15 minutes between her algorithms tutorial and her next class. She opens the redesigned Ritaj on her phone. Her dashboard already shows three suggested electives that fit her timetable. She taps the first, sees its current capacity (“28 of 30”), and adds it. The system warns her that one elective in her plan now overlaps with the new one and offers to swap. She accepts the swap, confirms, and gets a push notification 90 seconds later: “Advisor approved. Schedule updated.” She closes the app and walks to her next class.

C. The measurable requirements that scenario implies

Each requirement is testable; each maps to one of the six ISO 9241-11 usability goals.

# · Requirement · ISO goal
R1 · Users can complete the “add elective” flow in under 4 minutes with at most 1 error. · Efficiency
R2 · The system displays current course capacity (e.g. “28 of 30”) on every course card; never just “available”. · Effectiveness
R3 · When a schedule conflict is detected, the system offers a specific resolution within 1 second. · Effectiveness
R4 · Users can return to and complete a registration session up to 24 hours later without re-doing earlier steps. · Memorability
R5 · All primary actions are reachable on a 360px-wide screen without horizontal scrolling. · Effectiveness
R6 · Error messages name the failed field and suggest a concrete fix (no generic “invalid input”). · Learnability
R7 · SUS score ≥ 70 across at least 3 think-aloud participants in Phase 3. · Satisfaction
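R7 commits the team to computing SUS correctly. A minimal sketch of the standard scoring (Brooke’s method: odd items contribute score − 1, even items 5 − score, the raw sum is scaled by 2.5); the participant responses are invented for illustration, not real data:

```python
def sus_score(responses: list[int]) -> float:
    """Score one participant's 10 SUS responses (each 1-5).
    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale 0-40 raw score to 0-100

# Invented responses for three participants, for illustration only
participants = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
    [5, 1, 4, 2, 4, 1, 5, 2, 4, 1],
    [3, 2, 3, 2, 4, 3, 4, 2, 3, 2],
]
scores = [sus_score(p) for p in participants]
mean = sum(scores) / len(scores)
print(f"SUS = {mean:.1f} (n = {len(scores)})")  # → SUS = 77.5 (n = 3)
```

Note the reporting format: the score is always accompanied by n, never stated alone.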
xxi.The project

The project, phase by phase.

Three phases mirroring the UCD cycle. Phase 1: understand and specify. Phase 2: design and prototype. Phase 3: evaluate and revise. Click each tab for what to deliver.

A note on marks

The team project described below is worth 30% of the course, distributed as Phase 1 (30% of the project), Phase 2 (35%), Phase 3 (35%). The individual heuristic-evaluation assignment - a 2–3 page report on a public website or app of your choice - is a separate piece of work worth 15% of the course on its own. It is not part of the project marks. The final presentation that defends Phase 3 is also separate, worth a further 15%. See the project specification document for current dates.

Due date · see project specification document
Weight · 30% of project

Single team-authored report, 5–7 pages plus appendix. The goal is to understand who the users are and what they need.

1
Team and topic

Half page. Members, working arrangements, chosen topic, scope statement, and why the problem matters.

no marks
2
Heuristic evaluation of the existing system

1–2 pages. Minimum 10 violations team-wide using Nielsen’s 10. Each: heuristic, screenshot, description, severity 0–4. Plus a paragraph on dominant patterns.

25 pts
3
Exploratory user interviews

1–2 pages plus appendix. Minimum 2 interviews (3–4 better), 20–45 min each. Signed consent form. Notes per interview, key findings.

20 pts
4
Personas

1–2 pages. Two personas grounded in interview data with full evidence trail. See part xx above.

20 pts
5
Scenarios

Half to one page. Three short narratives showing how personas would realistically use the redesigned system.

15 pts
6
Measurable usability requirements

Half page. 5–7 requirements as measurable targets, mapped to the six ISO 9241-11 goals.

15 pts

Plus 5 points for clear writing and structure within the page limit. Total: 100.

Due date · see project specification document
Weight · 35% of project

Goal: produce a design that addresses the requirements from Phase 1. Decisions must be justified with reference to course concepts, not intuition alone.

1
Design rationale document

3–4 pages. Conceptual model; how the design addresses each persona’s goals; references to Norman’s principles in 3+ places; references to 5+ of Nielsen’s heuristics; cognitive-load considerations; accessibility (keyboard, screen reader, RTL Arabic, WCAG 2.2 contrast).

45 pts
2
Low-fidelity wireframes

Paper sketches or digital wireframes of at least 8 screens covering all three Phase 1 scenarios. Annotations explaining key decisions.

15 pts
3
Interactive prototype

Built in Figma, Adobe XD, Sketch, or equivalent. Functional enough for a user to complete the three scenario tasks. Shareable link plus PDF export. Realistic content, not placeholder text.

15 pts
4
Accessibility addressed meaningfully

Not as an afterthought. Documented contrast ratios, RTL behaviour, keyboard order, alternative text strategy.

10 pts

Plus 15 points for a coherent conceptual model. Total: 100.
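“Documented contrast ratios” means computed ones, not eyeballed ones. A minimal sketch of the WCAG 2.x formula (relative luminance of each sRGB colour, then (L₁ + 0.05)/(L₂ + 0.05)); the colour values are illustrative, not taken from any real design:

```python
def relative_luminance(hex_colour: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex colour like '#1a73e8'."""
    hex_colour = hex_colour.lstrip("#")
    def channel(c: int) -> float:
        c = c / 255
        # sRGB linearisation per the WCAG definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(int(hex_colour[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05); AA normal text needs >= 4.5."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0 (the maximum)
```

Run every text/background pair in the prototype through this and record the numbers; WCAG 2.2 AA requires at least 4.5:1 for normal text and 3:1 for large text.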

Due date · see project specification document; defended in final presentation
Weight · 35% of project

Evaluate the prototype against the requirements from Phase 1, and demonstrate the ability to close the UCD loop by iterating.

1
Usability test report

3–4 pages plus appendices. Test protocol; at least 3 participants distinct from Phase 1; think-aloud protocol; signed consent; quantitative measures (task completion, time, errors); SUS averaged across participants; qualitative findings organised by theme.

35 pts
2
Heuristic re-evaluation and comparison

Apply Nielsen’s 10 to the new prototype. Direct comparison table (original vs redesign). Severity ratings for remaining issues.

15 pts
3
Revision plan

One page. Prioritised list of what you would change next, with rationale. Shows you understand iterative design.

15 pts
4
Reflection

One page. What you learned by going through the full UCD loop. What was harder than expected. How real research differed from initial assumptions.

15 pts

Plus 20 points distributed across rigour of methodology and quality of qualitative analysis. Total: 100.
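Aggregating the quantitative measures from the usability test is mechanical but easy to get wrong (e.g. averaging task time over failed trials). A minimal sketch with invented trial data (participant codes, task name, and numbers are all illustrative):

```python
# Each trial: (participant, task, completed, seconds, errors) -- invented data
trials = [
    ("P5", "add elective", True, 212, 1),
    ("P6", "add elective", True, 187, 0),
    ("P7", "add elective", False, 300, 3),
]

completed = [t for t in trials if t[2]]
completion_rate = len(completed) / len(trials)
# Mean time is computed over successful trials only; a failed trial's
# time measures giving up, not performing the task.
mean_time = sum(t[3] for t in completed) / len(completed)
total_errors = sum(t[4] for t in trials)

print(f"completion {completion_rate:.0%}, "
      f"mean time {mean_time:.0f}s, errors {total_errors}")
```

Check each aggregate against the matching Phase 1 requirement (here, R1’s “under 4 minutes with at most 1 error”) and report pass/fail per requirement.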

Part five · Reference
xxii.Pitfalls

What loses marks.

These are what cost teams marks in past iterations. Pre-empt them.

Personas without evidence trails

Every claim about a persona must cite a participant code (P1, P2). Empty fields are honest; invented fields are dangerous.

All severities rated 3

Use the three sub-factors. A 3 needs at least two of frequency, impact, and persistence to be high; a 4 needs all three.
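The rule above can be written down as a tiny function. A hypothetical sketch, not part of any course tool: the 3 and 4 grades follow the rule exactly, while the mapping for lower grades is an assumed reading of the scale.

```python
def severity(frequency_high: bool, impact_high: bool,
             persistence_high: bool) -> int:
    """Map the three sub-factors to a severity grade.
    4 needs all three high; 3 needs at least two (per the rule above).
    The 1-2 grades are an assumed reading: one high factor -> 2, none -> 1.
    (Grade 0, 'not a usability problem', is decided before scoring.)"""
    highs = sum([frequency_high, impact_high, persistence_high])
    if highs == 3:
        return 4   # usability catastrophe
    if highs == 2:
        return 3   # major problem
    if highs == 1:
        return 2   # minor problem
    return 1       # cosmetic

print(severity(True, True, False))  # 3 -- only two sub-factors high
```

If every finding in your report comes out as 3, re-check the sub-factors one by one rather than the gut feeling.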

Heuristic violations that aren’t violations

“The colours are ugly” is not Heuristic 4 (consistency). Pick the most specific heuristic that fits, or don’t list it.

Scenarios that read like feature lists

Scenarios are stories. “At 10:45am, Lina opens Ritaj on her phone, wanting to add an elective…” - not bullets.

Convenience-sample interviews

Real users, not roommates. Mix of contexts (year, department, device, commuting status).

Unmeasurable requirements

Numbers, time, error rates. “Easy to use” is not a requirement. “Complete in under 4 min with ≤1 error” is.

Redesigning the whole system

You have 8 weeks. Three user journeys done well will score higher than ten done badly.

Inventing a persona

“Let’s say she’s called Sara and she’s 20” is the wrong starting point. Personas come from data, not before it.

Citing Norman where you mean Nielsen

Norman = generative (designing). Nielsen = evaluative (judging). Use the right list for the right activity.

Accessibility as one paragraph at the end

If the WCAG section reads like an afterthought, it loses points. Bake it into design decisions throughout.

SUS reported without sample size

“SUS = 78” means nothing without N. Always: “SUS = 78 (n = 4 participants), interpretation: Good (Bangor et al., 2008)”.

Reflection that’s generic

“We learned a lot about UCD” is empty. “We assumed users wanted X but found in interviews that they actually do Y because Z” is a real reflection.

Hypothetical interview questions

“Would you use a feature that…” gets meaningless yeses. Ask about specific past behaviour instead: “Walk me through the last time you…”.

Helping users during testing

If a participant gets stuck during a usability test, watch them struggle. The struggle is the data. Helping defeats the test.

xxiii.Glossary

Bilingual glossary.

Core vocabulary in English and Arabic. Use these in your team discussions and written reports.

Usability

The extent to which a system can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.

قابلية الاستخدام
User-centred design

An iterative design approach centred on real users and their tasks, defined by ISO 9241-210.

التصميم المتمحور حول المستخدم
Heuristic evaluation

An expert inspection method using rules of thumb to find usability problems before users see the system.

التقييم الإرشادي
Severity rating

A 0–4 score combining frequency, impact, and persistence of a usability problem.

تصنيف الخطورة
Persona

An evidence-grounded portrait of a user pattern, used to make design decisions concrete.

شخصية نموذجية
Scenario

A short narrative showing how a persona uses the system to achieve a goal.

سيناريو
Affordance

A visual property suggesting how an object can be used (a button looks pressable; a door handle looks pullable).

الإيحاء
Mapping

The relationship between controls and their effects (steering wheel right = car turns right).

المطابقة
Feedback

The system’s communication of what is happening or what just happened.

التغذية الراجعة
Conceptual model

The user’s mental picture of how the system works, ideally close to the designer’s model.

النموذج المفاهيمي
Mental model

The user’s working theory of how a system behaves, built from observation and experience.

النموذج الذهني
Gulf of execution

The gap between what a user wants to do and the actions the system provides for doing it.

فجوة التنفيذ
Gulf of evaluation

The gap between the system’s actual state and the user’s perception of it.

فجوة التقييم
Working memory

The system that holds and manipulates information for short periods. About 4 ± 1 chunks.

الذاكرة العاملة
Cognitive load

The total mental effort a task demands. Heavy load means slower, more error-prone behaviour.

العبء المعرفي
Think-aloud protocol

A usability-test method where participants verbalise their thoughts as they perform tasks.

بروتوكول التفكير بصوت مرتفع
Accessibility

Ensuring people with disabilities can perceive, understand, navigate, and interact with a system.

إمكانية الوصول
Universal design

Designing systems to be usable by all people, to the greatest extent possible, without need for adaptation.

التصميم الشامل
System Usability Scale (SUS)

A 10-question survey producing a 0–100 score representing overall perceived usability.

مقياس قابلية استخدام النظام
User experience (UX)

A user’s perceptions and responses from using or anticipating use of a product, including emotion and aesthetics.

تجربة المستخدم
Dark pattern

An interface designed to manipulate users against their interests, often by abusing their trust or attention.

النمط المظلم
xxiv.References

References.

Sources used in this handbook and across the course. Cite these when drawing on their ideas in your reports.

Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6), 574–594.
Brignull, H. (2010). Dark Patterns: Deception vs. Honesty in UI Design. A List Apart.
Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. W. Jordan et al. (Eds.), Usability Evaluation in Industry, pp. 189–194. Taylor & Francis.
Cooper, A. (1999). The Inmates Are Running the Asylum. Sams Publishing.
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215–251.
Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381–391.
Hick, W. E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11–26.
ISO 9241-11:2018. Ergonomics of human-system interaction, Part 11: Usability: Definitions and concepts.
ISO 9241-210:2019. Ergonomics of human-system interaction, Part 210: Human-centred design for interactive systems.
Leveson, N. G., & Turner, C. S. (1993). An investigation of the Therac-25 accidents. IEEE Computer, 26(7), 18–41.
Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
Nielsen, J. (1994). Enhancing the explanatory power of usability heuristics. In Proceedings of the ACM CHI ’94 Conference, pp. 152–158.
Nielsen, J. (1995). Severity ratings for usability problems. Nielsen Norman Group.
Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of ACM INTERCHI ’93, pp. 206–213.
Norman, D. A. (2004). Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books.
Norman, D. A. (2013). The Design of Everyday Things, Revised and Expanded Edition. Basic Books.
Portigal, S. (2013). Interviewing Users: How to Uncover Compelling Insights. Rosenfeld Media.
Pruitt, J., & Adlin, T. (2006). The Persona Lifecycle: Keeping People in Mind Throughout Product Design. Morgan Kaufmann.
Rogers, Y., Sharp, H., & Preece, J. (2024). Interaction Design: Beyond Human-Computer Interaction, 6th edition. Wiley.
W3C (2023). Web Content Accessibility Guidelines (WCAG) 2.2. World Wide Web Consortium.
World Health Organization (2023). Disability and health fact sheet.