
HCI Foundations: The Laws That Govern Every Click

Fitts's law, Hick's law, Miller's 7 plus or minus 2, cognitive load theory, mental models, and affordances: the science behind usable interfaces.

2025-07-15

Tags: Design/UX, Fitts's Law, Cognitive Load, Interaction Design

| Term | Definition |
| --- | --- |
| HCI | Human-Computer Interaction: the study of how people interact with computers and how to design systems that are effective, efficient, and satisfying to use |
| Fitts's Law | A predictive model stating that the time to move to a target is a function of the ratio of the distance to the target and the target's width: $T = a + b \cdot \log_2\!\bigl(\frac{D}{W} + 1\bigr)$ |
| Hick's Law | Decision time increases logarithmically with the number of choices: $T = b \cdot \log_2(n + 1)$ |
| Miller's Law | Working memory can hold roughly $7 \pm 2$ chunks of information at once |
| Cognitive Load | The total amount of mental effort being used in working memory. Divided into intrinsic (task complexity), extraneous (poor design), and germane (learning effort) |
| Mental Model | A user's internal representation of how a system works, which may differ from the actual implementation model |
| Affordance | A property of an object that suggests how it can be used (a button affords pressing, a slider affords dragging). Coined by Gibson, applied to design by Norman |
| Signifier | A perceivable cue that communicates where and how to act. Norman later distinguished signifiers from affordances: the affordance is the possibility, the signifier is the indicator |
| Index of Difficulty (ID) | In Fitts's law, the $\log_2\!\bigl(\frac{D}{W} + 1\bigr)$ term measured in bits, representing how hard a pointing task is |

What & Why

Human-Computer Interaction is the discipline that sits between psychology and engineering. It asks a deceptively simple question: how do we design systems that people can actually use?

The answer is not intuition. It is science. Over decades, researchers have distilled human behavior into quantitative laws that predict how fast someone can click a button, how long they will take to choose from a menu, and how much information they can juggle at once. These laws are not guidelines or opinions. They are empirical models backed by thousands of experiments.

Why should a software engineer care? Because every interface you build is a conversation with a human brain. If you violate Fitts's law, your buttons are too small or too far away. If you ignore Hick's law, your menus overwhelm. If you exceed Miller's limit, your forms confuse. Understanding these foundations turns UI design from guesswork into engineering.

This post covers the six pillars of HCI: Fitts's law, Hick's law, Miller's law, cognitive load theory, mental models, and affordances.

How It Works

Fitts's Law: Pointing and Targeting

Paul Fitts published his law in 1954 after studying human motor performance. The Shannon formulation (the most widely used) states:

$T = a + b \cdot \log_2\!\left(\frac{D}{W} + 1\right)$

Where $T$ is movement time, $D$ is the distance from the starting point to the center of the target, $W$ is the width of the target along the axis of motion, and $a$, $b$ are empirically determined constants for the device.

The key insight: making a target twice as wide has the same effect as moving it twice as close. This is why large buttons near the cursor are fast to click, and tiny icons in distant corners are slow.
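To make the tradeoff concrete, here is a small Python sketch (the constants $a$ and $b$ are illustrative defaults, not measured device values) showing that doubling the width has exactly the same effect on predicted time as halving the distance:

```python
import math

def fitts_time(distance, width, a=50.0, b=150.0):
    """Predicted movement time (ms) via the Shannon formulation.

    a (intercept, ms) and b (slope, ms/bit) are illustrative,
    not empirically measured device constants."""
    if width <= 0:
        raise ValueError("target width must be positive")
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

baseline = fitts_time(distance=400, width=20)   # small, distant target
wider    = fitts_time(distance=400, width=40)   # same spot, twice as wide
closer   = fitts_time(distance=200, width=20)   # same size, half the distance

# Doubling W and halving D both halve the D/W ratio, so the
# predicted times for "wider" and "closer" are identical.
print(f"{baseline:.0f} ms vs {wider:.0f} ms vs {closer:.0f} ms")
```

Because only the ratio $D/W$ enters the formula, the model cannot distinguish a bigger target from a nearer one.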

[Figure: Fitts's law, distance vs. target width. From the cursor at distance D, a small target (width W₁) has a high index of difficulty and is slow to hit; a large target (W₂) has a low ID and is fast.]

Design implications of Fitts's law:

  • Place frequently used actions close to where the cursor already is
  • Make clickable targets large enough (minimum 44px for touch, per WCAG)
  • Screen edges and corners have effectively infinite width (the cursor stops there), making them prime real estate
  • Pie menus outperform linear menus because every option is equidistant from the center

Hick's Law: The Cost of Choice

Hick's law (also called the Hick-Hyman law) quantifies decision time:

$T = b \cdot \log_2(n + 1)$

Where $n$ is the number of equally probable choices and $b$ is a constant (~150ms per bit for simple choices). The $+1$ accounts for the option of not choosing at all.

[Figure: Hick's law, decision time vs. number of choices for n = 2, 4, 8, 16, 32, showing logarithmic growth from roughly 237 ms to 735 ms.]

The logarithmic relationship means that going from 2 to 4 choices adds roughly the same delay as going from 16 to 32: each doubling costs about one extra bit. This is why categorized menus (where you first pick a category, then an item) can be faster in practice than flat lists, even though they require two clicks: each decision involves fewer options to scan.
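A quick Python sketch makes the flattening visible (b = 150 ms per bit is the commonly cited ballpark, not a measured value): each doubling of $n$ adds at most one extra bit, i.e. roughly $b$ milliseconds.

```python
import math

def hick_time(num_choices, b=150.0):
    """Predicted decision time (ms); b is ms per bit (~150 for simple choices)."""
    if num_choices <= 0:
        return 0.0
    return b * math.log2(num_choices + 1)

for n in (2, 4, 8, 16, 32):
    print(f"n={n:2d}: {hick_time(n):6.1f} ms")

# Each doubling of n adds at most b * log2(2) = one bit (~150 ms),
# so 32 choices cost only about 3x the time of 2 choices, not 16x.
```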

Important caveat: Hick's law applies to equally probable, unfamiliar choices. Expert users who know exactly what they want (e.g., keyboard shortcuts) bypass it entirely.

Miller's Law: The Limits of Working Memory

George Miller's 1956 paper "The Magical Number Seven, Plus or Minus Two" established that human working memory can hold approximately $7 \pm 2$ chunks of information simultaneously.

A "chunk" is a meaningful unit. The digits 1-9-4-5 are four chunks, but if you recognize them as the year 1945, they collapse into one chunk. Chunking is the brain's compression algorithm.
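Chunking is why interfaces format long strings for you: the same digits grouped into familiar units occupy far fewer working-memory slots. A minimal Python sketch (the 3-3-4 grouping and the sample number are illustrative):

```python
def chunk(digits, sizes):
    """Split a digit string into groups, e.g. a phone number into 3-3-4."""
    groups, start = [], 0
    for size in sizes:
        groups.append(digits[start:start + size])
        start += size
    return "-".join(groups)

raw = "4155550123"            # ten chunks if memorized digit by digit
print(chunk(raw, (3, 3, 4)))  # three familiar chunks: area code, prefix, line
```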

[Figure: Miller's law, working memory capacity. Items 1-9 on a scale: 5-7 chunks is comfortable; 8-9 is the overload zone.]

Design implications: keep navigation menus to 5-7 top-level items, group form fields into logical sections, and use progressive disclosure to avoid dumping everything on screen at once.

Cognitive Load Theory

John Sweller's cognitive load theory (1988) breaks mental effort into three types:

  • Intrinsic load: the inherent difficulty of the task itself (filing taxes is harder than checking email)
  • Extraneous load: unnecessary difficulty caused by poor design (confusing layout, inconsistent icons, jargon)
  • Germane load: productive effort spent building understanding (learning a new feature through a well-designed tutorial)

The designer's job is to minimize extraneous load so that the user's limited working memory can focus on intrinsic and germane load. Total cognitive load must stay within working memory capacity:

$\text{Intrinsic} + \text{Extraneous} + \text{Germane} \leq \text{Working Memory Capacity}$

Mental Models

A mental model is the user's internal simulation of how a system works. Users do not read source code. They build a model from experience, and they interact with the system based on that model.

When the conceptual model (what the designer intended) matches the user's mental model, the interface feels intuitive. When they diverge, errors and frustration follow.

[Figure: Mental model alignment. Two overlapping circles, the designer's conceptual model and the user's mental model; the overlap is where usability lives.]

Classic example: the "desktop" metaphor. Files, folders, and a trash can map to physical objects people already understand. The mental model transfers from the physical world to the digital one.

Affordances and Signifiers

James Gibson coined "affordance" in 1979 to describe what an environment offers an animal. Don Norman brought the concept to design in 1988 with "The Design of Everyday Things."

An affordance is a relationship between an object and an actor. A flat plate on a door affords pushing. A handle affords pulling. In digital interfaces, a raised, shadowed rectangle affords clicking (it looks like a physical button).

Norman later clarified the distinction between affordances and signifiers. The affordance is the action possibility. The signifier is the perceivable indicator that communicates that possibility. A door handle is both an affordance (it can be pulled) and a signifier (its shape tells you to pull). In flat UI design, affordances can exist without signifiers, which is why flat buttons sometimes confuse users: the click affordance exists, but the visual signifier is missing.

Complexity Analysis

HCI laws are predictive models with quantifiable parameters. Here is a comparison of their computational characteristics:

| Law / Model | Formula | Growth | Key Variable |
| --- | --- | --- | --- |
| Fitts's Law | $T = a + b \cdot \log_2\!\bigl(\frac{D}{W}+1\bigr)$ | $O(\log(D/W))$ | Distance / width ratio |
| Hick's Law | $T = b \cdot \log_2(n+1)$ | $O(\log n)$ | Number of choices |
| Miller's Law | Capacity $\approx 7 \pm 2$ chunks | Constant bound | Chunk size |
| Cognitive Load | $I + E + G \leq C$ | Additive | Extraneous load (minimize) |

Fitts's law and Hick's law both exhibit logarithmic growth, which means doubling the problem (distance or choices) adds only a constant amount of time. This is why both laws are forgiving: even large menus or distant targets do not produce catastrophic slowdowns. The real danger is compounding: a distant, small target (high Fitts ID) inside a large, unfamiliar menu (high Hick time) stacks both delays onto every single interaction, and those sums grow quickly across a multi-step workflow.

For a multi-step task with $k$ sequential pointing actions and $m$ decision points:

$T_{\text{total}} = \sum_{i=1}^{k}\left(a + b \cdot \log_2\!\left(\frac{D_i}{W_i}+1\right)\right) + \sum_{j=1}^{m} b_j \cdot \log_2(n_j + 1)$

This composite model lets you estimate total interaction time for a workflow and identify which steps are the bottleneck.

Implementation

The following pseudocode implements a Fitts's law calculator and a Hick's law decision-time estimator that can be used to evaluate UI layouts programmatically.

ALGORITHM FittsTime(distance, width, a, b)
  INPUT: distance (pixels), width (pixels), a (intercept ms), b (slope ms/bit)
  OUTPUT: predicted movement time in milliseconds

  IF width <= 0 THEN
    RETURN error "Target width must be positive"
  END IF

  indexOfDifficulty ← log2(distance / width + 1)
  movementTime ← a + b * indexOfDifficulty
  RETURN movementTime
END ALGORITHM

ALGORITHM HickTime(numChoices, b)
  INPUT: numChoices (positive integer), b (ms per bit, typically ~150)
  OUTPUT: predicted decision time in milliseconds

  IF numChoices <= 0 THEN
    RETURN 0
  END IF

  decisionBits ← log2(numChoices + 1)
  decisionTime ← b * decisionBits
  RETURN decisionTime
END ALGORITHM

ALGORITHM EstimateWorkflowTime(steps)
  INPUT: steps: array of { type, params }
    where type is "point" or "decide"
    "point" params: { distance, width, a, b }
    "decide" params: { numChoices, b }
  OUTPUT: total estimated time in milliseconds

  totalTime ← 0

  FOR EACH step IN steps DO
    IF step.type = "point" THEN
      totalTime ← totalTime + FittsTime(step.params.distance,
                                          step.params.width,
                                          step.params.a,
                                          step.params.b)
    ELSE IF step.type = "decide" THEN
      totalTime ← totalTime + HickTime(step.params.numChoices,
                                         step.params.b)
    END IF
  END FOR

  RETURN totalTime
END ALGORITHM

ALGORITHM CognitiveLoadCheck(intrinsic, extraneous, germane, capacity)
  INPUT: intrinsic, extraneous, germane (load units), capacity (typically 7)
  OUTPUT: { withinBounds: boolean, utilization: float }

  totalLoad ← intrinsic + extraneous + germane
  utilization ← totalLoad / capacity

  IF totalLoad > capacity THEN
    RETURN { withinBounds: false, utilization: utilization }
  END IF

  RETURN { withinBounds: true, utilization: utilization }
END ALGORITHM
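The pseudocode above translates almost line-for-line into Python. This runnable sketch keeps the same names and structure; the $a$, $b$ values in the usage example are illustrative, not calibrated constants:

```python
import math

def fitts_time(distance, width, a, b):
    """Predicted movement time in ms (Shannon formulation of Fitts's law)."""
    if width <= 0:
        raise ValueError("Target width must be positive")
    return a + b * math.log2(distance / width + 1)

def hick_time(num_choices, b):
    """Predicted decision time in ms for num_choices equally likely options."""
    if num_choices <= 0:
        return 0.0
    return b * math.log2(num_choices + 1)

def estimate_workflow_time(steps):
    """Sum Fitts (pointing) and Hick (decision) times over a workflow.

    Each step is {"type": "point" | "decide", "params": {...}},
    mirroring the pseudocode above."""
    total = 0.0
    for step in steps:
        p = step["params"]
        if step["type"] == "point":
            total += fitts_time(p["distance"], p["width"], p["a"], p["b"])
        elif step["type"] == "decide":
            total += hick_time(p["numChoices"], p["b"])
    return total

def cognitive_load_check(intrinsic, extraneous, germane, capacity=7):
    """Check whether total load fits within working memory capacity."""
    total = intrinsic + extraneous + germane
    return {"withinBounds": total <= capacity,
            "utilization": total / capacity}

# Example workflow: point at a menu, pick from 8 options, click a button.
workflow = [
    {"type": "point",  "params": {"distance": 300, "width": 40, "a": 50, "b": 150}},
    {"type": "decide", "params": {"numChoices": 8, "b": 150}},
    {"type": "point",  "params": {"distance": 120, "width": 60, "a": 50, "b": 150}},
]
print(f"estimated time: {estimate_workflow_time(workflow):.0f} ms")
```

Because the estimator is just a sum, you can sort the per-step contributions to find the bottleneck step directly.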

Real-World Applications

  • Mobile touch targets: Apple's Human Interface Guidelines recommend 44x44pt minimum tap targets, directly derived from Fitts's law applied to finger-based pointing
  • Context menus: Right-click menus appear at the cursor position (zero distance), minimizing Fitts's law movement time
  • Progressive disclosure in wizards: Multi-step checkout flows break a complex form into 3-4 screens, keeping each screen within Miller's limit
  • Search over browse: Google's single search box eliminates Hick's law entirely for users who know what they want, reducing decision time to zero
  • Undo as affordance: Gmail's "Undo Send" reduces cognitive load by removing the fear of making an irreversible mistake
  • Skeuomorphic design: Early iOS used leather textures and page-curl animations to leverage existing mental models from physical objects
  • Flat design backlash: The shift to flat UI removed signifiers (shadows, gradients) that indicated clickability, causing usability regressions that led to "flat 2.0" with subtle depth cues
  • Infinite scroll vs. pagination: Infinite scroll reduces Hick's law decisions (no "next page" choice) but increases cognitive load because users lose their position. Pagination trades a small decision cost for better spatial orientation
  • Toolbar ribbon (Microsoft Office): The ribbon groups related actions into tabs, applying Miller's law by chunking hundreds of commands into 7-8 top-level categories. Each tab then shows a manageable subset
  • Video game tutorials: Games like Portal teach mechanics through progressive disclosure, introducing one concept at a time to keep germane load high and extraneous load near zero

Key Takeaways

  • Fitts's law tells you that target size and distance are the two levers for pointing speed: make targets big and close to where the cursor already is
  • Hick's law tells you that fewer choices means faster decisions, but categorization (hierarchical menus) can reduce effective $n$ at each level
  • Miller's $7 \pm 2$ is a capacity limit on working memory, not a design rule to always use 7 items. Chunking lets you pack more information into fewer slots
  • Cognitive load theory gives you a framework for evaluating designs: minimize extraneous load, respect intrinsic load, and invest in germane load
  • Mental models explain why "intuitive" interfaces work: they match what users already expect. When models mismatch, users make errors
  • Affordances are possibilities for action, signifiers are the visual cues that communicate those possibilities. Both must be present for usable design