
A Visual History of Computing: From Looms to LLMs

The story of computing told through interactive timelines, diagrams, and charts. From Babbage's Analytical Engine to modern AI, tracing the key inventions, people, and ideas that built the digital world.


Terminology

| Term | Definition |
|------|------------|
| Turing Machine | A theoretical device that manipulates symbols on a tape according to rules. It defines the mathematical limits of what can be computed. |
| Von Neumann Architecture | A computer design in which program instructions and data share the same memory. The basis of nearly all modern computers. |
| Transistor | A semiconductor switch that replaced vacuum tubes: smaller, faster, more reliable, and the building block of all modern chips. |
| Moore's Law | The observation that the transistor count on a chip doubles roughly every two years, driving exponential growth in computing power. |
| ARPANET | The US military research network (1969) that evolved into the modern internet. |
| ISA | Instruction Set Architecture: the interface between software and hardware, defining the operations a CPU can perform. |
| FLOPS | Floating-point operations per second: the standard measure of computing performance. |

What & Why

The history of computing is not a straight line. It is a story of ideas that waited decades for the right hardware, hardware that enabled software nobody imagined, and accidents that changed everything. Understanding this history matters because the constraints and trade-offs that shaped early computing still echo in the systems we build today.

This article traces computing from mechanical calculators to large language models, organized into seven eras. Each era is defined by a fundamental shift in what was possible: from manual calculation to automatic computation, from room-sized machines to pocket devices, from isolated computers to a global network, and from programmed logic to learned intelligence.

How It Works

Era 1: Mechanical Foundations (1800s-1940s)

Before electronics, computing was mechanical. The key insight of this era: computation can be automated by machines that follow instructions.

Era 2: Electronic Computers (1940s-1950s)

The transition from mechanical to electronic computing happened during World War II, driven by military needs: code-breaking, ballistics calculations, and nuclear weapons design.

Era 3: The Integrated Circuit and Personal Computing (1960s-1980s)

The integrated circuit put multiple transistors on a single chip, launching the exponential scaling that continues today.

Moore's Law: The Engine of Progress

Gordon Moore observed in 1965 that the number of transistors on a chip was doubling every year; in 1975 he revised the pace to roughly every two years. This exponential trend held for over 50 years and is the single most important driver of the computing revolution.
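
As a sanity check, the doubling rule can be run forward from the Intel 4004. A minimal sketch in Python, assuming an idealized two-year doubling period:

```python
# Project transistor counts forward from the Intel 4004 (1971: ~2,300
# transistors), assuming one doubling every two years.
def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300, period: float = 2.0) -> float:
    return base_count * 2 ** ((year - base_year) / period)

for year in (1971, 1991, 2011, 2023):
    print(year, f"{projected_transistors(year):,.0f}")
# 2023 -> ~154 billion: the same order of magnitude as the 134 billion
# transistors in Apple's M2 Ultra.
```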

Era 4: The Internet Age (1990s-2000s)

The World Wide Web transformed computers from isolated tools into nodes on a global network.

Era 5: The AI Revolution (2010s-Present)

Deep learning, massive datasets, and GPU computing converged to create artificial intelligence that can see, speak, write, and reason.

The Architecture of a Modern Computer

Every computer from a smartwatch to a supercomputer follows the same basic architecture that von Neumann described in 1945.

Computing Power Over Time

The growth of computing power spans many orders of magnitude; the peak FLOPS of notable systems trace the curve.
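
A few well-known milestones make the scale concrete; the rates below are rough, order-of-magnitude peak figures, not careful benchmarks:

```python
import math

# Approximate peak rates of notable systems (order-of-magnitude figures).
milestones = [
    ("ENIAC (1945)",    5e2),     # hundreds of operations per second
    ("Cray-1 (1976)",   1.6e8),   # ~160 MFLOPS
    ("ASCI Red (1997)", 1.3e12),  # first machine to sustain 1 TFLOPS
    ("Frontier (2022)", 1.1e18),  # first exascale system
]

for name, flops in milestones:
    print(f"{name:>16}  10^{math.log10(flops):4.1f} FLOPS")
```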

Programming Paradigms

The way we tell computers what to do has evolved through several paradigms, each building on the last.
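
To make the progression concrete, here is one computation, the sum of squares of the even numbers below ten, written in three styles (a small Python sketch; the paradigm labels are the usual textbook ones):

```python
numbers = range(10)

# Imperative: describe each step and mutate an accumulator.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional: compose map and filter over the data, no mutation.
total_fn = sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

# Declarative comprehension: state what you want, not how to loop.
total_comp = sum(n * n for n in numbers if n % 2 == 0)

assert total == total_fn == total_comp == 120
```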

Complexity Analysis

Computing history is itself an exponential story. The key metrics:

| Metric | 1945 (ENIAC) | 1971 (4004) | 2000 (Pentium 4) | 2023 (M2 Ultra) | Growth Factor |
|--------|--------------|-------------|------------------|-----------------|---------------|
| Transistors | 0 (vacuum tubes) | 2,300 | 42,000,000 | 134,000,000,000 | ~58 million x (vs. 4004) |
| Clock speed | 100 kHz | 740 kHz | 1.5 GHz | 3.5 GHz | 35,000 x |
| Memory | 20 words | 640 bytes | 256 MB | 192 GB | ~10 billion x |
| Storage | None | None | 20 GB | 8 TB | 400 x (vs. 2000) |
| Power | 150 kW | 1 W | 75 W | 60 W | 0.0004 x (2,500 x less) |
| Cost | $6M (2024 $) | $200 | $800 | $5,200 | ~0.001 x |

The most remarkable trend: performance increased by millions of times while a machine's power draw fell from ENIAC's 150 kW to tens of watts, so the cost and energy per computation dropped by factors of a billion or more.
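
A back-of-the-envelope check on that last claim, using round numbers from the table and assumed operation rates (illustrative, not measured):

```python
# ENIAC: ~$6M (2024 dollars) for roughly 5,000 additions per second.
# M2 Ultra: ~$5,200 for on the order of 10^12 simple operations per second.
eniac_dollars_per_ops = 6_000_000 / 5_000   # ~1,200 dollars per (op/s)
m2_dollars_per_ops = 5_200 / 1e12           # ~5e-9 dollars per (op/s)

print(f"{eniac_dollars_per_ops / m2_dollars_per_ops:.0e}x cheaper")  # ~2e+11
```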

Implementation

The Fetch-Execute Cycle

Every CPU, from the Intel 4004 to the latest Apple M-series, runs the same fundamental loop, sketched here as a small Python simulator:

```python
def fetch_execute_cycle(memory, registers):
    """Simulate the fetch-decode-execute loop of a simple CPU.

    memory holds instructions and data in one shared address space:
    the defining trait of the von Neumann architecture. registers maps
    register names to values. Instructions are (opcode, *operands) tuples.
    """
    pc = 0  # program counter starts at address 0

    while True:
        # 1. FETCH: read the instruction at the program counter.
        instruction = memory[pc]
        pc += 1

        # 2. DECODE: split into an opcode and its operands.
        opcode, *operands = instruction

        # 3. EXECUTE: perform the operation.
        if opcode == "ADD":            # dest <- src1 + src2
            dest, src1, src2 = operands
            registers[dest] = registers[src1] + registers[src2]
        elif opcode == "LOAD":         # dest <- memory[addr]
            dest, addr = operands
            registers[dest] = memory[addr]
        elif opcode == "STORE":        # memory[addr] <- src
            addr, src = operands
            memory[addr] = registers[src]
        elif opcode == "JUMP":         # transfer control to addr
            pc = operands[0]
        elif opcode == "HALT":         # stop the machine
            break
```
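
A quick check that the simulator behaves: a five-instruction program that adds two numbers held in the same memory array as the code (the register names and addresses are arbitrary):

```python
memory = [
    ("LOAD", "r1", 5),           # r1 <- memory[5]
    ("LOAD", "r2", 6),           # r2 <- memory[6]
    ("ADD", "r0", "r1", "r2"),   # r0 <- r1 + r2
    ("STORE", 7, "r0"),          # memory[7] <- r0
    ("HALT",),
    2, 3, None,                  # addresses 5-7: the data
]

registers = {"r0": 0, "r1": 0, "r2": 0}
fetch_execute_cycle(memory, registers)
print(memory[7])  # prints 5: instructions and data share one memory
```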

This cycle runs billions of times per second on a modern CPU. Every program you have ever used, from a calculator to ChatGPT, is ultimately executed by this loop.

Real-World Applications

Computer architecture courses use this history to explain why modern CPUs have caches (memory is slow relative to computation), pipelines (overlap fetch/decode/execute), and branch predictors (guessing which instruction comes next).
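
The win from pipelining is easy to model. A toy calculation, assuming a 5-stage pipeline with one-cycle stages and no stalls (real pipelines lose cycles to branch mispredictions and cache misses, which is exactly why predictors and caches exist):

```python
def cycles(n_instructions: int, n_stages: int, pipelined: bool) -> int:
    if not pipelined:
        # Each instruction occupies the whole datapath before the next starts.
        return n_instructions * n_stages
    # Stages overlap: once the pipeline fills, one instruction retires per cycle.
    return n_stages + (n_instructions - 1)

n, stages = 1_000_000, 5
speedup = cycles(n, stages, False) / cycles(n, stages, True)
print(f"ideal speedup: {speedup:.2f}x")  # approaches the stage count, 5
```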

Software engineering is shaped by this history: we use high-level languages because assembly is unmaintainable, we use cloud computing because AWS made it cheaper than owning servers, and we use version control because collaborative development requires it.

AI/ML research builds directly on the hardware trajectory: deep learning became practical only when GPUs provided enough parallel compute, and scaling laws show that model capability improves predictably with more compute, data, and parameters.
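
Those scaling laws are empirical power laws fit to training runs. A sketch with illustrative numbers (the exponent is in the ballpark of published fits such as Kaplan et al. 2020; the prefactor is invented for demonstration):

```python
# Loss falls as a power law in parameter count N: loss(N) = a * N ** -alpha.
def loss(n_params: float, a: float = 1e3, alpha: float = 0.076) -> float:
    return a * n_params ** -alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N = {n:.0e}: loss ~ {loss(n):.1f}")
# Every 10x in parameters multiplies loss by the same ratio, 10**-alpha (~0.84),
# which is why capability gains look predictable on log-log plots.
```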

Startup strategy is informed by Moore's Law: if you are building something that is barely possible today, it will be easy in 5 years. Many successful companies (YouTube, Dropbox, Uber) launched at the exact moment when hardware costs crossed a critical threshold.

Key Takeaways

  • Computing history is driven by exponential hardware improvement (Moore's Law) combined with key architectural ideas (stored programs, integrated circuits, packet switching, neural networks) that unlocked new capabilities.

  • The von Neumann architecture from 1945 still describes every computer you use today. The fetch-execute cycle is the heartbeat of all computation.

  • Each era was triggered by a hardware breakthrough (vacuum tubes, transistors, ICs, GPUs) that enabled a software revolution (operating systems, personal computing, the web, AI).

  • The people who shaped computing came from diverse backgrounds: a mathematician (Turing), a physicist (Shockley), a weaver's son (Jacquard), a countess (Lovelace), college dropouts (Jobs, Gates, Zuckerberg), and research labs (Bell Labs, Xerox PARC, Google Brain).

  • The next era is already beginning. AI systems that can reason, code, and create are changing what it means to program a computer, completing a circle that started with Lovelace writing the first algorithm in 1843.