This document summarizes several historically important non–von Neumann (or at least von-Neumann-adjacent) architectures, compares them, and places them on a rough timeline. It also includes example Graphviz and PlantUML diagram source you can render locally.
| Architecture | Has Program Counter? | Memory Addressing | Parallelism | Execution Model | Hard to Map General-Purpose Code? |
|---|---|---|---|---|---|
| Von Neumann | Yes | Byte/word array | Limited | Sequential instructions | No |
| Dataflow | No | Tokens in graph | Massive | Graph firing | Yes |
| Lisp Machine | Yes-ish | Object pointers & tag bits | Moderate | Symbolic eval | Moderately |
| Stack Machine | Yes-but-simple | Stack + return stack | Low | Stack ops | No-ish |
| Associative Mem. | Sometimes | Content match | High (bitwise parallel) | Parallel search | Yes |
| Cellular Automata | No traditional PC | Local registers | Massive | Local rules | Very |
| Analog/Neural | No | Continuous weights/signals | Massive | Differential equations | Yes |
| Prolog / Logic | No classical PC | Terms / heap | Moderate | Logical inference | Yes |
Notes:
- “Has Program Counter?” is about whether there is a single central instruction pointer driving execution in the classical von Neumann sense.
- “Memory Addressing” captures what the machine conceptually deals in: raw bytes, structured objects, content queries, or something more exotic.
- “Hard to Map General-Purpose Code?” is a qualitative feel for how tricky it is to run mainstream imperative languages efficiently on the model.
The timeline below is approximate and focuses on when ideas were actively developed, prototyped, or commercialized, not when they were first vaguely imagined.
- 1945 – Von Neumann architecture formalized
  - Stored-program computer idea (EDVAC report).
  - Memory holds both code and data, accessed via addresses.
- 1950s – Early analog & neural ideas
  - Cybernetics, analog computing machines, early neural nets (McCulloch–Pitts, etc.).
  - Never standardized the way digital von Neumann machines did, but foundational for later neuromorphic work.
- Early 1960s – Stack machine / concatenative models
  - Burroughs B5000 (and successors) use an architecture deeply influenced by Algol and stack-based evaluation.
  - Hardware call stack, reentrant procedures, higher-level support baked in.
- Mid–Late 1960s – Associative / content-addressable memory (CAM)
  - Research machines with content-based lookup: parallel comparison on each memory cell.
  - Seen as promising for AI, database, and pattern-matching workloads.
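
The content-match idea is easy to sketch in a few lines of Python. This is an illustrative toy, not any real CAM design: the class, word width, and mask convention are invented for the example, and the "parallel" comparison is of course simulated sequentially here.

```python
# Illustrative sketch (not a real CAM): a content-addressable memory
# conceptually compares a search key against every cell at once and
# returns the addresses whose contents match. The interface is
# address-free: you query by value, not by location.

class ContentAddressableMemory:
    def __init__(self, width):
        self.width = width          # bits per word
        self.cells = []             # each cell: (stored_word, care_mask)

    def store(self, word, care_mask=None):
        """Store a word; bits where care_mask is 0 are 'don't care'."""
        if care_mask is None:
            care_mask = (1 << self.width) - 1
        self.cells.append((word, care_mask))

    def match(self, key):
        """Return indices of cells whose cared-about bits equal the key's."""
        return [i for i, (word, mask) in enumerate(self.cells)
                if (word ^ key) & mask == 0]

cam = ContentAddressableMemory(width=8)
cam.store(0b1010_1010)
cam.store(0b1010_0000, care_mask=0b1111_0000)  # low nibble is "don't care"
cam.store(0b0000_1111)

print(cam.match(0b1010_1010))  # both the exact and the masked entry match -> [0, 1]
```

The masked entry shows why CAMs found uses like TLBs and routing tables: a single query can hit many stored patterns at once.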
- Late 1960s–Early 1970s – Cellular automata concepts as computing substrates
  - Conway’s Game of Life (popularized in 1970) makes cellular automata a mainstream topic and a candidate medium for universal computation; hardware concepts start appearing in the literature.
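
The local-rule model is simple enough to sketch directly. A minimal (unoptimized) Game of Life step in Python: every cell updates simultaneously from its eight neighbours' previous states, with no central instruction pointer.

```python
from collections import Counter

# One Game of Life generation: a cell is live in the next generation if it
# has exactly 3 live neighbours, or 2 live neighbours and was already live.
def life_step(live):
    """live: set of (x, y) live cells; returns the next generation."""
    # Count, for each cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```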
- Early 1970s – Dataflow models emerge
  - Jack Dennis and others formalize dataflow as an alternative to sequential control.
  - Programs are graphs; nodes fire when operands are available.
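
A toy interpreter makes the firing rule concrete. This is an illustrative sketch, not any particular dataflow machine; the node/edge encoding and the `"out"` sink are invented for the example.

```python
# Illustrative dataflow sketch: a program is a graph whose nodes fire as
# soon as all their input tokens are present. There is no program counter;
# any fireable node may run, here in an arbitrary worklist order.

def run_dataflow(nodes, edges, inputs):
    """
    nodes:  {name: (arity, function)}
    edges:  {name: list of downstream node names}
    inputs: {name: list of initial operand tokens}
    Returns the tokens that reached the special sink "out".
    """
    pending = {name: list(tokens) for name, tokens in inputs.items()}
    out = []
    fireable = [n for n, (arity, _) in nodes.items()
                if len(pending.get(n, [])) >= arity]
    while fireable:
        name = fireable.pop()
        arity, fn = nodes[name]
        args = [pending[name].pop(0) for _ in range(arity)]
        result = fn(*args)
        for succ in edges.get(name, []):
            if succ == "out":
                out.append(result)
            else:
                pending.setdefault(succ, []).append(result)
                if len(pending[succ]) >= nodes[succ][0]:
                    fireable.append(succ)   # operands complete: node may fire
        if len(pending.get(name, [])) >= arity:
            fireable.append(name)           # enough tokens left to fire again
    return out

# (a + b) * (a - b) as a graph: "add" and "sub" fire independently,
# then "mul" fires once both results have arrived.
nodes = {"add": (2, lambda x, y: x + y),
         "sub": (2, lambda x, y: x - y),
         "mul": (2, lambda x, y: x * y)}
edges = {"add": ["mul"], "sub": ["mul"], "mul": ["out"]}
print(run_dataflow(nodes, edges, {"add": [5, 3], "sub": [5, 3]}))  # [16]
```

Note that nothing in the loop imposes an order on `add` and `sub`: the parallelism the dataflow machines chased is exactly the freedom to fire independent nodes concurrently.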
- Mid–1970s – Lisp Machine designs begin
  - MIT AI Lab designs specialized hardware for Lisp.
  - Memory words carry type tags; GC and dynamic typing are hardware features.
- Mid–Late 1970s – Forth / stack machines
  - Commercial Forth machines and microprocessors optimized for a concatenative stack-based language.
  - Minimal instruction sets, direct mapping from source to hardware.
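
The direct source-to-hardware mapping is easiest to see in a toy interpreter. A minimal Forth-flavoured sketch in Python (the word set and machine are hypothetical): each word corresponds almost one-to-one to a stack operation, which is why stack hardware needs so little instruction decoding.

```python
# Toy concatenative stack machine: literals push themselves, words
# operate on the top of the stack. No operand fields, no registers.

def run(words, stack=None):
    stack = stack if stack is not None else []
    for w in words:
        if isinstance(w, int):
            stack.append(w)                     # literal: push
        elif w == "+":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif w == "*":
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif w == "dup":
            stack.append(stack[-1])             # duplicate top of stack
        elif w == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:
            raise ValueError(f"unknown word: {w}")
    return stack

# (3 + 4) squared, written postfix: 3 4 + dup *
print(run([3, 4, "+", "dup", "*"]))  # [49]
```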
- Late 1970s – SIMD / fine-grained massively parallel machines
  - Early precursors to machines like the Connection Machine (CM-1).
  - Focus on many simple processing elements operating in lockstep.
- Early 1980s – Commercial Lisp Machines
  - Symbolics, Lisp Machines Inc., and others ship workstations where Lisp is the native language of the machine.
  - Hardware support for: tagged memory, fast consing, garbage collection, type dispatch.
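
Tagged memory can be sketched in a few lines. The tag widths and assignments below are invented for illustration and do not correspond to any real Lisp machine's layout; the point is that type information travels with every word, so the hardware (or a GC) can classify a word without any table lookup.

```python
# Illustrative sketch of tagged memory: every machine word carries a few
# tag bits alongside its data bits. Tag values here are hypothetical.

TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_FIXNUM, TAG_POINTER, TAG_CHAR = 0, 1, 2   # made-up tag assignments

def make_word(tag, data):
    return (data << TAG_BITS) | tag

def tag_of(word):
    return word & TAG_MASK

def data_of(word):
    return word >> TAG_BITS

w = make_word(TAG_FIXNUM, 42)
assert tag_of(w) == TAG_FIXNUM and data_of(w) == 42

# A GC scanning memory can identify pointers from tag bits alone:
heap = [make_word(TAG_FIXNUM, 7), make_word(TAG_POINTER, 0x100),
        make_word(TAG_CHAR, ord("A"))]
pointers = [data_of(w) for w in heap if tag_of(w) == TAG_POINTER]
print(pointers)  # [256]
```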
- Early–Mid 1980s – Dataflow prototypes
  - Manchester Dataflow Machine, the MIT Tagged-Token Dataflow Architecture, and others built as functional prototypes.
  - Demonstrate massive parallelism but struggle with practical general-purpose workloads and memory constraints.
- 1980s – Connection Machine & massively parallel SIMD
  - Thinking Machines CM-1, CM-2: cellular automata–like large arrays of simple processors.
  - Very high theoretical throughput on certain workloads, challenging to program and commercialize.
- 1980s – Prolog / Logic machines & Fifth Generation
  - The WAM (Warren Abstract Machine) influences Prolog implementations.
  - Japan’s Fifth Generation Computer Systems (FGCS) project explores logic-programming-based hardware to leapfrog the West.
- Mid–Late 1980s – Analog / neuromorphic VLSI
  - Carver Mead and others design analog VLSI circuits inspired by biology, using transistor physics to model neurons, synapses, and continuous-time dynamics.
- Early 1990s – Decline of Lisp Machines and special-purpose AI hardware
  - Commodity microprocessors outpace specialized machines on cost/performance.
  - OS and compiler ecosystems coalesce around C, Unix, and PC-like designs.
- 1990s – VLIW / EPIC concepts
  - Very Long Instruction Word: compiler-scheduled parallelism (e.g., Multiflow, later Intel Itanium).
  - Interesting hybrid between von Neumann and dataflow ideas, but still fundamentally instruction-stream–driven.
- 1990s – GPUs as fixed-function graphics pipelines
  - Not yet general-purpose, but lay groundwork for large-scale SIMD / SIMT compute.
- Early–Mid 2000s – GPGPU emerges
  - General-Purpose computing on GPUs: using graphics hardware for numeric workloads.
  - SIMT (single instruction, multiple threads) unlocks massively parallel numeric kernels.
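
A rough sketch of the SIMT idea, simulating "lanes" with a Python list. Real GPUs handle branch divergence with hardware predication and warp scheduling; the explicit mask below is a simplified stand-in for that mechanism.

```python
# Illustrative SIMT sketch: one instruction stream, many threads, each
# applying the same operation to its own data element. A mask disables
# some lanes, mimicking how divergent branches are predicated.

def simt_step(op, data, mask=None):
    """Apply one 'instruction' across all lanes; masked-off lanes keep their value."""
    if mask is None:
        mask = [True] * len(data)
    return [op(x) if active else x for x, active in zip(data, mask)]

lanes = [0, 1, 2, 3, 4, 5, 6, 7]
# Instruction 1: every lane squares its own element.
lanes = simt_step(lambda x: x * x, lanes)
# Instruction 2: "if x > 10: x -= 10" — only lanes where the predicate holds execute.
mask = [x > 10 for x in lanes]
lanes = simt_step(lambda x: x - 10, lanes, mask)
print(lanes)  # [0, 1, 4, 9, 6, 15, 26, 39]
```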
- 2000s – FPGAs mature
  - Field-Programmable Gate Arrays let you implement dataflow-like custom pipelines in reconfigurable hardware.
  - Often used as accelerators rather than primary CPUs.
- 2010s – Dataflow IRs in compilers (MLIR, XLA, TVM, etc.)
  - Compilers for ML and high-performance computing represent programs as dataflow graphs, then schedule them onto heterogeneous hardware.
- 2010s – Deep learning hardware begins
  - Google TPU, NVIDIA tensor cores, and similar accelerators.
  - These architectures are closer to dense linear-algebra dataflow engines than traditional general-purpose CPUs.
- 2010s–2020s – Neuromorphic hardware (again)
  - IBM TrueNorth (2014), Intel Loihi (2017), and others revisit event-driven, spiking neural architectures.
- 2020s – Analog / in-memory compute research
  - Computation performed inside memory arrays (e.g., resistive RAM) to reduce data movement.
  - Chips that blur the line between storage and compute revisit ideas that undermine the von Neumann separation of CPU and memory.
- 2020s – GPUs as the “real” mainframes of compute
  - Much large-scale compute (AI training, scientific workloads) happens on architectures that are arguably closer to cellular/SIMD or dataflow than classic CPUs.
Below are two small diagrams you can render with Graphviz or PlantUML.
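
Both are illustrative sketches of the material above rather than canonical diagrams: a Graphviz graph of how the architecture families relate (edges are loose "influenced / evolved into" relationships, not strict lineage), and a PlantUML sketch contrasting von Neumann and dataflow control.

```dot
// Rough family tree of execution models; edges mean "influenced / evolved into".
digraph architectures {
  rankdir=LR;
  node [shape=box, fontsize=10];

  vn    [label="Von Neumann\n(1945)"];
  stack [label="Stack machines\n(B5000, Forth)"];
  lisp  [label="Lisp machines"];
  cam   [label="Associative\nmemory (CAM)"];
  df    [label="Dataflow"];
  ca    [label="Cellular\nautomata"];
  simd  [label="SIMD / Connection\nMachine"];
  gpu   [label="GPU / SIMT"];
  ml    [label="ML accelerators\n(TPU, tensor cores)"];
  neuro [label="Neuromorphic /\nanalog"];

  vn -> stack;
  vn -> lisp;
  ca -> simd;
  simd -> gpu;
  gpu -> ml;
  df -> ml;
  cam -> neuro [style=dashed];
  neuro -> ml  [style=dashed];
}
```

```plantuml
@startuml
' Von Neumann vs. dataflow control, side by side (illustrative sketch).
rectangle "Von Neumann" {
  rectangle "CPU" as cpu
  rectangle "Memory\n(code + data)" as mem
  cpu -right-> mem : fetch / load / store
}
rectangle "Dataflow" {
  rectangle "Node: +" as add
  rectangle "Node: -" as sub
  rectangle "Node: *" as mul
  add -down-> mul : token
  sub -down-> mul : token
}
@enduml
```

Render the first with `dot -Tpng`, the second with the PlantUML jar or an editor plugin.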



