In computer science, computer engineering, and programming language implementations, a stack machine is a computer processor or a process virtual machine in which the primary interaction is moving short-lived temporary values to and from a push-down stack. In the case of a hardware processor, a hardware stack is used. The use of a stack significantly reduces the required number of processor registers. Stack machines extend push-down automata with additional load/store operations or multiple stacks and hence are Turing-complete.


Design

Most or all stack machine instructions assume that operands will be taken from the stack, and results placed on the stack. The stack easily holds more than two inputs or more than one result, so a rich set of operations can be computed. In stack machine code (sometimes called p-code), instructions frequently have only an opcode commanding an operation, with no additional fields identifying a constant, register or memory cell; this is known as a zero-address format. A computer whose instructions mostly omit explicit addresses is said to use zero-address instructions, which greatly simplifies instruction decoding. Branches, load immediates, and load/store instructions require an argument field, but stack machines often arrange that the frequent cases of these still fit together with the opcode into a compact group of bits. The selection of operands from prior results is done implicitly by ordering the instructions. Some stack machine instruction sets are intended for interpretive execution of a virtual machine, rather than driving hardware directly.

Integer constant operands are pushed by push or load-immediate instructions. Memory is often accessed by separate load or store instructions containing a memory address or calculating the address from values in the stack. All practical stack machines have variants of the load–store opcodes for accessing local variables and formal parameters without explicit address calculations. This can be by offsets from the current top-of-stack address, or by offsets from a stable frame-base register.

The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on the expression stack, not on data registers or main memory cells. This can be very convenient for executing high-level languages because most arithmetic expressions can be easily translated into postfix notation. For example, consider the expression A*(B-C)+(D+E), written in reverse Polish notation as A B C - * D E + +. Compiling and running this on a simple imaginary stack machine would take the form:

                  # stack contents (leftmost = top = most recent):
 push A           # A
 push B           # B A
 push C           # C B A
 subtract         # B-C A
 multiply         # A*(B-C)
 push D           # D A*(B-C)
 push E           # E D A*(B-C)
 add              # D+E A*(B-C)
 add              # A*(B-C)+(D+E)

The arithmetic operations 'subtract', 'multiply', and 'add' act on the two topmost operands of the stack. The computer takes both operands from the topmost (most recent) values of the stack and replaces them with the calculated difference, sum, or product. In other words, the instruction's operands are "popped" off the stack, and its result(s) are then "pushed" back onto the stack, ready for the next instruction.

Stack machines may have their expression stack and their call-return stack separated or as one integrated structure. If they are separated, the instructions of the stack machine can be pipelined with fewer interactions and less design complexity, so it will usually run faster.

Optimisation of compiled stack code is quite possible. Back-end optimisation of compiler output has been demonstrated to significantly improve code, and potentially performance, whilst global optimisation within the compiler itself achieves further gains.


Stack storage

Some stack machines have a stack of limited size, implemented as a register file. The ALU will access this with an index. A large register file uses a lot of transistors and hence this method is only suitable for small systems. A few machines have both an expression stack in memory and a separate register stack. In this case, software, or an interrupt, may move data between them. Some machines have a stack of unlimited size, implemented as an array in RAM, which is cached by some number of "top of stack" address registers to reduce memory access. Except for explicit "load from memory" instructions, the order of operand usage is identical with the order of the operands in the data stack, so excellent prefetching can be accomplished easily.

Consider X+1. It compiles to Load X; Load 1; Add. With a stack stored completely in RAM, this does implicit writes and reads of the in-memory stack:

* Load X, push to memory
* Load 1, push to memory
* Pop 2 values from memory, add, and push result to memory

for a total of 5 data cache references.

The next step up from this is a stack machine or interpreter with a single top-of-stack register. The above code then does:

* Load X into empty TOS register (if hardware machine) or Push TOS register to memory, Load X into TOS register (if interpreter)
* Push TOS register to memory, Load 1 into TOS register
* Pop left operand from memory, add to TOS register and leave it there

for a total of 3 data cache references, worst-case. Generally, interpreters don't track emptiness, because they don't have to: anything below the stack pointer is a non-empty value, and the TOS cache register is always kept hot. Typical Java interpreters do not buffer the top-of-stack this way, however, because the program and stack have a mix of short and wide data values. If the hardwired stack machine has 2 or more top-stack registers, or a register file, then all memory access is avoided in this example and there is only 1 data cache cycle.
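The reference counts above can be checked with a small simulation (illustrative only; the class and function names are invented). It counts data references for X+1 with an all-RAM stack and with a one-register top-of-stack cache; the cached version scores 2 rather than the worst-case 3 because it models the hardware case where the TOS register starts out empty:

```python
class MemStack:
    """Stack kept entirely in RAM: every push and pop is a data reference."""
    def __init__(self):
        self.ram, self.refs = [], 0
    def push(self, v):
        self.refs += 1                 # memory write
        self.ram.append(v)
    def pop(self):
        self.refs += 1                 # memory read
        return self.ram.pop()

class TosStack(MemStack):
    """Same stack, but the top value is cached in a register (free to access)."""
    EMPTY = object()
    def __init__(self):
        super().__init__()
        self.tos = self.EMPTY
    def push(self, v):
        if self.tos is not self.EMPTY:
            super().push(self.tos)     # spill the old top to memory
        self.tos = v
    def pop(self):
        v = self.tos
        self.tos = super().pop() if self.ram else self.EMPTY
        return v

def add_x_1(stack, x):
    stack.push(x)                      # Load X
    stack.push(1)                      # Load 1
    b, a = stack.pop(), stack.pop()
    stack.push(a + b)                  # Add, result left on the stack
    return stack.refs                  # data references consumed
```

`add_x_1(MemStack(), 7)` returns 5 and `add_x_1(TosStack(), 7)` returns 2, matching the counts in the text.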


History and implementations

A description of such a method, requiring only two values at a time to be held in registers, with a limited set of pre-defined operands that could be extended by definition of further operands, functions and subroutines, was first provided at a conference by Robert S. Barton in 1961.


Commercial stack machines

Examples of stack instruction sets directly executed in hardware include:

* the Z4 (1945) computer by Konrad Zuse, which had a 2-level stack
* the Burroughs large systems architecture (since 1961)
* the English Electric KDF9 machine. First delivered in 1964, the KDF9 had a 19-level deep pushdown stack of arithmetic registers, and a 17-level deep stack for subroutine return addresses
* the Collins Radio Collins Adaptive Processing System minicomputer (CAPS, since 1969) and Rockwell Collins Advanced Architecture Microprocessor (AAMP, since 1981)
* the Xerox Dandelion, introduced 27 April 1981, and the Xerox Daybreak, which utilized a stack machine architecture to save memory
* the UCSD Pascal p-machine (as the Pascal MicroEngine and many others), which supported a complete student programming environment on early 8-bit microprocessors with poor instruction sets and little RAM, by compiling to a virtual stack machine
* MU5 and ICL 2900 Series: hybrid stack and accumulator machines. The accumulator register buffered the memory stack's top data value. Variants of load and store opcodes controlled when that register was spilled to the memory stack or reloaded from there
* HP 3000 (Classic, not PA-RISC)
* HP 9000 systems based on the HP FOCUS microprocessor
* Tandem Computers T/16: like the HP 3000, except that compilers, not microcode, controlled when the register stack spilled to the memory stack or was refilled from it
* the Atmel MARC4 microcontroller
* several "Forth chips" such as the RTX2000, the RTX2010, the F21 and the PSC1000
* the Setun ternary computer, which performed balanced ternary arithmetic using a stack
* Patriot Scientific's Ignite stack machine, designed by Charles H. Moore, which holds a leading functional density benchmark
* the Saab Ericsson Space Thor radiation-hardened microprocessor
* Inmos transputers
* the ZPU, a physically small CPU designed to supervise FPGA systems
* some technical handheld calculators, which use reverse Polish notation in their keyboard interface instead of having parenthesis keys. This is a form of stack machine. The Plus key relies on its two operands already being at the correct topmost positions of the user-visible stack
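The calculator behaviour in the last item can be sketched in a few lines (a hypothetical illustration, not any vendor's firmware): each operator key acts on the two topmost entries of the user-visible stack.

```python
import operator

def rpn(expression):
    """Evaluate a space-separated RPN string the way an RPN calculator
    does: numbers are pushed, and each operator key consumes the two
    topmost stack entries and pushes the result."""
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    stack = []
    for token in expression.split():
        if token in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()
```

For example, `rpn("2 7 3 - *")` evaluates 2*(7-3) and returns 8.0.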


Virtual stack machines

Examples of virtual stack machines interpreted in software:

* the Whetstone ALGOL 60 interpretive code, on which some features of the Burroughs B6500 were based
* the UCSD Pascal p-machine, which closely resembled Burroughs
* the Niklaus Wirth p-code machine
* Smalltalk
* the Java virtual machine instruction set (note that only the abstract instruction set is stack based; HotSpot, the Sun Java virtual machine, for instance, does not implement the actual interpreter in software, but as handwritten assembly stubs)
* the WebAssembly bytecode
* the Virtual Execution System (VES) for the Common Intermediate Language (CIL) instruction set of the .NET Framework (ECMA 335)
* the Forth programming language, especially the integral virtual machine
* Adobe's PostScript
* Sun Microsystems' SwapDrop programming language for Sun Ray smartcard identification
* Adobe's ActionScript Virtual Machine 2 (AVM2)
* Ethereum's EVM
* the CPython bytecode interpreter
* the Ruby YARV bytecode interpreter
* the Rubinius virtual machine
* the bs programming language in Unix, which uses a virtual stack machine to process commands, after first transposing the provided input language form into reverse Polish notation
* dc, one of the oldest Unix programs
* the Lua C API
* the TON Virtual Machine (TVM) for The Open Network smart contracts


Hybrid machines

Pure stack machines are quite inefficient for procedures which access multiple fields from the same object. The stack machine code must reload the object pointer for each pointer+offset calculation. A common fix for this is to add some register-machine features to the stack machine: a visible register file dedicated to holding addresses, and register-style instructions for doing loads and simple address calculations. It is uncommon to have the registers be fully general purpose, because then there is no strong reason to have an expression stack and postfix instructions.

Another common hybrid is to start with a register machine architecture, and add another memory address mode which emulates the push or pop operations of stack machines: 'memaddress = reg; reg += instr.displ'. This was first used in DEC's PDP-11 minicomputer, and the feature was carried forward in VAX computers and in the Motorola 6809 and M68000 microprocessors. This allowed the use of simpler stack methods in early compilers. It also efficiently supported virtual machines using stack interpreters or threaded code. However, this feature did not help the register machine's own code to become as compact as pure stack machine code. Also, the execution speed was less than when compiling well to the register architecture. It is faster to change the top-of-stack pointer only occasionally (once per call or return) rather than constantly stepping it up and down throughout each program statement, and it is even faster to avoid memory references entirely.

More recently, so-called second-generation stack machines have adopted a dedicated collection of registers to serve as address registers, off-loading the task of memory addressing from the data stack. For example, MuP21 relies on a register called "A", while the more recent GreenArrays processors rely on two registers: A and B.

The Intel x86 family of microprocessors have a register-style (accumulator) instruction set for most operations, but use stack instructions for its x87 floating point arithmetic, dating back to the iAPX87 (8087) coprocessor for the 8086 and 8088. That is, there are no programmer-accessible floating point registers, but only an 80-bit wide, 8-level deep stack. The x87 relies heavily on the x86 CPU for assistance in performing its operations.
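The push/pop emulation via the address mode quoted above ('memaddress = reg; reg += instr.displ') can be sketched as follows (a simplified model in the spirit of the PDP-11's autodecrement and autoincrement modes; all names are invented):

```python
class RegMachine:
    """Flat memory plus a stack-pointer register. Push and pop are just
    ordinary stores and loads that apply a displacement side effect to
    the SP register, as in the PDP-11 address modes."""
    def __init__(self, size=64):
        self.mem = [0] * size
        self.sp = size                 # stack grows downward from high memory

    def store_autodec(self, value):    # MOV value, -(SP)  => push
        self.sp -= 1
        self.mem[self.sp] = value

    def load_autoinc(self):            # MOV (SP)+, dest   => pop
        value = self.mem[self.sp]
        self.sp += 1
        return value
```

For example, after `store_autodec(40)` then `store_autodec(2)`, two `load_autoinc()` calls return 2 and then 40, leaving SP where it started.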


Computers using call stacks and stack frames

Most current computers (of any instruction set style) and most compilers use a large call-return stack in memory to organize the short-lived local variables and return links for all currently active procedures or functions. Each nested call creates a new stack frame in memory, which persists until that call completes. This call-return stack may be entirely managed by the hardware, via specialized address registers and special address modes in the instructions. Or it may be merely a set of conventions followed by the compilers, using generic registers and register+offset address modes. Or it may be something in between.

Since this technique is now nearly universal, even on register machines, it is not helpful to refer to all these machines as stack machines. That term is commonly reserved for machines which also use an expression stack and stack-only arithmetic instructions to evaluate the pieces of a single statement.

Computers commonly provide direct, efficient access to the program's global variables and to the local variables of only the current innermost procedure or function, the topmost stack frame. 'Up level' addressing of the contents of callers' stack frames is usually not needed and not supported as directly by the hardware. If needed, compilers support this by passing in frame pointers as additional, hidden parameters. Some Burroughs stack machines do support up-level refs directly in the hardware, with specialized address modes and a special 'display' register file holding the frame addresses of all outer scopes. Currently, only MCST Elbrus has done this in hardware. When Niklaus Wirth developed the first Pascal compiler for the CDC 6000, he found that it was faster overall to pass in the frame pointers as a chain, rather than constantly updating complete arrays of frame pointers. This software method also adds no overhead for common languages like C which lack up-level refs.

The same Burroughs machines also supported nesting of tasks or threads. The task and its creator share the stack frames that existed at the time of task creation, but not the creator's subsequent frames nor the task's own frames. This was supported by a cactus stack, whose layout diagram resembled the trunk and arms of a saguaro cactus. Each task had its own memory segment holding its stack and the frames that it owns. The base of this stack is linked to the middle of its creator's stack. In machines with a conventional flat address space, the creator stack and task stacks would be separate heap objects in one heap.

In some programming languages, the outer-scope data environments are not always nested in time. These languages organize their procedure 'activation records' as separate heap objects rather than as stack frames appended to a linear stack. In simple languages like Forth that lack local variables and naming of parameters, stack frames would contain nothing more than return branch addresses and frame management overhead. So their return stack holds bare return addresses rather than frames. The return stack is separate from the data value stack, to improve the flow of call setup and returns.
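The frame-pointer chain Wirth used can be sketched like this (a hypothetical model of static links; real compilers store the link inside the frame's memory image rather than in an object):

```python
class Frame:
    """An activation record carrying a static link to the frame of the
    lexically enclosing procedure, forming a chain of frame pointers."""
    def __init__(self, static_link=None, **variables):
        self.static_link = static_link
        self.variables = dict(variables)

def uplevel_read(frame, hops, name):
    """Follow `hops` static links outward, then read a local variable of
    the frame reached: one pointer chase per lexical level, with no
    arrays of frame pointers to keep up to date."""
    for _ in range(hops):
        frame = frame.static_link
    return frame.variables[name]
```

For example, with `outer = Frame(None, limit=10)`, `mid = Frame(outer)`, and `inner = Frame(mid)`, the call `uplevel_read(inner, 2, "limit")` reaches the outermost frame and returns 10.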


Comparison with register machines

Stack machines are often compared to register machines, which hold values in an array of registers. Register machines may store stack-like structures in this array, but a register machine has instructions which circumvent the stack interface. Register machines routinely outperform stack machines, and stack machines have remained a niche player in hardware systems. But stack machines are often used in implementing virtual machines because of their simplicity and ease of implementation.


Instructions

Stack machines have higher code density. In contrast to common stack machine instructions, which can easily fit in 6 bits or less, register machines require two or three register-number fields per ALU instruction to select operands; the densest register machines average about 16 bits per instruction plus the operands. Register machines also use a wider offset field for load-store opcodes. A stack machine's compact code naturally fits more instructions in cache, and therefore could achieve better cache efficiency, reducing memory costs or permitting faster memory systems for a given cost. In addition, most stack-machine instructions are very simple, made from only one opcode field or one operand field. Thus, stack machines require very little electronic resources to decode each instruction.

A program has to execute more instructions when compiled to a stack machine than when compiled to a register machine or memory-to-memory machine. Every variable load or constant requires its own separate load instruction, instead of being bundled within the instruction which uses that value. The separated instructions may be simple and faster running, but the total instruction count is still higher.

Most register interpreters specify their registers by number. But a host machine's registers can't be accessed in an indexed array, so a memory array is allotted for virtual registers. Therefore, the instructions of a register interpreter must use memory for passing generated data to the next instruction. This forces register interpreters to be much slower on microprocessors made with a fine process rule (i.e. faster transistors without improving circuit speeds, such as the Haswell x86). These require several clocks for memory access, but only one clock for register access.
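The density argument can be made concrete with invented but plausible encodings (the byte counts below are assumptions for illustration, not measurements of any real instruction set):

```python
def stack_bytes(code):
    """Zero-address encoding: a 1-byte opcode, plus a 1-byte operand
    field only on pushes (assumed sizes)."""
    return sum(2 if op.startswith("push") else 1 for op in code)

def register_bytes(code):
    """Three-address encoding: 2 bytes of opcode and register fields per
    instruction, plus a 2-byte address field on loads (assumed sizes)."""
    return sum(4 if op.startswith("load") else 2 for op in code)

# A*(B-C) compiled both ways: same instruction count, different density.
stack_code = ["push A", "push B", "push C", "subtract", "multiply"]
register_code = ["load r1,A", "load r2,B", "load r3,C",
                 "sub r2,r2,r3", "mul r1,r1,r2"]
```

Under these assumed encodings the stack version takes 8 bytes against 16 for the register version, even though both need five instructions here.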
In the case of a stack machine with a data forwarding circuit instead of a register file, stack interpreters can allot the host machine's registers for the top several operands of the stack instead of the host machine's memory. In a stack machine, the operands used in the instructions are always at a known offset (set in the stack pointer) from a fixed location (the bottom of the stack, which in a hardware design might always be at memory location zero), saving precious in-cache or in-CPU storage from being used to store quite so many memory addresses or index numbers. This may preserve such registers and cache for use in non-flow computation.


Temporary / local values

Some in the industry believe that stack machines execute more data cache cycles for temporary values and local variables than do register machines. On stack machines, temporary values often get spilled into memory, whereas on machines with many registers these temps usually remain in registers. (However, these values often need to be spilled into "activation frames" at the end of a procedure's definition, basic block, or at the very least, into a memory buffer during interrupt processing.) Values spilled to memory add more cache cycles. This spilling effect depends on the number of hidden registers used to buffer top-of-stack values, upon the frequency of nested procedure calls, and upon host computer interrupt processing rates.

On register machines using optimizing compilers, it is very common for the most-used local variables to remain in registers rather than in stack frame memory cells. This eliminates most data cache cycles for reading and writing those values. The development of "stack scheduling" for performing live-variable analysis, and thus retaining key variables on the stack for extended periods, helps this concern.

On the other hand, register machines must spill many of their registers to memory across nested procedure calls. The decision of which registers to spill, and when, is made statically at compile time rather than on the dynamic depth of the calls. This can lead to more data cache traffic than in an advanced stack machine implementation.


Common subexpressions

In register machines, a common subexpression (a subexpression which is used multiple times with the same result value) can be evaluated just once and its result saved in a fast register. The subsequent reuses have no time or code cost, just a register reference. This optimization speeds simple expressions (for example, loading variable X or pointer P) as well as less-common complex expressions. With stack machines, in contrast, results can be stored in one of two ways. Firstly, results can be stored using a temporary variable in memory. Storing and subsequent retrievals cost additional instructions and additional data cache cycles. Doing this is only a win if the subexpression computation costs more in time than fetching from memory, which is almost always the case on most stack CPUs. It is never worthwhile for simple variables and pointer fetches, because those already have the same cost of one data cache cycle per access. It is only marginally worthwhile for slightly more complex expressions. These simpler expressions make up the majority of redundant, optimizable expressions in programs written in languages other than concatenative languages. An optimizing compiler can only win on redundancies that the programmer could have avoided in the source code. The second way leaves a computed value on the data stack, duplicating it as needed. This uses operations to copy stack entries. The stack must be shallow enough for the CPU's available copy instructions. Hand-written stack code often uses this approach, and can achieve speeds like those of general-purpose register machines. Unfortunately, algorithms for optimal "stack scheduling" are not in wide use by programming languages.
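The two approaches can be illustrated with a minimal postfix evaluator in Python (the opcode names are invented for illustration): reusing a common subexpression via a stack copy costs no memory traffic, while the temporary-variable route costs a store and loads:

```python
# Minimal postfix evaluator showing the two ways a stack machine can
# reuse a common subexpression: duplicating it on the stack, or
# spilling it to a temporary memory cell.

def evaluate(program, env):
    stack, temps = [], {}
    for op in program:
        if op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":
            stack.append(stack[-1])          # copy top of stack
        elif isinstance(op, tuple) and op[0] == "store":
            temps[op[1]] = stack.pop()       # spill to a temp (memory)
        elif isinstance(op, tuple) and op[0] == "load":
            stack.append(temps[op[1]])       # fetch the temp back
        else:
            stack.append(env.get(op, op))    # variable or literal
    return stack.pop()

env = {"x": 3, "y": 4}
# Compute (x+y) * (x+y), reusing x+y with DUP -- no memory traffic:
via_dup = evaluate(["x", "y", "+", "dup", "*"], env)
# Same value via a temporary variable -- costs a store and two loads:
via_temp = evaluate(["x", "y", "+", ("store", "t"),
                     ("load", "t"), ("load", "t"), "*"], env)
print(via_dup, via_temp)  # both 49
```

The DUP route works only while the shared value sits near enough to the top of the stack for the copy instruction to reach it, which is the depth restriction mentioned above.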


Pipelining

In modern machines, the time to fetch a variable from the data cache is often several times longer than the time needed for basic ALU operations. A program runs faster without stalls if its memory loads can be started several cycles before the instruction that needs that variable. Complex machines can do this with a deep pipeline and "out-of-order execution" that examines and runs many instructions at once. Register machines can even do this with much simpler "in-order" hardware, a shallow pipeline, and slightly smarter compilers. The load step becomes a separate instruction, and that instruction is statically scheduled much earlier in the code sequence. The compiler puts independent steps in between. Scheduling memory accesses requires explicit, spare registers. It is not possible on stack machines without exposing some aspect of the micro-architecture to the programmer. For the expression A B -, B must be evaluated and pushed immediately prior to the minus step. Without stack permutation or hardware multithreading, relatively little useful code can be put in between while waiting for the load of B to finish. Stack machines can work around the memory delay either by having a deep out-of-order execution pipeline covering many instructions at once, or, more likely, by permuting the stack so that they can work on other workloads while the load completes, or by interlacing the execution of different program threads, as in the Unisys A9 system. Today's increasingly parallel computational loads suggest, however, that this might not be the disadvantage it has been made out to be in the past. Stack machines can omit the operand-fetching stage of a register machine. For example, in the Java Optimized Processor (JOP) microprocessor the top two operands of the stack directly enter a data forwarding circuit that is faster than the register file.
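A toy in-order pipeline model in Python can show the effect (the 3-cycle load latency and 1-cycle ALU latency are illustrative assumptions): hoisting the loads and filling the gap with independent instructions hides the load latency that the naive ordering exposes as a stall:

```python
# Toy in-order pipeline model: a load's result is ready 3 cycles after
# issue, ALU results 1 cycle after issue, and an instruction stalls
# until all its source registers are ready.

LOAD_LATENCY = 3

def cycles(program):
    clock = 0
    ready = {}                      # register -> cycle its value is ready
    for op, dest, srcs in program:
        clock = max([clock] + [ready[s] for s in srcs])   # wait for inputs
        clock += 1                  # issue takes one cycle
        ready[dest] = clock + (LOAD_LATENCY - 1 if op == "load" else 0)
    return clock

# A B - with the load of B issued right before the subtract: it stalls.
naive = cycles([
    ("load", "rA", []),
    ("load", "rB", []),
    ("sub",  "r0", ["rA", "rB"]),   # waits for rB to arrive
    ("add",  "r1", []),             # independent work (sources elided)
    ("add",  "r2", []),
    ("add",  "r3", []),
])

# Same six instructions, with the independent adds scheduled into the
# load shadow, as a compiler for a register machine would arrange.
scheduled = cycles([
    ("load", "rA", []),
    ("load", "rB", []),
    ("add",  "r1", []),
    ("add",  "r2", []),
    ("add",  "r3", []),
    ("sub",  "r0", ["rA", "rB"]),
])
print(naive, scheduled)  # 8 vs 6 cycles
```

The scheduled version needs the spare registers rA and rB to hold the in-flight loads, which is exactly the resource a pure stack machine lacks.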


Out-of-order execution

The Tomasulo algorithm finds
instruction-level parallelism
by issuing instructions as their data becomes available. Conceptually, the addresses of positions in a stack are no different than the register indexes of a register file. This view permits the
out-of-order execution
of the Tomasulo algorithm to be used with stack machines. Out-of-order execution in stack machines seems to reduce or avoid many theoretical and practical difficulties. The cited research shows that such a stack machine can exploit instruction-level parallelism, and the resulting hardware must cache data for the instructions. Such machines effectively bypass most memory accesses to the stack. The result achieves throughput (instructions per
clock
) comparable to load–store architecture machines, with much higher code densities (because operand addresses are implicit). One issue brought up in the research was that it takes about 1.88 stack-machine instructions to do the work of one instruction on a load-store architecture machine. Competitive out-of-order stack machines therefore require about twice as many electronic resources to track instructions ("issue stations"). This might be compensated by savings in instruction cache and memory and instruction decoding circuits.


Hides a faster register machine inside

Some simple stack machines have a chip design which is fully customized all the way down to the level of individual registers. The top of stack address register and the N top of stack data buffers are built from separate individual register circuits, with separate adders and ad hoc connections. However, most stack machines are built from larger circuit components where the N data buffers are stored together within a register file and share read/write buses. The decoded stack instructions are mapped into one or more sequential actions on that hidden register file. Loads and ALU ops act on a few topmost registers, and implicit spills and fills act on the bottommost registers. The decoder allows the instruction stream to be compact. But if the code stream instead had explicit register-select fields which directly manipulated the underlying register file, the compiler could make better use of all registers and the program would run faster. Microprogrammed stack machines are an example of this. The inner microcode engine is some kind of RISC-like register machine or a
VLIW
-like machine using multiple register files. When controlled directly by task-specific microcode, that engine gets much more work completed per cycle than when controlled indirectly by equivalent stack code for that same task. The object code translators for the
HP 3000
and
Tandem
T/16 are another example. They translated stack code sequences into equivalent sequences of RISC code. Minor 'local' optimizations removed much of the overhead of a stack architecture. Spare registers were used to factor out repeated address calculations. The translated code still retained plenty of emulation overhead from the mismatch between original and target machines. Despite that burden, the cycle efficiency of the translated code matched the cycle efficiency of the original stack code. And when the source code was recompiled directly to the register machine via optimizing compilers, the efficiency doubled. This shows that the stack architecture and its non-optimizing compilers were wasting over half of the power of the underlying hardware. Register files are good tools for computing because they have high bandwidth and very low latency, compared to memory references via data caches. In a simple machine, the register file allows reading two independent registers and writing of a third, all in one ALU cycle with one-cycle or less latency. Whereas the corresponding data cache can start only one read or one write (not both) per cycle, and the read typically has a latency of two ALU cycles. That's one third of the throughput at twice the pipeline delay. In a complex machine like
Athlon
that completes two or more instructions per cycle, the register file allows reading of four or more independent registers and writing of two others, all in one ALU cycle with one-cycle latency. Whereas the corresponding dual-ported data cache can start only two reads or writes per cycle, with multiple cycles of latency. Again, that's one third of the throughput of registers. It is very expensive to build a cache with additional ports. Since a stack is a component of most software programs, even when the software used is not strictly a stack machine, a hardware stack machine might more closely mimic the inner workings of its programs. Processor registers have a high thermal cost, and a stack machine might claim higher energy efficiency.
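The mapping described above, where a stack instruction's implicit operands become selects into a hidden register file tracked by a top-of-stack index, can be sketched in Python (the register-file size and opcode names are invented for illustration; a real decoder would also spill to memory when the file overflows):

```python
# Sketch of a decoder mapping compact stack instructions onto a hidden
# register file: the top-of-stack index turns each implicit operand
# into an explicit register select.

class HiddenRegisterFile:
    def __init__(self, size=8):
        self.regs = [0] * size
        self.tos = -1            # index of the top-of-stack register

    def execute(self, op, operand=None):
        if op == "push":
            self.tos += 1
            self.regs[self.tos] = operand           # one register write
        elif op == "add":
            # implicit stack operands become register selects:
            a, b = self.regs[self.tos - 1], self.regs[self.tos]
            self.tos -= 1
            self.regs[self.tos] = a + b             # result replaces them
        elif op == "pop":
            value = self.regs[self.tos]
            self.tos -= 1
            return value

rf = HiddenRegisterFile()
for op, arg in [("push", 2), ("push", 3), ("add", None)]:
    rf.execute(op, arg)
result = rf.execute("pop")
print(result)  # 5
```

The stack opcodes carry no register fields, which keeps the instruction stream compact; the cost, as the section notes, is that the compiler cannot address the underlying registers directly.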


Interrupts

Responding to an interrupt involves saving the registers to a stack, and then branching to the interrupt handler code. Often stack machines respond more quickly to interrupts, because most parameters are already on a stack and there is no need to push them there. Some register machines deal with this by having multiple register files that can be instantly swapped, but this increases costs and slows down the register file.


Interpreters

Interpreters for virtual stack machines are easier to build than interpreters for register machines; the logic for handling memory address modes is in just one place rather than repeated in many instructions. Stack machines also tend to have fewer variations of an opcode; one generalized opcode will handle both frequent cases and obscure corner cases of memory references or function call setup. (But code density is often improved by adding short and long forms for the same operation.) Interpreters for virtual stack machines are often slower than interpreters for other styles of virtual machine. This slowdown is worst when running on host machines with deep execution pipelines, such as current x86 chips. In some interpreters, the interpreter must execute an N-way switch jump to decode the next opcode and branch to its steps for that particular opcode. Another method for selecting opcodes is threaded code. The host machine's prefetch mechanisms are unable to predict and fetch the target of that indexed or indirect jump. So the host machine's execution pipeline must restart each time the hosted interpreter decodes another virtual instruction. This happens more often for virtual stack machines than for other styles of virtual machine. One example is the
Java
programming language. Its canonical
virtual machine
is specified as a stack machine with 8-bit opcodes. However, the Dalvik virtual machine for Java used on Android
smartphones
is a 16-bit virtual-register machine, a choice made for efficiency reasons. Arithmetic instructions directly fetch or store local variables via 4-bit (or larger) instruction fields. Similarly, version 5.0 of Lua replaced its virtual stack machine with a faster virtual register machine. Since the Java virtual machine became popular, microprocessors have employed advanced
branch predictors for indirect jumps. This advance avoids most pipeline restarts from N-way jumps and eliminates much of the instruction-count cost that affects stack interpreters.
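The N-way dispatch described above can be sketched as a tiny bytecode interpreter in Python (the bytecode format and opcode numbering are invented for illustration); the table lookup plays the role of the indirect jump that stresses the host's branch predictor:

```python
# Minimal stack-bytecode interpreter: one table lookup and indirect
# call per virtual instruction -- the dispatch step that host pipelines
# historically failed to predict.

def run(bytecode, consts):
    stack, pc = [], 0

    def op_push():
        nonlocal pc
        stack.append(consts[bytecode[pc]])   # operand: constant index
        pc += 1

    def op_add():
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)

    def op_mul():
        b, a = stack.pop(), stack.pop()
        stack.append(a * b)

    dispatch = {0: op_push, 1: op_add, 2: op_mul}   # opcode -> handler
    while pc < len(bytecode):
        opcode = bytecode[pc]
        pc += 1
        dispatch[opcode]()       # the N-way indirect "jump"
    return stack.pop()

# (2 + 3) * 4 as stack bytecode: push 2, push 3, add, push 4, mul
result = run([0, 0, 0, 1, 1, 0, 2, 2], consts=[2, 3, 4])
print(result)  # 20
```

Because stack code uses more, smaller instructions for the same work, this dispatch overhead is paid more often than in a virtual register machine, which is the efficiency argument behind Dalvik and Lua 5.0.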


See also

* Stack-oriented programming language * Concatenative programming language * Comparison of application virtual machines * SECD machine * Accumulator machine * Belt machine *
Random-access machine


References

{{reflist, refs= {{cite book , title=Computer architecture: Concepts and evolution , author-first1=Gerrit Anne , author-last1=Blaauw , author-link1=Gerrit Anne Blaauw , author-first2=Frederick Phillips , author-last2=Brooks, Jr. , author-link2=Frederick Phillips Brooks , publisher= Addison-Wesley Longman Publishing Co., Inc. , publication-place=Boston, Massachusetts, USA , date=1997 {{cite web , title=ZPU - the world's smallest 32-bit CPU with a GCC tool-chain: Overview , url=http://opencores.org/project,zpu , publisher=opencores.org , access-date=7 February 2015 {{cite web , url=https://www.greenarraychips.com/home/documents/index.php#F18A , title=Documents , at=F18A Technology , website=GreenArrays, Inc. , access-date=7 July 2022 {{cite web , url=http://www.colorforth.com/inst.htm , title=colorForth Instructions , website=Colorforth.com , access-date=8 October 2017 , archive-url=https://web.archive.org/web/20160310112802/http://colorforth.com/inst.htm , archive-date=10 March 2016 , url-status=dead (Instruction set of the F18A cores, named colorForth for historical reasons.) {{cite web , author-last=Koopman, Jr. , author-first=Philip John , url=http://www.ece.cmu.edu/~koopman/stack_computers/ , title=Stack Computers: the new wave , website=Ece.cmu.edu , access-date=8 October 2017 {{cite web , author-first1=Steve , author-last1=Sinha , author-first2=Satrajit , author-last2=Chatterji , author-first3=Kaushik , author-last3=Ravindran , title=BOOST: Berkeley's Out of Order Stack Thingy , url=https://www.researchgate.net/publication/228556746 , website=Research Gate , access-date=11 November 2023 {{cite magazine , first=Bob , last=Beard , magazine=Computer RESURRECTION , date=Autumn 1997 , url=http://www.cs.man.ac.uk/CCS/res/res18.htm#c , title=The KDF9 Computer - 30 Years On {{cite journal , author-last=Koopman, Jr. 
, author-first=Philip John , title=A Preliminary Exploration of Optimized Stack Code Generation , journal=Journal of Forth Applications and Research , date=1994 , volume=6 , issue=3 , url=http://www.ece.cmu.edu/~koopman/stack_compiler/stack_co.pdf {{cite conference , author-last=Bailey , author-first=Chris , title=Inter-Boundary Scheduling of Stack Operands: A preliminary Study , book-title=Proceedings of Euroforth 2000 Conference , date=2000 , url=http://www.complang.tuwien.ac.at/anton/euroforth/ef00/bailey00.pdf {{cite conference , author-last1=Shannon , author-first1=Mark , author-last2=Bailey , author-first2=Chris , title=Global Stack Allocation: Register Allocation for Stack Machines , book-title=Proceedings of Euroforth Conference 2006 , date=2006 , url=http://www.complang.tuwien.ac.at/anton/euroforth2006/papers/shannon.pdf {{cite conference , conference=1961 Western Joint IRE-AIEE-ACM Computer Conference , title=A new approach to the functional design of a digital computer , author-last=Barton , author-first=Robert S. , author-link=Robert S. Barton , date=9 May 1961 , book-title=Papers Presented at the 9–11 May 1961, Western Joint IRE-AIEE-ACM Computer Conference , pages=393–396 , doi=10.1145/1460690.1460736 , isbn=978-1-45037872-7 , s2cid=29044652 , url=https://dl.acm.org/doi/10.1145/1460690.1460736, url-access=subscription {{cite journal , journal=IEEE Annals of the History of Computing , title=A new approach to the functional design of a digital computer , author-last=Barton , author-first=Robert S. , author-link=Robert S. Barton , date=1987 , volume=9 , pages=11–15 , doi=10.1109/MAHC.1987.10002 , url=http://doi.ieeecomputersociety.org/10.1109/MAHC.1987.10002, url-access=subscription {{cite journal , url=http://hokiepokie.org/docs/EETimes.ps , title=The World's First Java Processor , author-first1=David A. , author-last1=Greve , author-first2=Matthew M. 
, author-last2=Wilding , journal=Electronic Engineering Times , date=12 January 1998 {{cite web , title=Mesa Processor Principles of Operation , url=https://digibarn.com/friends/alanfreier/princops/00yTableOfContents.html , website=DigiBarn Computer Museum , publisher=Xerox , access-date=20 September 2023 , archive-url=https://web.archive.org/web/20240514165724/https://digibarn.com/friends/alanfreier/princops/00yTableOfContents.html , archive-date=14 May 2024 , url-status=dead {{cite web , title=DigiBarn: The Xerox Star 8010 "Dandelion" , url=https://digibarn.com/collections/systems/xerox-8010/index.html , publisher=DigiBarn Computer Museum , access-date=20 September 2023 , archive-url=https://web.archive.org/web/20240503063200/https://digibarn.com/collections/systems/xerox-8010/index.html , archive-date=3 May 2024 , url-status=dead {{cite manual , url=https://en.wikichip.org/w/images/4/44/MARC4_4-bit_Microcontrollers_Programmer%27s_Guide.pdf , title=MARC4 4-bit Microcontrollers Programmer's Guide , publisher= Atmel {{cite web , url=http://www.colorforth.com/chips.html , title=Forth chips , website=Colorforth.com , access-date=8 October 2017 , url-status=dead , archive-url=https://web.archive.org/web/20060215200605/http://www.colorforth.com/chips.html , archive-date=15 February 2006 {{cite web , url=http://www.ultratechnology.com/f21.html , title=F21 Microprocessor Overview , website=Ultratechnology.com , access-date=8 October 2017 {{cite web , url=https://github.com/ForthHub/ForthFreak , title=ForthFreak wiki , date=25 August 2017 , access-date=8 October 2017 , website=GitHub.com {{cite web , url=https://www.developer.com/guides/a-java-chip-available-now/ , title=A Java chip available -- now! 
, website=Developer.com , date=8 April 1999 , access-date=7 July 2022 {{cite web , url=http://lundqvist.dyndns.org/Publications/thesis95/ThorGCC.pdf , title=Porting the GNU C Compiler to the Thor Microprocessor , date=4 December 1995 , access-date=30 March 2011 , url-status=dead , archive-url=https://web.archive.org/web/20110820085702/http://lundqvist.dyndns.org/Publications/thesis95/ThorGCC.pdf , archive-date=20 August 2011 {{cite book , author-last1=Randell , author-first1=Brian , author-link1=Brian Randell , author-last2=Russell , author-first2=Lawford John , url=http://www.softwarepreservation.org/projects/ALGOL/book/Randell_ALGOL_60_Implementation_1964.pdf , title=Algol 60 Implementation , location=London, UK , publisher=
Academic Press
, date=1964 , isbn=0-12-578150-4
{{cite conference , author-last1=Shi , author-first1=Yunhe , author-last2=Gregg , author-first2=David , author-last3=Beatty , author-first3=Andrew , author-last4=Ertl , author-first4=M. Anton , book-title=Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments , title=Virtual machine showdown: Stack versus registers , date=2005 , pages=153–163 , doi=10.1145/1064979.1065001 , isbn=1595930477 , s2cid=811512 {{cite book , author-last=Hyde , author-first=Randall , author-link=Randall Hyde , title=Write Great Code, Vol. 2: Thinking Low-Level, Writing High-Level , date=2004 , volume=2 , publisher= No Starch Press , isbn=978-1-59327-065-0 , page=391 , url=https://books.google.com/books?id=mM58oD4LATUC&dq=stack%20machines%20simplicity&pg=PA391 , access-date=30 June 2021 , language=en "Computer Architecture: A Quantitative Approach", John L. Hennessy, David Andrew Patterson; See the discussion of stack machines. {{cite book , title=Second-Generation Stack Computer Architecture , chapter=2.1 Lukasiewicz and the First Generation: 2.1.2 Germany: Konrad Zuse (1910–1995); 2.2 The First Generation of Stack Computers: 2.2.1 Zuse Z4 , author-first=Charles Eric , author-last=LaForest , type=thesis , publisher=
University of Waterloo
, location=Waterloo, Canada , date=April 2007 , page=8, 11, etc. , url=http://fpgacpu.ca/publications/Second-Generation_Stack_Computer_Architecture.pdf , access-date=2 July 2022 , url-status=live , archive-url=https://web.archive.org/web/20220120155616/http://fpgacpu.ca/publications/Second-Generation_Stack_Computer_Architecture.pdf , archive-date=20 January 2022 (178 pages)
{{cite manual , url=http://www.bitsavers.org/pdf/burroughs/LargeSystems/A-Series/MCP_3.6/1170057_Introduction_to_A_Series_Systems_3.6_Apr86.pdf , title=Introduction to A Series Systems , date=April 1986 , publisher=
Burroughs Corporation
, access-date=20 September 2023
{{cite web , url=http://www.jopdesign.com/doc/stack.pdf , title=Design and Implementation of an Efficient Stack Machine , website=Jopdesign.com , access-date=8 October 2017 {{cite journal , title=HP3000 Emulation on HP Precision Architecture Computers , author-first1=Arndt , author-last1=Bergh , author-first2=Keith , author-last2=Keilman , author-first3=Daniel , author-last3=Magenheimer , author-first4=James , author-last4=Miller , journal= Hewlett-Packard Journal , publisher=
Hewlett-Packard
, date=December 1987 , pages=87–89 , url=https://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1987-12.pdf , access-date=20 September 2023
{{cite conference , title=Migrating a CISC Computer Family onto RISC via Object Code Translation , author1=Kristy Andrews , author2=Duane Sand , book-title=Proceedings of ASPLOS-V , date=October 1992 8051 CPU Manual, Intel, 1980 {{cite web , title=Virtual Machine Showdown: Stack vs. Register Machine , author-first1=Yunhe , author-last1=Shi , author-first2=David , author-last2=Gregg , author-first3=Andrew , author-last3=Beatty , author-first4=M. Anton , author-last4=Ertle , url=http://usenix.org/events/vee05/full_papers/p153-yunhe.pdf , website=Usenix.org , access-date=8 October 2017 {{cite web , title=The Case for Virtual Register Machines , author-first1=Brian , author-last1=Davis , author-first2=Andrew , author-last2=Beatty , author-first3=Kevin , author-last3=Casey , author-first4=David , author-last4=Gregg , author-first5=John , author-last5=Waldron , url=https://www.scss.tcd.ie/David.Gregg/papers/Gregg-SoCP-2005.pdf , website=Scss.tcd.ie , access-date=20 September 2023 {{cite web , url=http://sites.google.com/site/io/dalvik-vm-internals/2008-05-29-Presentation-Of-Dalvik-VM-Internals.pdf?attredirects=0 , title=Presentation of Dalvik VM Internals , author-first=Dan , author-last=Bornstein , date=29 May 2008 , access-date=16 August 2010 , format=PDF , page=22 {{cite web , url=http://www.lua.org/doc/jucs05.pdf , title=The Implementation of Lua 5.0 , website=Lua.org , access-date=8 October 2017 {{cite web , url=http://www.inf.puc-rio.br/~roberto/talks/lua-ll3.pdf , title=The Virtual Machine of Lua 5.0 , website=Inf.puc-rio.br , access-date=8 October 2017 {{cite web , url=https://inria.hal.science/hal-01100647/document , title=Branch Prediction and the Performance of Interpreters - Don't Trust Folklore , website=Hal.inria.fr , access-date=20 September 2023 {{cite magazine , title=Stack Machine Development: Australia, Great Britain, and Europe , author-first=Fraser George , author-last=Duncan , location=University of Bristol, Bristol, Virginia, USA , magazine=
Computer
, id={{CODEN, CPTRB4 , s2cid=17013010 , doi=10.1109/MC.1977.315873 , issn=0018-9162 , eissn=1558-0814 , publisher= , volume=10 , date=1 May 1977 , issue=5 , pages=50–52 , url=https://csdl-downloads.ieeecomputer.org/mags/co/1977/05/01646485.pdf?Expires=1697369097&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jc2RsLWRvd25sb2Fkcy5pZWVlY29tcHV0ZXIub3JnL21hZ3MvY28vMTk3Ny8wNS8wMTY0NjQ4NS5wZGYiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2OTczNjkwOTd9fX1dfQ__&Signature=xUP0yvim4Anf0nWqRYKhw7EINRBgqttNgyV0fOBmg4jGQU~Uo1eP91Mw2CL34gK18qbzYjWRKwqifo7aVUL2hgxz~ZplAiqNXRqbLpbB4bYfoPiJNJ3x0AJmfERxcIG058YoTI8~uiEhmUNgjJkrfSMbqHwUoqit~4p7xFLfFBqiPau56WqdEngihf8OXuDeUxkMvCPgo2tGnN5GCoGY9-ALYc99IxqY8-ltGpsyauyASyerp42tY7E6r7T~6x75q8mjilSfo~tTpJMTdX2DpGepaobjf9D7MAXWv7iko038yLn8Kp8WxQceX6VX8fM85pPPYapXGK4HrPNnUIGeiw__&Key-Pair-Id=K12PMWTCQBDMDT , access-date=15 October 2023 , url-status=dead , archive-url=https://web.archive.org/web/20231015112418/https://csdl-downloads.ieeecomputer.org/mags/co/1977/05/01646485.pdf?Expires=1697369097&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jc2RsLWRvd25sb2Fkcy5pZWVlY29tcHV0ZXIub3JnL21hZ3MvY28vMTk3Ny8wNS8wMTY0NjQ4NS5wZGYiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2OTczNjkwOTd9fX1dfQ__&Signature=xUP0yvim4Anf0nWqRYKhw7EINRBgqttNgyV0fOBmg4jGQU~Uo1eP91Mw2CL34gK18qbzYjWRKwqifo7aVUL2hgxz~ZplAiqNXRqbLpbB4bYfoPiJNJ3x0AJmfERxcIG058YoTI8~uiEhmUNgjJkrfSMbqHwUoqit~4p7xFLfFBqiPau56WqdEngihf8OXuDeUxkMvCPgo2tGnN5GCoGY9-ALYc99IxqY8-ltGpsyauyASyerp42tY7E6r7T~6x75q8mjilSfo~tTpJMTdX2DpGepaobjf9D7MAXWv7iko038yLn8Kp8WxQceX6VX8fM85pPPYapXGK4HrPNnUIGeiw__&Key-Pair-Id=K12PMWTCQBDMDT , archive-date=15 October 2023 (3 pages)


External links


Homebrew CPU in an FPGA
— homebrew stack machine using FPGA

— homebrew stack machine using discrete logical circuits

— homebrew stack machine using bitslice/PLD
Second-Generation Stack Computer Architecture
— Thesis about the history and design of stack machines.