In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an ''implementation'' of that ISA.
In general, an ISA defines the supported instructions, data types, registers, the hardware support for managing main memory, fundamental features (such as the memory consistency, addressing modes, virtual memory), and the input/output model of implementations of the ISA.
An ISA specifies the behavior of
machine code running on implementations of that ISA in a fashion that does not depend on the characteristics of that implementation, providing
binary compatibility between implementations. This enables multiple implementations of an ISA that differ in characteristics such as
performance, physical size, and monetary cost (among other things), but that are capable of running the same machine code, so that a lower-performance, lower-cost machine can be replaced with a higher-cost, higher-performance machine without having to replace software. It also enables the evolution of the
microarchitectures of the implementations of that ISA, so that a newer, higher-performance implementation of an ISA can run software that runs on previous generations of implementations.
If an
operating system maintains a standard and compatible application binary interface
(ABI) for a particular ISA, machine code will run on future implementations of that ISA and operating system. However, if an ISA supports running multiple operating systems, it does not guarantee that machine code for one operating system will run on another operating system, unless the first operating system supports running machine code built for the other operating system.
An ISA can be extended by adding instructions or other capabilities, or adding support for larger addresses and data values; an implementation of the extended ISA will still be able to execute
machine code for versions of the ISA without those extensions. Machine code using those extensions will only run on implementations that support those extensions.
The binary compatibility that they provide makes ISAs one of the most fundamental abstractions in computing.
Overview
An instruction set architecture is distinguished from a
microarchitecture, which is the set of
processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the
Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but they have radically different internal designs.
The concept of an ''architecture'', distinct from the design of a specific machine, was developed by
Fred Brooks at IBM during the design phase of
System/360.
Some virtual machines that support bytecode as their ISA, such as Smalltalk, the Java virtual machine, and Microsoft's Common Language Runtime, implement this by translating the bytecode for commonly used code paths into native machine code. In addition, these virtual machines execute less frequently used code paths by interpretation (see: Just-in-time compilation).
Transmeta implemented the x86 instruction set atop
very long instruction word (VLIW) processors in this fashion.
Classification of ISAs
An ISA may be classified in a number of different ways. A common classification is by architectural ''complexity''. A
complex instruction set computer (CISC) has many specialized instructions, some of which may only be rarely used in practical programs. A
reduced instruction set computer (RISC) simplifies the processor by efficiently implementing only the instructions that are frequently used in programs, while the less common operations are implemented as subroutines, whose additional execution time is offset by their infrequent use.
Other types include VLIW architectures and the closely related ''explicitly parallel instruction computing'' (EPIC) architectures. These architectures seek to exploit instruction-level parallelism with less hardware than RISC and CISC by making the compiler responsible for instruction issue and scheduling.
Architectures with even less complexity have been studied, such as the
minimal instruction set computer (MISC) and
one-instruction set computer (OISC). These are theoretically important types, but have not been commercialized.
Instructions
Machine language is built up from discrete ''statements'' or ''instructions''. On the processing architecture, a given instruction may specify:
*opcode (the instruction to be performed) e.g. add, copy, test
*any explicit operands:
::registers
::literal/constant values
::addressing modes used to access memory
More complex operations are built up by combining these simple instructions, which are executed sequentially, or as otherwise directed by control flow instructions.
Instruction types
Examples of operations common to many instruction sets include:
Data handling and memory operations
*''Set'' a
register to a fixed constant value.
*''Copy'' data from a memory location or a register to a memory location or a register (a machine instruction is often called ''move''; however, the term is misleading). They are used to store the contents of a register, the contents of another memory location or the result of a computation, or to retrieve stored data to perform a computation on it later. They are often called
''load'' or ''store'' operations.
*''Read'' or ''write'' data from hardware devices.
Arithmetic and logic operations
*''Add'', ''subtract'', ''multiply'', or ''divide'' the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register (a minimal sketch of such a flag update follows this list).
**''Increment'', ''decrement'' in some ISAs, saving operand fetch in trivial cases.
*Perform bitwise operations, e.g., taking the ''conjunction'' and ''disjunction'' of corresponding bits in a pair of registers, taking the ''negation'' of each bit in a register.
*''Compare'' two values in registers (for example, to see if one is less, or if they are equal).
*''Floating-point instructions'' for arithmetic on floating-point numbers.
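As a rough illustration of how arithmetic instructions set condition codes, the following C sketch simulates a single ''add'' updating a hypothetical four-flag status register (zero, negative, carry, signed overflow). The flag names, layout, and function are invented for illustration and do not describe any particular ISA.

 #include <stdint.h>
 #include <stdio.h>
 
 /* Hypothetical status-register flags for a toy ISA (names are illustrative). */
 enum { FLAG_Z = 1 << 0, FLAG_N = 1 << 1, FLAG_C = 1 << 2, FLAG_V = 1 << 3 };
 
 /* Add two 32-bit register values and update a simulated status register,
  * the way many ISAs set condition codes as a side effect of arithmetic. */
 static uint32_t add_set_flags(uint32_t a, uint32_t b, uint32_t *status)
 {
     uint64_t wide = (uint64_t)a + b;          /* widen to detect carry out */
     uint32_t r = (uint32_t)wide;
 
     uint32_t flags = 0;
     if (r == 0)               flags |= FLAG_Z;   /* zero            */
     if (r & 0x80000000u)      flags |= FLAG_N;   /* negative        */
     if (wide >> 32)           flags |= FLAG_C;   /* carry out       */
     if (~(a ^ b) & (a ^ r) & 0x80000000u)
                               flags |= FLAG_V;   /* signed overflow */
     *status = flags;
     return r;
 }
 
 int main(void)
 {
     uint32_t status;
     uint32_t r = add_set_flags(0xFFFFFFFFu, 1, &status);
     printf("result=%u flags=0x%x\n", r, status);   /* result=0, Z and C set */
     return 0;
 }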
Control flow operations
*''
Branch'' to another location in the program and execute instructions there.
*''
Conditionally branch'' to another location if a certain condition holds.
*''
Indirectly branch'' to another location.
*''Skip'' one or more instructions, depending on conditions.
*''Trap'': explicitly cause an interrupt, either conditionally or unconditionally.
*''
Call'' another block of code, while saving, e.g., the location of the next instruction, as a point to return to.
Coprocessor instructions
*Load/store data to and from a coprocessor or exchange it with CPU registers.
*Perform coprocessor operations.
Complex instructions
Processors may include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are
typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include:
*transferring multiple registers to or from memory (especially the
stack) at once
*moving large blocks of memory (e.g.
string copy or
DMA transfer)
*complicated integer and floating-point arithmetic (e.g. square root, or transcendental functions such as logarithm, sine, cosine, etc.)
*''SIMD instructions'', a single instruction performing an operation on many homogeneous values in parallel, possibly in dedicated SIMD registers
*performing an atomic
test-and-set instruction or other
read–modify–write atomic instruction
*instructions that perform
ALU operations with an operand from memory rather than a register
Complex instructions are more common in CISC instruction sets than in RISC instruction sets, but RISC instruction sets may include them as well. RISC instruction sets generally do not include ALU operations with memory operands, or instructions to move large blocks of memory, but most RISC instruction sets include
SIMD or
vector instructions that perform the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions can manipulate large vectors and matrices in minimal time, and they allow easy
parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as
MMX,
3DNow!, and
AltiVec.
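A minimal C sketch of the idea, assuming an x86 target with SSE support: each SSE intrinsic below corresponds closely to one SIMD instruction, so a single vector add operates on four single-precision values at once.

 #include <stdio.h>
 #include <xmmintrin.h>   /* SSE intrinsics: one intrinsic ~ one SIMD instruction */
 
 int main(void)
 {
     float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
     float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
     float c[4];
 
     __m128 va = _mm_loadu_ps(a);      /* load four packed floats             */
     __m128 vb = _mm_loadu_ps(b);
     __m128 vc = _mm_add_ps(va, vb);   /* one instruction adds all four lanes */
     _mm_storeu_ps(c, vc);
 
     printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);   /* 11 22 33 44 */
     return 0;
 }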
Instruction encoding

On traditional architectures, an instruction includes an opcode that specifies the operation to perform, such as ''add contents of memory to register''—and zero or more operand specifiers, which may specify registers, memory locations, or literal data. The operand specifiers may have addressing modes determining their meaning or may be in fixed fields. In very long instruction word (VLIW) architectures, which include many microcode architectures, multiple simultaneous opcodes and operands are specified in a single instruction.
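A minimal sketch of what fixed-field encoding looks like in practice: the C fragment below packs and unpacks a hypothetical 32-bit, three-register instruction with a 6-bit opcode and three 5-bit register specifiers. The field widths and positions are illustrative only, loosely modeled on classic fixed-length RISC layouts, and do not define any real ISA.

 #include <stdint.h>
 #include <stdio.h>
 
 /* Hypothetical 32-bit format: bits [31:26] opcode, [25:21] rd, [20:16] rs1,
  * [15:11] rs2, remaining bits unused. Field positions are illustrative only. */
 static uint32_t encode(uint32_t opcode, uint32_t rd, uint32_t rs1, uint32_t rs2)
 {
     return (opcode & 0x3F) << 26 | (rd & 0x1F) << 21 |
            (rs1 & 0x1F) << 16 | (rs2 & 0x1F) << 11;
 }
 
 int main(void)
 {
     uint32_t insn = encode(0x20 /* "add" */, 3, 1, 2);   /* add r3, r1, r2 */
 
     /* A decoder extracts the same fixed fields with shifts and masks. */
     printf("opcode=%u rd=%u rs1=%u rs2=%u\n",
            (insn >> 26) & 0x3F, (insn >> 21) & 0x1F,
            (insn >> 16) & 0x1F, (insn >> 11) & 0x1F);
     return 0;
 }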
Some exotic instruction sets, such as transport triggered architectures (TTA), do not have an opcode field, only operand(s).
Most stack machines have "0-operand" instruction sets in which arithmetic and logical operations lack any operand specifier fields; only instructions that push operands onto the evaluation stack or that pop operands from the stack into variables have operand specifiers. The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on the expression stack, not on data registers or arbitrary main memory cells. This can be very convenient for compiling high-level languages, because most arithmetic expressions can be easily translated into postfix notation.
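As a rough sketch of how such a machine evaluates postfix code, the following C fragment interprets a tiny, invented 0-operand instruction stream computing C = A+B (push a, push b, add, pop c). The opcodes and structures are made up for illustration; only push and pop name a memory cell, while add takes both operands implicitly from the top of the stack.

 #include <stdio.h>
 
 /* A toy 0-operand stack machine: only PUSH and POP carry an address field;
  * ADD takes both operands implicitly from the top of the expression stack. */
 enum op { PUSH, ADD, POP, HALT };
 struct insn { enum op op; int addr; };   /* addr used only by PUSH/POP */
 
 int main(void)
 {
     int mem[3] = {2, 5, 0};              /* mem[0]=A, mem[1]=B, mem[2]=C */
     struct insn prog[] = {               /* C = A + B in postfix form    */
         {PUSH, 0}, {PUSH, 1}, {ADD, 0}, {POP, 2}, {HALT, 0}
     };
 
     int stack[16], sp = 0;
     for (int pc = 0; prog[pc].op != HALT; pc++) {
         switch (prog[pc].op) {
         case PUSH: stack[sp++] = mem[prog[pc].addr]; break;
         case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
         case POP:  mem[prog[pc].addr] = stack[--sp]; break;
         default:   break;
         }
     }
     printf("C = %d\n", mem[2]);          /* prints C = 7 */
     return 0;
 }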
Conditional instructions often have a predicate field—a few bits that encode the specific condition to cause an operation to be performed rather than not performed. For example, a conditional branch instruction will transfer control if the condition is true, so that execution proceeds to a different part of the program, and not transfer control if the condition is false, so that execution continues sequentially. Some instruction sets also have conditional moves, so that the move will be executed, and the data stored in the target location, if the condition is true, and not executed, and the target location not modified, if the condition is false. Similarly, IBM
z/Architecture has a conditional store instruction. A few instruction sets include a predicate field in every instruction. Having predicates for non-branch instructions is called
predication.
Number of operands
Instruction sets may be categorized by the maximum number of operands ''explicitly'' specified in instructions.
(In the examples that follow, ''a'', ''b'', and ''c'' are (direct or calculated) addresses referring to memory cells, while ''reg1'' and so on refer to machine registers.) The running example in each case is the assignment C = A+B.
*0-operand (''zero-address machines''), so called stack machines: All arithmetic operations take place using the top one or two positions on the stack: push a, push b, add, pop c.
**C = A+B needs ''four instructions''. For stack machines, the terms "0-operand" and "zero-address" apply to arithmetic instructions, but not to all instructions, as 1-operand push and pop instructions are used to access memory.
*1-operand (''one-address machines''), so called accumulator machines, include early computers and many small microcontrollers: most instructions specify a single right operand (that is, constant, a register, or a memory location), with the implicit accumulator as the left operand (and the destination if there is one): load a, add b, store c.
**C = A+B needs ''three instructions''.
*2-operand — many CISC and RISC machines fall under this category:
**CISC — move A to ''C''; then add B to ''C''.
***C = A+B needs ''two instructions''. This effectively 'stores' the result without an explicit ''store'' instruction.
**CISC — Often machines are limited to one memory operand per instruction: load a,reg1; add b,reg1; store reg1,c. This requires a load/store pair for any memory movement regardless of whether the add result is an augmentation stored to a different place, as in C = A+B, or the same memory location: A = A+B.
***C = A+B needs ''three instructions''.
**RISC — Requiring explicit memory loads, the instructions would be: load a,reg1; load b,reg2; add reg1,reg2; store reg2,c.
***C = A+B needs ''four instructions''.
*3-operand, allowing better reuse of data:
**CISC — It becomes either a single instruction: add a,b,c.
***C = A+B needs ''one instruction''.
**CISC — Or, on machines limited to two memory operands per instruction: move a,reg1; add reg1,b,c.
***C = A+B needs ''two instructions''.
**RISC — arithmetic instructions use registers only, so explicit 2-operand load/store instructions are needed: load a,reg1; load b,reg2; add reg1+reg2->reg3; store reg3,c.
***C = A+B needs ''four instructions''.
***Unlike 2-operand or 1-operand, this leaves all three values a, b, and c in registers available for further reuse.
*more operands—some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the VAX "POLY" polynomial evaluation instruction.
Due to the large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, TI MSP430, and some versions of ARM Thumb. RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the ARM, AVR32, MIPS, Power ISA, and SPARC architectures.
Each instruction specifies some number of operands (registers, memory locations, or immediate values) ''explicitly''. Some instructions give one or both operands implicitly, such as by being stored on top of the stack or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the arity). Operands are either encoded in the "opcode" representation of the instruction, or else are given as values or addresses following the opcode.
Register pressure
''Register pressure'' measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be spilled into memory. Increasing the number of registers in an architecture decreases register pressure but increases the cost.
While embedded instruction sets such as Thumb suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like MIPS and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer.
Instruction length
The size or length of an instruction varies widely, from as little as four bits in some microcontrollers to many hundreds of bits in some VLIW systems. Processors used in personal computers, mainframes, and supercomputers have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15 bytes (120 bits). Within an instruction set, different instructions may have different lengths. In some architectures, notably most reduced instruction set computers (RISC), all instructions have a fixed length, typically corresponding with that architecture's word size. In other architectures, instructions have variable length, typically integral multiples of a byte or a halfword. Some, such as the ARM with ''Thumb-extension'', have ''mixed'' variable encoding, that is, two fixed, usually 32-bit and 16-bit encodings, where instructions cannot be mixed freely but must be switched between on a branch (or exception boundary in ARMv8).
Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary, for instance), and are therefore somewhat easier to optimize for speed.
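A tiny C sketch of variable-length decoding, under an invented rule in which the low two bits of the first byte give the instruction length (1, 2, 4, or 8 bytes): the decoder cannot know where the next instruction begins until it has examined the current one, which is one source of the extra complexity noted above. Real variable-length ISAs such as x86 use far more involved rules, but the decoding problem is the same in kind.

 #include <stdint.h>
 #include <stdio.h>
 #include <stddef.h>
 
 /* Invented rule: length in bytes = 1 << (first_byte & 3). Illustrative only. */
 static size_t insn_length(uint8_t first_byte)
 {
     return (size_t)1 << (first_byte & 3);
 }
 
 int main(void)
 {
     uint8_t code[] = {0x00,                                     /* 1 byte  */
                       0x01, 0xAA,                               /* 2 bytes */
                       0x02, 0x11, 0x22, 0x33,                   /* 4 bytes */
                       0x03, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xAA, /* 8 */
                       0x00};                                    /* 1 byte  */
     size_t pc = 0;
     while (pc < sizeof code) {
         size_t len = insn_length(code[pc]);
         printf("instruction at offset %zu, %zu byte(s)\n", pc, len);
         pc += len;   /* only now do we know where the next instruction begins */
     }
     return 0;
 }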
Code density
In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the ''code density'', was an important characteristic of any instruction set. It remained important on the initially-tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.
Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named ''Complex Instruction Set Computers'', CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.
''Reduced instruction-set computers'', RISC, were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed instruction length, whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.
Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.
Minimal instruction set computers (MISC) are commonly a form of stack machine, where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA (field-programmable gate array) or in a multi-core form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.
There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.
In practice, code density is also dependent on the compiler. Most optimizing compilers have options that control whether to optimize code generation for execution speed or for code density. For instance GCC has the option -Os to optimize for small machine code size, and -O3 to optimize for execution speed at the cost of larger machine code.
Representation
The instructions constituting a program are rarely specified using their internal, numeric form (machine code); they may be specified by programmers using an assembly language or, more commonly, may be generated from high-level programming languages by compilers.
Design
The design of instruction sets is a complex issue. There were two stages in the history of the microprocessor. The first was the CISC (complex instruction set computer), which had many different instructions. In the 1970s, however, places like IBM did research and found that many instructions in the set could be eliminated. The result was the RISC (reduced instruction set computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and cache efficiency, or simplify programming.
Some instruction set designers reserve one or more opcodes for some kind of system call or software interrupt. For example, MOS Technology 6502 uses 00H, Zilog Z80 uses the eight codes C7,CF,D7,DF,E7,EF,F7,FFH, while Motorola 68000 uses codes in the range 4E40H–4E4FH.
Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.
The NOP slide used in immunity-aware programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.
On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if the instruction set includes support for something such as " fetch-and-add", " load-link/store-conditional" (LL/SC), or "atomic compare-and-swap".
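For instance, a lock-free increment can be built on an atomic compare-and-swap, which C11 exposes through <stdatomic.h>. The sketch below shows only the retry loop; real non-blocking algorithms add memory-ordering considerations omitted here.

 #include <stdatomic.h>
 #include <stdio.h>
 
 /* Increment a shared counter without locks: read the old value, then try to
  * swap in old+1; if another thread changed the counter in between, the
  * compare-and-swap fails and we retry with the freshly observed value. */
 static void lock_free_increment(atomic_int *counter)
 {
     int old = atomic_load(counter);
     while (!atomic_compare_exchange_weak(counter, &old, old + 1)) {
         /* 'old' has been refreshed with the current value; loop and retry. */
     }
 }
 
 int main(void)
 {
     atomic_int counter = 0;
     for (int i = 0; i < 5; i++)
         lock_free_increment(&counter);
     printf("%d\n", atomic_load(&counter));   /* prints 5 */
     return 0;
 }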
Instruction set implementation
A given instruction set can be implemented in a variety of ways. All ways of implementing a particular instruction set provide the same programming model, and all implementations of that instruction set are able to run the same executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.
When designing the microarchitecture of a processor, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs, etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture.
There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises):
# Some computer designs "hardwire" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
# Other designs employ microcode routines or tables (or both) to do this, using ROMs or writable RAMs (writable control store), PLAs, or both.
Some microcoded CPU designs with a writable control store use it to allow the instruction set to be changed (for example, the Rekursiv processor and the Imsys Cjip).
CPUs designed for reconfigurable computing may use field-programmable gate arrays (FPGAs).
An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.
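In outline, such a software emulator is a fetch–decode–execute loop over the emulated machine's state. The following C sketch interprets a made-up four-instruction register ISA; the encoding, mnemonics, and register count are invented for illustration and are not those of any real architecture.

 #include <stdint.h>
 #include <stdio.h>
 
 /* Invented ISA: 4 registers; each instruction is {opcode, dst, src_or_imm}. */
 enum { OP_LOADI, OP_ADD, OP_PRINT, OP_HALT };
 struct insn { uint8_t op, dst, src; };
 
 int main(void)
 {
     struct insn program[] = {
         {OP_LOADI, 0, 2},   /* r0 <- 2       */
         {OP_LOADI, 1, 5},   /* r1 <- 5       */
         {OP_ADD,   0, 1},   /* r0 <- r0 + r1 */
         {OP_PRINT, 0, 0},   /* print r0      */
         {OP_HALT,  0, 0},
     };
     uint32_t reg[4] = {0};
     size_t pc = 0;
 
     for (;;) {                              /* fetch-decode-execute loop */
         struct insn i = program[pc++];      /* fetch and advance PC      */
         switch (i.op) {                     /* decode, then execute      */
         case OP_LOADI: reg[i.dst] = i.src;            break;
         case OP_ADD:   reg[i.dst] += reg[i.src];      break;
         case OP_PRINT: printf("%u\n", reg[i.dst]);    break;
         case OP_HALT:  return 0;
         }
     }
 }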
Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load–store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.
The demands of high-speed digital signal processing have pushed in the opposite direction—forcing instructions to be implemented in a particular way. For example, to perform digital filters fast enough, the MAC instruction in a typical digital signal processor (DSP) must use a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply–accumulate multiplier.
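The inner loop of a digital filter shows why the MAC instruction matters: each filter tap is one multiply–accumulate, and a DSP aims to retire one per cycle while fetching a coefficient and a data word in parallel. A plain C sketch of that loop, written as a generic FIR dot product rather than code for any particular DSP:

 #include <stdio.h>
 
 /* One output sample of an N-tap FIR filter: a chain of multiply-accumulates.
  * On a DSP, each iteration ideally maps to a single-cycle MAC instruction
  * fed by simultaneous fetches of a coefficient and a data word. */
 static float fir_sample(const float *coeff, const float *samples, int taps)
 {
     float acc = 0.0f;
     for (int i = 0; i < taps; i++)
         acc += coeff[i] * samples[i];   /* multiply-accumulate */
     return acc;
 }
 
 int main(void)
 {
     float coeff[4]   = {0.25f, 0.25f, 0.25f, 0.25f};   /* 4-tap moving average */
     float samples[4] = {1.0f, 2.0f, 3.0f, 4.0f};
     printf("%g\n", fir_sample(coeff, samples, 4));      /* prints 2.5 */
     return 0;
 }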
See also
* Comparison of instruction set architectures
* Compressed instruction set
* Computer architecture
* Emulator
* Instruction set simulator
* Micro-operation
* No instruction set computing
* OVPsim – full systems simulator providing ability to create/model/emulate any instruction set using C and standard APIs
* Processor design
* Simulation
* Register transfer language (RTL)
External links
* Programming Textfiles: Bowen's Instruction Summary Cards