Concurrent computing is a form of computing in which several computations are executed ''concurrently''—during overlapping time periods—instead of ''sequentially''—with one completing before the next starts.
This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A ''concurrent system'' is one where a computation can advance without waiting for all other computations to complete.
Concurrent computing is a form of modular programming. In this paradigm, an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.
Introduction
The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing,[ Pike, Rob (2012-01-11). "Concurrency is not Parallelism". ''Waza conference'', 11 January 2012. Retrieved from http://talks.golang.org/2012/waza.slide (slides) and http://vimeo.com/49718712 (video).] although both can be described as "multiple processes executing ''during the same period of time''". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a single (one-core) processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process ''lifetimes'' overlapping, but execution need not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via
time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is ''paused'', another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
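This interleaving can be sketched in Python (an illustrative model, not from the article): each process is a generator that yields after every execution step, and a hypothetical round-robin scheduler resumes the processes one step at a time.

```python
# A minimal model of time-sliced interleaving: each "process" is a
# generator that yields after every unit of work, and a round-robin
# scheduler resumes them one at a time.

def process(name, steps):
    for i in range(steps):
        trace.append(f"{name}{i}")   # one unit of work
        yield                        # time slice expires; pause here

trace = []
tasks = [process("A", 3), process("B", 3)]
while tasks:
    task = tasks.pop(0)
    try:
        next(task)          # resume the task for one step
        tasks.append(task)  # not finished: back of the queue
    except StopIteration:
        pass                # task completed; drop it

print(trace)  # steps of A and B alternate: A0, B0, A1, B1, A2, B2
```

At any single instant only one process executes, yet both are part-way through for most of the run, which is exactly the concurrency-without-parallelism described above.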
Concurrent computations ''may'' be executed in parallel,
for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network. In general, however, the languages, tools, and techniques for parallel programming might not be suitable for concurrent programming, and vice versa.
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:
* T1 may be executed and finished before T2 or ''vice versa'' (serial ''and'' sequential)
* T1 and T2 may be executed alternately (serial ''and'' concurrent)
* T1 and T2 may be executed simultaneously at the same instant of time (parallel ''and'' concurrent)
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, ''concurrent/sequential'' and ''parallel/serial'' are used as opposing pairs. A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a ''serial schedule''. A set of tasks that can be scheduled serially is '' serializable'', which simplifies concurrency control.
Coordinating access to shared resources
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:

 1  bool withdraw(int withdrawal)
 2  {
 3      if (balance >= withdrawal)
 4      {
 5          balance -= withdrawal;
 6          return true;
 7      }
 8      return false;
 9  }
Suppose balance = 500, and two concurrent ''threads'' make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.
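The lost-update interleaving can be reproduced deterministically with a sketch (illustrative only; the generator-based `withdraw` below models the pseudocode, not a real banking API): each withdrawal is split at the point between its check and its update, and both checks are run before either update.

```python
# A sketch reproducing the problematic interleaving deterministically:
# each withdraw() is split between the balance check (line 3) and the
# subtraction (line 5), mimicking a context switch at that point.

balance = 500

def withdraw(amount):
    global balance
    if balance >= amount:   # line 3: check passes
        yield               # simulated context switch
        balance -= amount   # line 5: update

t1, t2 = withdraw(300), withdraw(350)
next(t1)  # T1 checks: 500 >= 300, pauses before subtracting
next(t2)  # T2 checks: 500 >= 350, pauses before subtracting
for t in (t1, t2):
    try:
        next(t)  # both updates now run
    except StopIteration:
        pass

print(balance)  # -150: more was withdrawn than the account held
```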
Advantages
The advantages of concurrent computing include:
* Increased program throughput—parallel execution of a concurrent program allows the number of tasks completed in a given time to increase proportionally to the number of processors according to Gustafson's law
* High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.
* More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes.
Models
Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies.
Input/output automata were introduced in 1987.
Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems.
Software transactional memory borrows from database theory the concept of atomic transactions and applies them to memory accesses.
Consistency models
Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced.
One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".
Implementation
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
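As an illustrative sketch of the threads-within-one-process approach (using Python's standard threading module; the `compute` function and its arguments are hypothetical):

```python
# Each computation runs in its own thread, all inside a single
# operating system process sharing one address space.
import threading

results = {}

def compute(name, n):
    # a small stand-in computation: sum of the first n integers
    results[name] = sum(range(n))

threads = [
    threading.Thread(target=compute, args=("t1", 10)),
    threading.Thread(target=compute, args=("t2", 100)),
]
for t in threads:
    t.start()   # begin concurrent execution
for t in threads:
    t.join()    # wait for all computations to complete

print(results["t1"], results["t2"])  # 45 4950
```

Implementing each execution as a separate operating system process instead (e.g., via a multiprocessing facility) trades the shared address space for stronger isolation.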
Interaction and communication
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
;Shared memory communication: Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually needs the use of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe.
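A minimal sketch of lock-based shared-memory coordination (using Python's standard threading module; the counter example is illustrative, not from the article):

```python
# Shared-memory communication coordinated by a mutex: without the
# lock, the read-modify-write on `counter` could interleave across
# threads and lose updates; holding the lock makes it atomic.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # acquire the mutex
            counter += 1  # critical section: safe read-modify-write

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no updates lost
```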
;Message passing communication: Concurrent components communicate by exchanging messages (exemplified by MPI, Go, Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming. A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence.
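The message-passing style can be sketched as follows (illustrative only, using Python's standard queue module; the producer/consumer names are hypothetical):

```python
# Message-passing concurrency: producer and consumer share no mutable
# state and communicate only by exchanging messages through a queue.
import queue
import threading

mailbox = queue.Queue()
received = []

def producer():
    for i in range(5):
        mailbox.put(i)   # send a message
    mailbox.put(None)    # sentinel: no more messages

def consumer():
    while True:
        msg = mailbox.get()  # blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(received)  # [0, 1, 2, 3, 4]
```

Because all interaction goes through the queue, there is no shared mutable state to lock, which is one reason this style is often easier to reason about.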
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
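Returning to the implicit style mentioned at the start of this section, communication hidden behind futures can be sketched like this (using Python's standard concurrent.futures module; `slow_square` is a hypothetical task):

```python
# Communication hidden from the programmer via futures: submit()
# returns immediately with a Future, and the result is transferred
# implicitly when .result() is called.
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(slow_square, 6)  # returns a Future at once
    f2 = pool.submit(slow_square, 7)
    # no explicit message passing: each future delivers its value
    total = f1.result() + f2.result()

print(total)  # 85
```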
History
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).
The academic study of concurrent algorithms started in the 1960s, with Dijkstra's 1965 paper credited with being the first in this field, identifying and solving mutual exclusion.
Prevalence
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.
At the programming language level:
* Channel
* Coroutine
* Futures and promises
At the operating system level:
* Computer multitasking, including both cooperative multitasking and preemptive multitasking
** Time-sharing, which replaced sequential batch processing of jobs with concurrent use of a system
* Process
* Thread
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
Languages supporting concurrent programming
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as ''concurrency-oriented languages'' or ''concurrency-oriented programming languages'' (COPL).
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.
Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities:
* Ada—general purpose, with native support for message passing and monitor based concurrency
* Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs
* Alice—extension to Standard ML, adds support for concurrency via futures
* Ateji PX—extension to Java with parallel primitives inspired from π-calculus
* Axum—domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax
* BMDFM—Binary Modular DataFlow Machine
* C++—std::thread
* Cω (C omega)—for research, extends C#, uses asynchronous communication
* C#—supports concurrent computing using lock and yield; the async and await keywords were introduced in version 5.0
* Clojure—modern, functional dialect of Lisp on the Java platform
* Concurrent Clean—functional programming, similar to Haskell
* Concurrent Collections (CnC)—achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
* Concurrent Haskell—lazy, pure functional language operating concurrent processes on shared memory
* Concurrent ML—concurrent extension of Standard ML
* Concurrent Pascal—by Per Brinch Hansen
* Curry
* D—multi-paradigm system programming language with explicit support for concurrent programming (actor model)
* E—uses promises to preclude deadlocks
* ECMAScript—uses promises for asynchronous operations
* Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract
* Elixir—dynamic and functional meta-programming aware language running on the Erlang VM.
* Erlang—uses asynchronous message passing with nothing shared
* FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler
* Fortran—coarrays and ''do concurrent'' are part of the Fortran 2008 standard
* Go—for system programming, with a concurrent programming model based on CSP
* Haskell—concurrent, and parallel functional programming language
* Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channels patterns and message passing
* Io—actor-based concurrency
* Janus—features distinct ''askers'' and ''tellers'' to logical variables, bag channels; is purely declarative
* Java—thread class or Runnable interface
* Julia—"concurrent programming primitives: Tasks, async-wait, Channels."
* JavaScript—via web workers, in a browser environment, promises, and callbacks
* JoCaml—concurrent and distributed channel based, extension of OCaml, implements the join-calculus of processes
* Join Java—concurrent, based on Java language
* Joule—dataflow-based, communicates by message passing
* Joyce—concurrent, teaching, built on Concurrent Pascal with features from CSP by Per Brinch Hansen
* LabVIEW—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language
* Limbo—relative of Alef, for system programming in Inferno (operating system)
* MultiLisp—Scheme variant extended to support parallelism
* Modula-2—for system programming, by N. Wirth as a successor to Pascal with native support for coroutines
* Modula-3—modern member of Algol family with extensive support for threads, mutexes, condition variables
* Newsqueak—for research, with channels as first-class values; predecessor of Alef
* occam—influenced heavily by communicating sequential processes (CSP)
** occam-π—a modern variant of occam, which incorporates ideas from Milner's π-calculus
* Orc—heavily concurrent, nondeterministic, based on Kleene algebra
* Oz-Mozart—multiparadigm, supports shared-state and message-passing concurrency, and futures
* ParaSail—object-oriented, parallel, free of pointers, race conditions
* Pict—essentially an executable implementation of Milner's π-calculus
* Raku includes classes for threads, promises and channels by default
* Python—uses thread-based parallelism and process-based parallelism
* Reia—uses asynchronous message passing between shared-nothing objects
* Red/System—for system programming, based on Rebol
* Rust—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory.
* Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way
* SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions
* SR—for research
* SuperPascal—concurrent, for teaching, built on Concurrent Pascal and Joyce by Per Brinch Hansen
* Swift—built-in support for writing asynchronous and parallel code in a structured way
* Unicon—for research
* TNSDL—for developing telecommunication exchanges, uses asynchronous message passing
* VHSIC Hardware Description Language (VHDL)—IEEE STD-1076
* XC—concurrency-extended subset of C language developed by XMOS, based on communicating sequential processes, built-in constructs for programmable I/O
Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.
See also
* Asynchronous I/O
* Chu space
* Flow-based programming
* Java ConcurrentMap
* List of important publications in concurrent, parallel, and distributed computing
* Ptolemy Project
* Sheaf (mathematics)
* Structured concurrency
* Transaction processing
Notes
References
External links
* Concurrent Systems Virtual Library
{{Types of programming languages}}
Operating system technology