Concurrent computing is a form of
computing
in which several
computations are executed ''concurrently''—during overlapping time periods—instead of ''sequentially''—with one completing before the next starts.
This is a property of a system—whether a
program,
computer, or a
network—where there is a separate execution point or "thread of control" for each process. A ''concurrent system'' is one where a computation can advance without waiting for all other computations to complete.
Concurrent computing is a form of
modular programming. In its
paradigm, an overall computation is
factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include
Edsger Dijkstra,
Per Brinch Hansen, and
C.A.R. Hoare.
Introduction
The concept of concurrent computing is frequently confused with the related but distinct concept of
parallel computing,
[ Pike, Rob (2012-01-11). "Concurrency is not Parallelism". ''Waza conference'', 11 January 2012. Retrieved from http://talks.golang.org/2012/waza.slide (slides) and http://vimeo.com/49718712 (video).] although both can be described as "multiple processes executing ''during the same period of time''". In parallel computing, execution occurs at the same physical instant: for example, on separate
processors of a
multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process ''lifetimes'' overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via
time-sharing
slices: only one process runs at a time, and if it does not complete during its time slice, it is ''paused'', another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
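As a rough illustration (not part of the original text; the class and method names are invented for this sketch), the following Java program simulates time-sliced execution on a single thread of control: two tasks are each broken into a sequence of steps, and a round-robin loop runs one step at a time, so both tasks are part-way through execution at any given moment even though only one step ever runs at a time.

import java.util.Iterator;
import java.util.List;

// Illustrative sketch of interleaved (time-sliced) execution: only one step
// runs at any instant, yet both "processes" are in progress at once.
public class Interleaving {
    static Iterator<Runnable> steps(String name) {
        return List.<Runnable>of(
                () -> System.out.println(name + ": step 1"),
                () -> System.out.println(name + ": step 2"),
                () -> System.out.println(name + ": step 3")
        ).iterator();
    }

    public static void main(String[] args) {
        Iterator<Runnable> t1 = steps("T1");
        Iterator<Runnable> t2 = steps("T2");
        // Round-robin scheduler: alternate between the two tasks until both finish.
        while (t1.hasNext() || t2.hasNext()) {
            if (t1.hasNext()) t1.next().run();
            if (t2.hasNext()) t2.next().run();
        }
    }
}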
Concurrent computations ''may'' be executed in parallel,
for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network.
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:
* T1 may be executed and finished before T2 or ''vice versa'' (serial ''and'' sequential)
* T1 and T2 may be executed alternately (serial ''and'' concurrent)
* T1 and T2 may be executed simultaneously at the same instant of time (parallel ''and'' concurrent)
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, ''concurrent/sequential'' and ''parallel/serial'' are used as opposing pairs. A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a ''serial schedule''. A set of tasks that can be scheduled serially is '' serializable'', which simplifies concurrency control.
Coordinating access to shared resources
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:
bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
Suppose balance = 500, and two concurrent ''threads'' make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.
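One common remedy, sketched below in Java (the Account class and field names are illustrative, not taken from the article), is to make the check and the update a single critical section guarded by a lock, so two concurrent withdrawals can no longer interleave between the comparison and the subtraction:

// Minimal sketch of lock-based concurrency control for the withdrawal example.
public class Account {
    private int balance = 500;

    public synchronized boolean withdraw(int withdrawal) {
        // The check and the update now execute atomically with respect to other
        // synchronized methods on this object, so two threads can no longer both
        // pass the check before either performs the subtraction.
        if (balance >= withdrawal) {
            balance -= withdrawal;
            return true;
        }
        return false;
    }
}

With this change, concurrent calls of withdraw(300) and withdraw(350) against a balance of 500 may both be attempted, but at most one of them can succeed.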
Advantages
The advantages of concurrent computing include:
* Increased program throughput—parallel execution of a concurrent algorithm allows the number of tasks completed in a given time to increase proportionally to the number of processors according to Gustafson's law.
* High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.
* More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes. For example, MVCC.
Models
Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies.
Input/output automata were introduced in 1987.
Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems.
Software transactional memory borrows from database theory the concept of atomic transactions and applies them to memory accesses.
Consistency models
Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory
occur and how results are produced.
One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".
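As a concrete illustration (not part of the original text; the class name is invented), the classic "store buffering" litmus test below is written in Java. Under sequential consistency the printed outcome can never be r1=0 r2=0, because in any single interleaved order one of the writes must precede both reads; declaring the fields volatile is one way Java provides that guarantee, whereas plain fields would leave the outcome permitted by the weaker Java memory model.

// Illustrative litmus test for sequential consistency.
public class StoreBuffering {
    static volatile int x = 0, y = 0;   // volatile accesses behave sequentially consistently
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Possible results: r1=1 r2=0, r1=0 r2=1, or r1=1 r2=1; never r1=0 r2=0.
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}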
Implementation
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
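As a rough sketch (not from the original text; the command passed to ProcessBuilder is just a placeholder), the two approaches look like this in Java: one concurrent execution runs as a thread inside the current process and shares its address space, while another runs as a separate operating-system process with its own address space.

import java.io.IOException;

// Illustrative sketch: the same notion of "a concurrent execution" realized
// first as a thread within this process, then as a separate OS process.
public class Executions {
    public static void main(String[] args) throws IOException, InterruptedException {
        // 1. A thread: lightweight, shares memory with the rest of the program.
        Thread worker = new Thread(() -> System.out.println("work done in a thread"));
        worker.start();
        worker.join();

        // 2. An operating-system process: isolated address space; here it just
        //    sleeps for one second ("sleep" is a placeholder command).
        Process child = new ProcessBuilder("sleep", "1").inheritIO().start();
        child.waitFor();
    }
}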
Interaction and communication
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
;Shared memory communication: Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually needs the use of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe.
;Message passing communication: Concurrent components communicate by exchanging messages (exemplified by MPI, Go, Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming. A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model, and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence.
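Java has no built-in channel construct, but as a small sketch of the message-passing style (an illustration, not from the article; the class names are invented), a blocking queue can stand in for a channel: a producer and a consumer exchange messages through the queue and share no mutable state directly.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of message-passing style in Java: two threads exchange
// messages over a queue instead of mutating shared variables.
public class MessagePassing {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                channel.put("hello");   // send (blocks if the channel is full)
                channel.put("done");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = channel.take()).equals("done")) { // receive (blocks if empty)
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}

Using a SynchronousQueue instead of a bounded queue would give the rendezvous behaviour described above, in which the sender blocks until the receiver takes the message.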
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
History
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).
The academic study of concurrent algorithms started in the 1960s, with Dijkstra (1965) credited with being the first paper in this field, identifying and solving mutual exclusion.
Prevalence
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.
At the programming language level:
* Channel
* Coroutine
* Futures and promises
At the operating system level:
* Computer multitasking, including both cooperative multitasking and preemptive multitasking
** Time-sharing, which replaced sequential batch processing of jobs with concurrent use of a system
* Process
* Thread
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
Languages supporting concurrent programming
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as ''concurrency-oriented languages'' or ''concurrency-oriented programming languages'' (COPL).
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.
Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities:
* Ada—general purpose, with native support for message passing and monitor based concurrency
* Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs
* Alice—extension to Standard ML, adds support for concurrency via futures
* Ateji PX—extension to Java with parallel primitives inspired from π-calculus
* Axum—domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax
* BMDFM—Binary Modular DataFlow Machine
* C++—thread and coroutine support libraries
* Cω (C omega)—for research, extends C#, uses asynchronous communication
* C#—supports concurrent computing using the lock and yield constructs, and, since version 5.0, the async and await keywords
* Clojure—modern, functional dialect of Lisp on the Java platform
* Concurrent Clean—functional programming, similar to Haskell
* Concurrent Collections (CnC)—Achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
* Concurrent Haskell—lazy, pure functional language operating concurrent processes on shared memory
* Concurrent ML—concurrent extension of Standard ML
* Concurrent Pascal—by Per Brinch Hansen
* Curry
* D—multi-paradigm system programming language with explicit support for concurrent programming (actor model)
* E—uses promises to preclude deadlocks
* ECMAScript—uses promises for asynchronous operations
* Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract
* Elixir—dynamic and functional meta-programming aware language running on the Erlang VM
* Erlang—uses synchronous or asynchronous message passing with no shared memory
* FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler
* Fortran—coarrays and ''do concurrent'' are part of the Fortran 2008 standard
* Go—for system programming, with a concurrent programming model based on CSP
* Haskell—concurrent, and parallel functional programming language
* Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channels patterns and message passing
* Io—actor-based concurrency
* Janus—features distinct ''askers'' and ''tellers'' to logical variables, bag channels; is purely declarative
* Java—thread class or Runnable interface
* Julia—"concurrent programming primitives: Tasks, async-wait, Channels."
* JavaScript—via web workers, in a browser environment, promises, and callbacks.
* JoCaml—concurrent and distributed channel based, extension of OCaml, implements the join-calculus of processes
* Join Java—concurrent, based on the Java language
* Joule—dataflow-based, communicates by message passing
* Joyce—concurrent, teaching, built on Concurrent Pascal with features from CSP by Per Brinch Hansen
* LabVIEW—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language
* Limbo—relative of Alef, for system programming in Inferno (operating system)
* Locomotive BASIC—Amstrad variant of BASIC contains EVERY and AFTER commands for concurrent subroutines
* MultiLisp—Scheme variant extended to support parallelism
* Modula-2—for system programming, by N. Wirth as a successor to Pascal with native support for coroutines
* Modula-3—modern member of Algol family with extensive support for threads, mutexes, condition variables
* Newsqueak—for research, with channels as first-class values; predecessor of Alef
* occam—influenced heavily by communicating sequential processes (CSP)
** occam-π—a modern variant of occam, which incorporates ideas from Milner's π-calculus
* ooRexx—object-based, message exchange for communication and synchronization
* Orc—heavily concurrent, nondeterministic, based on Kleene algebra
* Oz-Mozart—multiparadigm, supports shared-state and message-passing concurrency, and futures
* ParaSail—object-oriented, parallel, free of pointers, race conditions
* PHP—multithreading support with parallel extension implementing message passing inspired from Go
* Pict—essentially an executable implementation of Milner's π-calculus
* Python—uses thread-based parallelism and process-based parallelism
* Raku—includes classes for threads, promises and channels by default
* Reia—uses asynchronous message passing between shared-nothing objects
* Red/System—for system programming, based on Rebol
* Rust—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory.
* Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way
* SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions
* SR—for research
* SuperPascal—concurrent, for teaching, built on Concurrent Pascal and Joyce by Per Brinch Hansen
* Swift—built-in support for writing asynchronous and parallel code in a structured way
* Unicon—for research
* TNSDL—for developing telecommunication exchanges, uses asynchronous message passing
* VHSIC Hardware Description Language (VHDL)—IEEE STD-1076
* XC—concurrency-extended subset of C language developed by XMOS, based on communicating sequential processes, built-in constructs for programmable I/O
Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.
See also
* Asynchronous I/O
* Chu space
* Flow-based programming
* Java ConcurrentMap
* Ptolemy Project
* Structured concurrency
* Transaction processing
External links
* Concurrent Systems Virtual Library