Massively parallel is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of coordinated computations in parallel.
GPUs are massively parallel architectures with tens of thousands of threads.
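As a minimal sketch of this style of execution, the following Python code uses Numba's CUDA support to launch roughly one GPU thread per array element; the kernel name, sizes, and launch configuration are illustrative, and it assumes a CUDA-capable GPU with Numba and NumPy installed.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(x, out):
        # Each GPU thread handles one array element; with a large input,
        # tens of thousands of threads are resident on the device at once.
        i = cuda.grid(1)              # this thread's global index
        if i < x.size:                # guard: the grid may overshoot x.size
            out[i] = 2.0 * x[i]

    n = 1_000_000
    x = np.arange(n, dtype=np.float32)
    out = np.zeros_like(x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    scale[blocks, threads_per_block](x, out)   # ~1,000,000 threads launched

The same element-wise loop on a CPU would run its iterations sequentially; here the per-element work is spread across the GPU's thread array.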
One approach is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. ["Grid Computing: Experiment Management, Tool Integration, and Scientific Workflows" by Radu Prodan and Thomas Fahringer, 2007, pages 1–4] An example is BOINC, a volunteer-based, opportunistic grid system, whereby the grid provides power only on a best-effort basis. ["Parallel and Distributed Computational Intelligence" by Francisco Fernández de Vega, 2010, pages 65–68]
Another approach is grouping many processors in close proximity to each other, as in a computer cluster. In such a centralized system, the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects. [Knight, Will: "IBM creates world's most powerful computer", NewScientist.com news service, June 2007]
The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing many processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
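This channel-passing style can be sketched in ordinary software terms. The following toy Python pipeline is only an analogy (operating-system processes standing in for the array's processors, queues for its interconnect channels), not any MPPA vendor's actual programming model:

    from multiprocessing import Process, Queue

    def producer(out_ch):
        # First "processor": push work items onto its outgoing channel.
        for i in range(5):
            out_ch.put(i * i)
        out_ch.put(None)                  # sentinel: end of stream

    def transformer(in_ch, out_ch):
        # Second "processor": read from one channel, write to the next.
        while (item := in_ch.get()) is not None:
            out_ch.put(item + 1)
        out_ch.put(None)

    if __name__ == "__main__":
        a_to_b, b_to_main = Queue(), Queue()
        Process(target=producer, args=(a_to_b,)).start()
        Process(target=transformer, args=(a_to_b, b_to_main)).start()
        while (result := b_to_main.get()) is not None:
            print(result)                 # prints 1, 2, 5, 10, 17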
The Goodyear MPP was an early implementation of a massively parallel computer architecture. MPP architectures are the second most common supercomputer implementation after clusters, as of November 2013.
Data warehouse appliances such as Teradata, Netezza, or Microsoft's PDW commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.
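In such systems, each node scans only its own partition of the data and the partial results are then combined. That shared-nothing pattern can be sketched (as a toy, single-machine analogy rather than any vendor's engine) with Python's multiprocessing:

    from multiprocessing import Pool

    def partial_sum(partition):
        # Each worker aggregates only its own partition (shared-nothing).
        return sum(partition)

    if __name__ == "__main__":
        data = range(1_000_000)           # stand-in for a large table column
        n_workers = 4
        # Partition the data across workers, one slice per "node".
        partitions = [data[i::n_workers] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partials = pool.map(partial_sum, partitions)
        print(sum(partials))              # final combine step: 499999500000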
See also
* Multiprocessing
* Embarrassingly parallel
* Parallel computing
* Process-oriented programming
* Shared-nothing architecture (SN)
* Symmetric multiprocessing (SMP)
* Connection Machine
* Cellular automaton
* CUDA framework
* Manycore processor
* Vector processor