Uniform memory access

Uniform memory access (UMA) is a shared memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with non-uniform memory access (NUMA) architectures. In the NUMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general-purpose and time-sharing applications by multiple users. It can be used to speed up the execution of a single large program in time-critical applications.
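As an illustration of the shared-memory programming model that a UMA machine supports, here is a minimal sketch (plain C++ threads; the buffer size, thread count, and names are illustrative and not from the article) in which several threads operate on slices of one physically shared array. On a UMA system, each thread's accesses to that array take roughly the same time regardless of which processor it is scheduled on.

```cpp
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t elems = 1 << 20;            // one buffer in shared main memory
    const int num_threads = 4;                    // illustrative thread count
    const std::size_t chunk = elems / num_threads;

    std::vector<double> data(elems, 1.0);         // a single physical copy, visible to every thread

    std::vector<std::thread> workers;
    for (int t = 0; t < num_threads; ++t) {
        workers.emplace_back([&data, chunk, t] {
            // Each thread scales a disjoint slice of the same shared array.
            // On a UMA machine, the latency of these accesses does not depend
            // on which processor this thread happens to run on.
            for (std::size_t i = t * chunk; i < (t + 1) * chunk; ++i) {
                data[i] *= 2.0;
            }
        });
    }
    for (auto& w : workers) {
        w.join();
    }

    std::printf("sum = %f\n", std::accumulate(data.begin(), data.end(), 0.0));
    return 0;
}
```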


Types of architectures

There are three types of UMA architectures:
* UMA using bus-based symmetric multiprocessing (SMP) architectures
* UMA using crossbar switches
* UMA using multistage interconnection networks


hUMA

In April 2013, the term hUMA (''heterogeneous uniform memory access'') began to appear in AMD promotional material to refer to the CPU and GPU sharing the same system memory via cache-coherent views. Advantages include an easier programming model and less copying of data between separate memory pools.
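The "easier programming model, less copying" point can be sketched in code. The fragment below uses CUDA's managed (unified) memory purely as an analogous illustration, since the article does not describe AMD's actual hUMA programming interface: one allocation is visible to both the CPU and the GPU, so no explicit copies between separate host and device memory pools are needed. The kernel name and sizes are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: scales a buffer in place on the GPU.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;

    // One allocation, visible to both CPU and GPU; contrast with the
    // traditional model of separate host/device buffers plus explicit
    // copies in each direction.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;       // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);   // GPU works on the same memory
    cudaDeviceSynchronize();                          // wait before the CPU reads again

    std::printf("data[0] = %f\n", data[0]);           // CPU reads the result, no copy back
    cudaFree(data);
    return 0;
}
```

When the CPU and GPU share one cache-coherent view of system memory, as described for hUMA, the same single-allocation style applies and the data never has to be duplicated between pools.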


See also

* Non-uniform memory access
* Cache-only memory architecture
* Heterogeneous System Architecture


References

* Peter Bright, "AMD's 'heterogeneous Uniform Memory Access' coming this year in Kaveri", ''Ars Technica'', April 30, 2013.