SUPS

In computational neuroscience, SUPS (Synaptic Updates Per Second), formerly CUPS (Connections Updates Per Second), is a measure of neuronal network performance, useful in the fields of neuroscience, cognitive science, artificial intelligence, and computer science.


Computing

For a processor or computer designed to simulate a neural network, SUPS is measured as the product of the number of simulated neurons N and the average connectivity c (synapses per neuron), per second: SUPS = c \times N. Depending on the type of simulation, it is usually equal to the total number of synapses simulated.

In an "asynchronous" (event-driven) dynamic simulation, if each neuron spikes at \upsilon Hz, the average rate of synaptic updates provoked by the spiking activity of the network is \upsilon c N. In a synchronous (clock-driven) simulation with time step \Delta t, the number of synaptic updates per second is \frac{cN}{\Delta t}. As \Delta t has to be chosen much smaller than the average interval between two successive afferent spikes, which implies \Delta t < \frac{1}{\upsilon N}, this gives an average number of synaptic updates of at least \upsilon c N^2. Therefore, spike-driven synaptic dynamics leads to a linear scaling of computational complexity, O(N), compared with O(N^2) in the "synchronous" case.
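These relations can be checked numerically. The following sketch is not part of the original article; all function names and example numbers are illustrative. It computes SUPS = c \times N and compares the event-driven rate \upsilon c N with the clock-driven rate cN / \Delta t at the boundary value \Delta t = 1/(\upsilon N):

```python
# Minimal sketch of the SUPS definition and the event-driven vs. clock-driven
# update-rate estimates described above; all numbers are illustrative.

def sups(n_neurons: int, connectivity: int) -> float:
    """SUPS = c * N: synaptic updates per second of simulated time."""
    return connectivity * n_neurons

def event_driven_rate(rate_hz: float, n_neurons: int, connectivity: int) -> float:
    """Asynchronous case: each spike triggers c updates, so N neurons firing
    at rate_hz produce rate_hz * c * N updates per simulated second."""
    return rate_hz * connectivity * n_neurons

def clock_driven_rate(dt: float, n_neurons: int, connectivity: int) -> float:
    """Synchronous case: all c * N synapses are visited every step dt,
    giving c * N / dt updates per simulated second."""
    return connectivity * n_neurons / dt

if __name__ == "__main__":
    N, c, rate = 1_000, 100, 10.0      # 1,000 neurons, 100 synapses each, 10 Hz
    dt = 1.0 / (rate * N)              # boundary of the constraint dt < 1/(rate * N)
    print(f"SUPS              : {sups(N, c):.3e}")                     # 1.000e+05
    print(f"event-driven rate : {event_driven_rate(rate, N, c):.3e}")  # rate*c*N   = 1.000e+06
    print(f"clock-driven rate : {clock_driven_rate(dt, N, c):.3e}")    # rate*c*N^2 = 1.000e+09
```

With these numbers the clock-driven rate is N times the event-driven one, which is the O(N) versus O(N^2) gap noted above.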


Records

Developed in the 1980s, Adaptive Solutions' CNAPS-1064 Digital Parallel Processor chip is a full neural network (NNW). It was designed as a coprocessor to a host and has 64 sub-processors arranged in a 1D array and operating in SIMD mode. Each sub-processor can emulate one or more neurons, and multiple chips can be grouped together. At 25 MHz it is capable of 1.28 GMAC (Chip Weems and Steve Dropsho, ''Real-Time Computing: Implications for General Microprocessors'', ftp://ftp.cs.umass.edu/pub/osl/papers/uPSurvey-TR-95-42.ps.Z).

After the presentation of the RN-100 (12 MHz) single-neuron chip in Seattle in 1991, Ricoh developed the multi-neuron chip RN-200. It had 16 neurons and 16 synapses per neuron, with on-chip learning ability using a proprietary backpropagation algorithm. It came in a 257-pin PGA package and drew at most 3.0 W. It was capable of 3 GCPS (1 GCPS at 32 MHz).

In 1991–97, Siemens developed the MA-16 chip and the SYNAPSE-1 and SYNAPSE-3 Neurocomputers. The MA-16 was a fast matrix–matrix multiplier that could be combined to form systolic arrays. It could process 4 patterns of 16 elements each (16-bit), with 16 neuron values (16-bit), at a rate of 800 MMAC or 400 MCPS at 50 MHz. The SYNAPSE3-PC PCI card contained two MA-16 chips with a peak performance of 2560 MOPS (1.28 GMAC), rising to 7160 MOPS (3.58 GMAC) when using three boards (Clark S. Lindsey, Bruce Denby and Thomas Lindblad, ''Neural Network Hardware'', 1998).
In 2013, the K computer was used to simulate a neural network of 1.73 billion neurons with a total of 10.4 trillion synapses (1% of the human brain). The simulation ran for 40 minutes to simulate 1 s of brain activity at a normal activity level (4.4 on average) and required 1 petabyte of storage (Tim Hornyak, ''Fujitsu supercomputer simulates 1 second of brain activity'', CNET, August 5, 2013).
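As a rough cross-check against the definition above, the reported K computer figures can be plugged into the SUPS formula. The sketch below is not from the cited article; in particular it reads the "4.4" activity level as an assumed mean firing rate in hertz, which is an interpretation, so all derived numbers are estimates only.

```python
# Back-of-the-envelope estimate from the K computer figures quoted above.
# ASSUMPTION: the "4.4" activity level is treated as a mean firing rate in Hz.

neurons   = 1.73e9        # simulated neurons
synapses  = 10.4e12       # total simulated synapses
wall_time = 40 * 60.0     # 40 minutes of wall-clock time, in seconds
bio_time  = 1.0           # simulated biological time, in seconds
rate_hz   = 4.4           # assumed mean firing rate (interpretation of "4.4")

connectivity   = synapses / neurons            # average c, roughly 6,000 synapses/neuron
slowdown       = wall_time / bio_time          # about 2,400x slower than real time
spike_updates  = rate_hz * synapses * bio_time # spike-driven updates in 1 s of brain time
effective_rate = spike_updates / wall_time     # updates per wall-clock second

print(f"average connectivity c : {connectivity:,.0f}")
print(f"slowdown factor        : {slowdown:,.0f}x")
print(f"effective update rate  : {effective_rate:.2e} per wall-clock second")
```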


See also

* FLOPS
* SPECint
* SPECfp
* Multiply–accumulate operation
* Orders of magnitude (computing)
* SyNAPSE

