LINPACK
LINPACK is a software library for performing numerical linear algebra on digital computers. It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart, and was intended for use on supercomputers in the 1970s and early 1980s. It has been largely superseded by LAPACK, which runs more efficiently on modern architectures. LINPACK makes use of the BLAS (Basic Linear Algebra Subprograms) libraries for performing basic vector and matrix operations. The LINPACK benchmarks appeared initially as part of the LINPACK user's manual. The parallel LINPACK benchmark implementation called HPL (High Performance Linpack) is used to benchmark and rank supercomputers for the TOP500 list.
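
The library's classic calling sequence is a factor/solve pair: DGEFA computes an LU factorization with partial pivoting, and DGESL then uses that factorization to solve Ax = b. Below is a minimal C sketch of that sequence against the netlib Fortran sources; the underscored symbol names and the link line assume a gfortran-style compiler and are conventions, not part of LINPACK itself.

```c
/* Hypothetical build: cc solve.c dgefa.o dgesl.o blas.o -lgfortran */
#include <stdio.h>

/* Fortran passes every argument by reference; gfortran appends '_'. */
extern void dgefa_(double *a, int *lda, int *n, int *ipvt, int *info);
extern void dgesl_(double *a, int *lda, int *n, int *ipvt, double *b, int *job);

int main(void) {
    /* A stored column-major, as Fortran expects: A = [4 1; 2 3]. */
    double a[4] = {4.0, 2.0, 1.0, 3.0};
    double b[2] = {1.0, 2.0};              /* right-hand side */
    int n = 2, lda = 2, ipvt[2], info, job = 0;

    dgefa_(a, &lda, &n, ipvt, &info);      /* LU factorization of A */
    if (info != 0) {
        fprintf(stderr, "zero pivot at column %d\n", info);
        return 1;
    }
    dgesl_(a, &lda, &n, ipvt, b, &job);    /* job = 0: solve A*x = b */

    printf("x = (%g, %g)\n", b[0], b[1]);  /* expect (0.1, 0.6) */
    return 0;
}
```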


HPL (benchmark)
The LINPACK benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n × n system of linear equations Ax = b, a common task in engineering. The latest version of these benchmarks is used to build the TOP500 list, ranking the world's most powerful supercomputers. The aim is to approximate how fast a computer will perform when solving real problems. It is a simplification, since no single computational task can reflect the overall performance of a computer system. Nevertheless, the LINPACK benchmark can provide a useful corrective to the peak performance quoted by the manufacturer. The peak performance is the maximal theoretical performance a computer can achieve, calculated as the machine's frequency, in cycles per second, times the number of operations per cycle it can perform. The actual performance will always be lower than the ...
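
The peak-performance arithmetic described above is easy to make concrete: R_peak is the clock rate times the floating-point operations retired per cycle, summed over all processing units, and the measured LINPACK result R_max is then quoted as a fraction of it. A minimal sketch, using purely hypothetical figures chosen only to illustrate the calculation:

```c
/* Peak-versus-measured arithmetic; every number here is hypothetical. */
#include <stdio.h>

int main(void) {
    double freq_hz         = 2.0e9;  /* 2 GHz clock (hypothetical) */
    double flops_per_cycle = 16.0;   /* FLOPs retired per cycle per core */
    double cores           = 64.0;   /* number of cores */

    double r_peak = freq_hz * flops_per_cycle * cores; /* theoretical peak */
    double r_max  = 1.6e12;          /* hypothetical measured LINPACK result */

    printf("R_peak = %.4g FLOPS\n", r_peak);   /* 2.048e12 */
    printf("efficiency R_max/R_peak = %.0f%%\n",
           100.0 * r_max / r_peak);            /* about 78 percent */
    return 0;
}
```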


Supercomputer
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, supercomputers have existed which can perform over 10^18 FLOPS, so-called exascale supercomputers. For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers on the TOP500 list run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful, and technologically superior exascale supercomputers. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, ...


TOP500
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases its rankings on HPL, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers. The most recent edition of TOP500 was published in June 2025 (the 65th), and the next will be published in November 2025 (the 66th). As of June 2025, the United States' El Capitan ...


Basic Linear Algebra Subprograms
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating-point hardware such as vector registers or SIMD instructions. It originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This ...
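
As a concrete illustration of the C bindings mentioned above, here is a minimal sketch of a level-3 BLAS call, DGEMM (general double-precision matrix multiply), through the CBLAS interface. The header name and link flags depend on the implementation (for example, -lopenblas for OpenBLAS), so treat the build line as an assumption:

```c
/* Hypothetical build: cc gemm.c -lopenblas   (or -lcblas -lblas) */
#include <stdio.h>
#include <cblas.h>

int main(void) {
    /* Row-major 2x2 matrices; DGEMM computes C = alpha*A*B + beta*C. */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,          /* M, N, K       */
                1.0, A, 2,        /* alpha, A, lda */
                B, 2,             /* B, ldb        */
                0.0, C, 2);       /* beta,  C, ldc */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]); /* [19 22; 43 50] */
    return 0;
}
```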


LAPACK
LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines. LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to eff ...


Jack Dongarra
Jack Joseph Dongarra (born July 18, 1950) is an American computer scientist and mathematician. He is a University Distinguished Professor Emeritus of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He is a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, holds a Turing Fellowship in the School of Mathematics at the University of Manchester, and is an adjunct professor in the Computer Science Department at Rice University. He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. He received the Turing Award in 2021.

Education
Dongarra received a BSc degree in mathematics from Chicago State University in 1972 and an MSc degree in Computer Science from the Illinois In ...


Cray-2
The Cray-2 is a supercomputer with four vector processors made by Cray Research starting in 1985. At 1.9 GFLOPS peak performance, it was the fastest machine in the world when it was released, replacing the Cray X-MP in that spot. It was, in turn, replaced in that spot by the Cray Y-MP in 1988. The Cray-2 was the first of Seymour Cray's designs to successfully use multiple CPUs. This had been attempted in the CDC 8600 in the early 1970s, but the emitter-coupled logic (ECL) transistors of the era were too difficult to package into a working machine. The Cray-2 addressed this through the use of ECL integrated circuits, packaging them in a novel 3D wiring scheme that greatly increased circuit density. The dense packaging and resulting heat loads were a major problem for the Cray-2. This was solved in a unique fashion by forcing the electrically inert Fluorinert liquid through the circuitry under pressure and then cooling it outside the processor box. The unique "waterfall" cooler s ...


Cray X-MP
The Cray X-MP was a supercomputer designed, built, and sold by Cray Research. It was announced in 1982 as the "cleaned up" successor to the 1975 Cray-1, and was the world's fastest computer from 1983 to 1985 with a quad-processor system performance of 800 MFLOPS. The principal designer was Steve Chen.

Description
The X-MP's main improvement over the Cray-1 was that it was a shared-memory parallel vector processor, the first such computer from Cray Research. It housed up to four CPUs in a mainframe that was nearly identical in outside appearance to the Cray-1. The X-MP CPU had a faster 9.5-nanosecond clock cycle (105 MHz), compared to 12.5 ns for the Cray-1A. It was built from bipolar gate-array integrated circuits containing 16 emitter-coupled logic (ECL) gates each. The CPU was very similar to the Cray-1 CPU in architecture, but had better memory bandwidth (with two read por ...


Cray Y-MP
The Cray Y-MP was a supercomputer sold by Cray Research from 1988, and the successor to the company's X-MP. The Y-MP retained software compatibility with the X-MP, but extended the address registers from 24 to 32 bits. High-density VLSI emitter-coupled logic (ECL) technology was used and a new liquid-cooling system was devised. The Y-MP ran the Cray UNICOS operating system. The Y-MP could be equipped with two, four, or eight vector processors, with two functional units each and a clock cycle time of 6 ns (167 MHz). Peak performance was thus 333 megaflops per processor. Main memory comprised 256, 512, or 1024 MB of static RAM (SRAM). (Memory was measured and allocated in 64-bit words, and offered in 32, 64, or 128 MWords.) The original Y-MP (otherwise known as the Y-MP Model D) was housed in a chassis similar to the horseshoe-shaped X-MP, but with an extra rectangular cabinet added in the middle (containing the CPU boards), thus forming a "Y" shape in plan vie ...


Fujitsu VP2000
The VP2000 was the second series of vector supercomputers from Fujitsu. Announced in December 1988, they replaced Fujitsu's earlier FACOM VP Model E Series. The VP2000 was succeeded in 1995 by the VPP300, a massively parallel supercomputer with up to 256 vector processors. The VP2000 was similar in many ways to Fujitsu's earlier designs, and in turn to the Cray-1, using a register-based vector processor for performance. For additional performance the vector units supported a special multiply-and-add instruction that could retire two results per clock cycle. This instruction "chain" is particularly common in many supercomputer applications. Another difference is that the main scalar units of the processor ran at half the speed of the vector unit. According to Amdahl's law, computers tend to run at the speed of their slowest unit, so in this case, unless the program spent most of its time in the vector units, the slower scalar performance would make it 1/2 the performance of a Cray-1 at ...
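
That trade-off follows from the formula behind Amdahl's law: if a fraction f of the work runs on the vector unit at relative speed s_v and the remaining 1 - f runs on the scalar unit at relative speed s_s, the overall rate is 1 / ((1 - f)/s_s + f/s_v), so a half-speed scalar unit dominates unless f is very close to 1. A minimal sketch with illustrative numbers (not VP2000 measurements):

```c
/* Amdahl-style mixed scalar/vector throughput; all speeds are illustrative,
 * expressed relative to a reference machine whose scalar unit has speed 1. */
#include <stdio.h>

static double effective_rate(double f, double s_scalar, double s_vector) {
    /* Time per unit work: (1-f)/s_scalar + f/s_vector; rate is its inverse. */
    return 1.0 / ((1.0 - f) / s_scalar + f / s_vector);
}

int main(void) {
    double s_scalar = 0.5;   /* scalar unit at half the reference speed */
    double s_vector = 4.0;   /* vector unit, illustrative relative speed */
    double fractions[] = {0.5, 0.7, 0.9, 0.99};

    /* Unless nearly all time is vectorized, the slow scalar unit dominates. */
    for (int i = 0; i < 4; i++)
        printf("vector fraction %.2f -> relative speed %.2f\n",
               fractions[i], effective_rate(fractions[i], s_scalar, s_vector));
    return 0;
}
```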


Cleve Moler
Cleve Barry Moler (born August 17, 1939) is an American mathematician and computer programmer specializing in numerical analysis. In the mid to late 1970s, he was one of the authors of LINPACK and EISPACK, Fortran libraries for numerical computing. He created MATLAB, a numerical computing package, to give his students at the University of New Mexico easy access to these libraries without writing Fortran. In 1984, he co-founded MathWorks with Jack Little to commercialize this program. Biography He received his bachelor's degree from California Institute of Technology in 1961, and a Ph.D. in 1965 from Stanford University, both in mathematics. He worked for Charles Lawson at the Jet Propulsion Laboratory in 1961 and 1962. He was a professor of mathematics and computer science for almost 20 years at the University of Michigan, Stanford University, and the University of New Mexico. Before joining MathWorks full-time in 1989, he also worked for Intel Hypercube, where he coi ...