




Basic Linear Algebra Subprograms
Basic Linear Algebra Subprograms (BLAS) is a specification (technical standard) that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the ''de facto'' standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations take advantage of special floating-point hardware such as vector registers or SIMD instructions. BLAS originated as a Fortran library in 1979, and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This ...
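As an illustrative sketch (not part of the original text), the following C++ program calls two Level 1 BLAS routines through the CBLAS interface. The header name and link flags depend on the chosen implementation (for example <cblas.h> with -lopenblas for OpenBLAS, or <mkl.h> for Intel oneMKL).

// Hedged example: Level 1 BLAS via CBLAS (daxpy and ddot).
#include <cblas.h>
#include <cstdio>

int main() {
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};

    // DAXPY: y := 2*x + y (scalar multiplication combined with vector addition)
    cblas_daxpy(3, 2.0, x, 1, y, 1);

    // DDOT: dot product of x and y
    double dot = cblas_ddot(3, x, 1, y, 1);

    std::printf("y = [%g %g %g], x.y = %g\n", y[0], y[1], y[2], dot);
    return 0;
}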



Cross-platform
Within computing, cross-platform software (also called multi-platform software, platform-agnostic software, or platform-independent software) is computer software that is designed to work on several computing platforms. Some cross-platform software requires a separate build for each platform, but some can be run directly on any platform without special preparation, being written in an interpreted language or compiled to portable bytecode for which the interpreters or run-time packages are common or standard components of all supported platforms. For example, a cross-platform application may run on Linux, macOS and Microsoft Windows. Cross-platform software may run on many platforms, or as few as two. Some frameworks for cross-platform development are Codename One, ArkUI-X, Kivy, Qt, GTK, Flutter, NativeScript, Xamarin, Apache Cordova, Ionic (mobile app framework ...


ROCm
ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains, including general-purpose computing on graphics processing units (GPGPU), high-performance computing (HPC), and heterogeneous computing. It offers several programming models: HIP (GPU-kernel-based programming), OpenMP (directive-based programming), and OpenCL. ROCm is free, libre and open-source software (except the GPU firmware blobs), and it is distributed under various licenses. ROCm initially stood for Radeon Open Compute platform; however, because Open Compute is a registered trademark, ROCm is no longer an acronym and is simply AMD's open-source stack designed for GPU compute. Background: The first GPGPU software stack from ATI/AMD was Close to Metal, which became Stream. ROCm was launched around 2016 with the Boltzmann Initiative. The ROCm stack builds upon previous AMD GPU stacks; some tools trace back to GPUOpen and others ...
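As a hedged sketch of the HIP kernel-based model (not taken from the article), the following vector-addition program is written in HIP C++ and assumes it is compiled with hipcc on a system with a supported GPU:

// Hypothetical example: element-wise vector addition with a HIP kernel.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, bytes);
    hipMalloc((void**)&db, bytes);
    hipMalloc((void**)&dc, bytes);
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vadd<<<blocks, threads>>>(da, db, dc, n);        // launch the GPU kernel

    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    std::printf("hc[0] = %g\n", hc[0]);              // 1.0 + 2.0 = 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}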


Mathematica
Wolfram (previously known as Mathematica and Wolfram Mathematica) is a software system with built-in libraries for several areas of technical computing that allows machine learning, statistics, symbolic computation, data manipulation, network analysis, time series analysis, NLP, optimization, plotting functions and various types of data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other programming languages. It was conceived by Stephen Wolfram and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in ''Mathematica''. Mathematica 1.0 was released on June 23, 1988, in Champaign, Illinois and Santa Clara, California. Mathematica's Wolfram Language is fundamentally based on Lisp; for example, the Mathematica command Most is essentially equivalent to the Lisp command butlast. There is a substantial literature on the development of computer algebra systems (CAS). ...



GNU Octave
GNU Octave is a programming language for scientific computing and numerical computation. Octave is used for solving linear and nonlinear problems numerically and for performing other numerical experiments, using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language. As part of the GNU Project, it is free software under the terms of the GNU General Public License. History: The project was conceived around 1988. At first it was intended to be a companion to a chemical reactor design course. Full development was started by John W. Eaton in 1992. The first alpha release dates back to 4 January 1993, and version 1.0 was released on 17 February 1994. Version 9.2.0 was released on 7 June 2024. The program is named after Octave Levenspiel, a former professor of the principal author. Levenspiel was known for his ability to perform quick back-of-the-envelope calculations. Development history: In addition ...



Armadillo (C++ Library)
Armadillo is a linear algebra software library for the C++ language, providing high-level syntax and functionality deliberately similar to MATLAB, and aimed at areas such as machine learning, pattern recognition, signal processing and statistics. It provides classes for dense and sparse matrices and vectors, together with a range of associated functions and decompositions. Computationally intensive operations are delegated to underlying implementations of BLAS and LAPACK (or high-performance replacements such as OpenBLAS and Intel MKL), while template metaprogramming (expression templates) is used to combine several operations and reduce temporaries. Armadillo is open-source software distributed under the Apache License 2.0, and it serves as the linear algebra foundation for other libraries such as the mlpack machine learning library and the RcppArmadillo interface for R.
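A minimal sketch (not from the original text) of Armadillo's MATLAB-like syntax; it assumes the library is installed and linked (for example with -larmadillo), with BLAS/LAPACK or a replacement such as OpenBLAS underneath:

// Hedged example: dense linear algebra with Armadillo.
#include <armadillo>
#include <iostream>

int main() {
    arma::mat A = {{4.0, 1.0},
                   {1.0, 3.0}};
    arma::vec b = {1.0, 2.0};

    arma::vec x = arma::solve(A, b);     // solve A*x = b (LAPACK underneath)
    arma::mat C = A * A.t() + 2.0 * A;   // expression templates fuse the operations

    std::cout << "x:\n" << x << "C:\n" << C;
    return 0;
}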


LINPACK
LINPACK is a software library for performing numerical linear algebra on digital computers. It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart, and was intended for use on the supercomputers of the 1970s and early 1980s. It has been largely superseded by LAPACK, which runs more efficiently on modern architectures. LINPACK makes use of the BLAS (Basic Linear Algebra Subprograms) libraries for performing basic vector and matrix operations. The LINPACK benchmarks appeared initially as part of the LINPACK user's manual. The parallel LINPACK benchmark implementation called HPL (High Performance Linpack) is used to benchmark and rank supercomputers for the TOP500 list. The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world; it was started in 1993 and publishes an updated list of supercomputers twice a year. ...




LAPACK
LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines. LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to eff ...


General Matrix Multiply
General Matrix Multiply (GEMM) is one of the Level 3 routines defined by the Basic Linear Algebra Subprograms (BLAS) specification. It performs the operation C := alpha*A*B + beta*C, where A and B may optionally be transposed (or conjugate-transposed) and alpha and beta are scalars. Variants exist for single and double precision and for real and complex data (SGEMM, DGEMM, CGEMM, ZGEMM). Because matrix-matrix multiplication dominates the run time of many dense linear algebra algorithms, GEMM is typically the most heavily optimized routine in a BLAS implementation, using techniques such as cache blocking (tiling), vectorization, and multithreading, and it serves as the computational core of higher-level libraries such as LAPACK.
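A minimal sketch (not from the original text) of the operation described above, using the double-precision variant cblas_dgemm from the CBLAS interface:

// Hedged example: C := alpha*A*B + beta*C with DGEMM (Level 3 BLAS).
#include <cblas.h>
#include <cstdio>

int main() {
    const int m = 2, k = 3, n = 2;
    // Row-major storage: A is m x k, B is k x n, C is m x n
    double A[6] = {1, 2, 3,
                   4, 5, 6};
    double B[6] = {7,  8,
                   9, 10,
                  11, 12};
    double C[4] = {0, 0,
                   0, 0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, A, k,     // alpha = 1.0, lda = k for row-major A
                     B, n,     // ldb = n for row-major B
                0.0, C, n);    // beta = 0.0, ldc = n

    std::printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}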


LINPACK Benchmarks
The LINPACK benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense ''n'' × ''n'' system of linear equations ''Ax'' = ''b'', a common task in engineering. The latest version of these benchmarks is used to build the TOP500 list, ranking the world's most powerful supercomputers. The aim is to approximate how fast a computer will perform when solving real problems. This is a simplification, since no single computational task can reflect the overall performance of a computer system. Nevertheless, the LINPACK benchmark performance can provide a useful corrective to the peak performance figure quoted by the manufacturer. The peak performance is the maximal theoretical performance a computer can achieve, calculated as the machine's frequency, in cycles per second, times the number of operations per cycle it can perform. The actual performance will always be lower than the ...
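As a worked illustration with purely hypothetical figures (they are not taken from the article), the relationship between peak and measured performance can be written as:

R_{\mathrm{peak}} = n_{\mathrm{cores}} \times f \times \frac{\mathrm{FLOPs}}{\mathrm{cycle}}
                  = 16 \times 3\,\mathrm{GHz} \times 16 = 768\ \mathrm{GFLOPS},
\qquad
\frac{R_{\max}}{R_{\mathrm{peak}}} = \frac{600\ \mathrm{GFLOPS}}{768\ \mathrm{GFLOPS}} \approx 0.78

so a hypothetical 16-core machine achieving an HPL result of 600 GFLOPS would be running the benchmark at roughly 78% of its theoretical peak.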



Intel
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Intel designs, manufactures, and sells computer components such as central processing units (CPUs) and related products for business and consumer markets. It is one of the world's largest semiconductor chip manufacturers by revenue, and it ranked in the ''Fortune'' 500 list of the largest United States corporations by revenue for nearly a decade, from fiscal years 2007 to 2016, until it was removed from the ranking in 2018. In 2020, it was reinstated and ranked 45th, being the 7th-largest technology company in the ranking. It was one of the first companies listed on Nasdaq. Intel supplies ...



Advanced Micro Devices
Advanced Micro Devices, Inc. (AMD) is an American multinational corporation and technology company headquartered in Santa Clara, California, with significant operations in Austin, Texas. AMD is a fabless hardware company that designs and develops central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), systems-on-chip (SoCs), and high-performance computing solutions. AMD serves a wide range of business and consumer markets, including gaming, data centers, artificial intelligence (AI), and embedded systems. AMD's main products include microprocessors, motherboard chipsets, embedded processors, and graphics processors for servers, workstations, personal computers, and embedded syst ...


Intel Math Kernel Library
Intel oneAPI Math Kernel Library (Intel oneMKL), formerly known as Intel Math Kernel Library, is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math. The library supports x86 CPUs and Intel GPUs and is available for the Windows and Linux operating systems. ''Intel oneAPI Math Kernel Library'' is not to be confused with oneMKL Interfaces, an open-source wrapper library that allows DPC++ applications to call oneMKL routines that can be offloaded to multiple hardware architectures and vendors selected at runtime. History and licensing: Intel first released the library in November 1994 under the name Intel BLAS Library. In 1996 it was renamed the Intel Math Kernel Library, a name it kept until April 2020, when it became part of the oneAPI initiative to support multiple hardware architectures and took its current name, Intel oneMKL. ...
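As a hedged sketch (not from the original text) of the vector math domain mentioned above, the following C++ program calls oneMKL's vdExp routine; it assumes the oneMKL headers and link line are configured (for example via the compiler's -qmkl option):

// Hypothetical example: element-wise exponential with oneMKL vector math.
#include <mkl.h>
#include <cstdio>

int main() {
    const MKL_INT n = 4;
    double a[4] = {0.0, 1.0, 2.0, 3.0};
    double y[4];

    vdExp(n, a, y);   // y[i] = exp(a[i]), computed with vectorized code
    for (int i = 0; i < n; ++i)
        std::printf("exp(%g) = %g\n", a[i], y[i]);
    return 0;
}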