Cerebras Systems Inc. is an American artificial intelligence (AI) company with offices in Sunnyvale, San Diego, Toronto, and Bangalore, India. Cerebras builds computer systems for complex AI deep learning applications.
History
Cerebras was founded in 2015 by Andrew Feldman, Gary Lauterbach, Michael James, Sean Lie and Jean-Philippe Fricker.
These five founders worked together at SeaMicro, which was started in 2007 by Feldman and Lauterbach and was later sold to AMD in 2012 for $334 million.
In May 2016, Cerebras secured $27 million in series A funding led by Benchmark, Foundation Capital and Eclipse Ventures.
In December 2016, series B funding was led by Coatue Management, followed in January 2017 by series C funding led by VY Capital.
In November 2018, Cerebras closed its series D round with $88 million, making the company a unicorn. Investors in this round included Altimeter, VY Capital, Coatue, Foundation Capital, Benchmark, and Eclipse.
On August 19, 2019, Cerebras announced its first-generation Wafer-Scale Engine (WSE).
In November 2019, Cerebras closed its series E round with over $270 million for a valuation of $2.4 billion.
In 2020, the company announced an office in Japan and a partnership with Tokyo Electron Devices.
In April 2021, Cerebras announced the CS-2 based on the company's Wafer Scale Engine Two (WSE-2), which has 850,000 cores.
In August 2021, the company announced its brain-scale technology that can run a neural network with over 120 trillion connections.
In November 2021, Cerebras announced that it had raised an additional $250 million in Series F funding, valuing the company at over $4 billion. The Series F financing round was led by Alpha Wave Ventures and Abu Dhabi Growth Fund (ADG).
As of November 2021, the company had raised $720 million in financing.
In August 2022, Cerebras was honored by the Computer History Museum in Mountain View, California. The museum added the WSE-2 to its permanent collection and unveiled a new display featuring the chip, the largest computer chip made to date, calling it an "epochal" achievement in the history of integrated transistor fabrication.
Cerebras filed its prospectus for an initial public offering (IPO) in September 2024, with the intention of listing on the Nasdaq exchange under the ticker 'CBRS'. The prospectus indicated that most of its revenue at the time came from the Emirati AI holding company G42. A week after the filing, it was reported that the Committee on Foreign Investment in the United States (CFIUS) was reviewing G42's investment in the company, leading to a potential delay in the IPO. In a May 2025 interview, CEO Andrew Feldman said the company had obtained clearance from CFIUS to sell shares to G42 and that he hoped Cerebras would go public in 2025.
Cerebras was named to the Forbes AI 50 in April 2024 and the TIME 100 Most Influential Companies list in May 2024.
Technology
The Cerebras Wafer Scale Engine (WSE) is a single, wafer-scale integrated processor that includes compute, memory and interconnect fabric. The WSE-1 powers the Cerebras CS-1, Cerebras' first-generation AI computer.
It is a 19-inch rack-mounted appliance designed for AI training and inference workloads in a datacenter.
The CS-1 includes a single WSE primary processor with 400,000 processing cores, as well as twelve 100 Gigabit Ethernet connections to move data in and out.
The WSE-1 has 1.2 trillion transistors, 400,000 compute cores and 18 gigabytes of memory.
In April 2021, Cerebras announced the CS-2 AI system based on the second-generation Wafer Scale Engine (WSE-2), manufactured on TSMC's 7 nm process. It is 26 inches tall and fits in one-third of a standard data center rack.
The Cerebras WSE-2 has 850,000 cores and 2.6 trillion transistors.
The WSE-2 expanded on-chip SRAM to 40 gigabytes, memory bandwidth to 20 petabytes per second and total fabric bandwidth to 220 petabits per second.
In August 2021, the company announced a system which connects multiple integrated circuits (commonly called "chips") into a neural network with many connections. It enables a single system to support AI models with more than 120 trillion parameters.
In June 2022, Cerebras set a record for the largest AI models ever trained on one device.
Cerebras said that for the first time ever, a single CS-2 system with one Cerebras wafer can train models with up to 20 billion parameters.
The Cerebras CS-2 system can train multibillion-parameter natural language processing (NLP) models, including GPT-3XL (1.3 billion parameters), GPT-J 6B, GPT-3 13B and GPT-NeoX 20B, with reduced software complexity and infrastructure.
In September 2022, Cerebras announced that it could link its systems together to create what would be the largest-ever computing cluster for AI computing. A Wafer-Scale Cluster can connect up to 192 CS-2 AI systems, and a cluster of 16 CS-2 systems provides 13.6 million cores for natural language processing. The Cerebras Wafer-Scale Cluster relies exclusively on data parallelism for training: each system holds a full copy of the model and processes a different portion of the data, rather than splitting the model itself across machines.
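As an illustration of the approach (not Cerebras' actual software stack), the following minimal Python sketch shows the basic data-parallel pattern: each simulated worker holds identical weights, computes a gradient on its own shard of a batch, and the gradients are averaged before a single shared update.
<syntaxhighlight lang="python">
# Hypothetical sketch of data-parallel training (illustrative only): every worker
# keeps an identical copy of the model and sees a different shard of each batch.
import numpy as np

def gradient(weights, x_shard, y_shard):
    """Gradient of mean squared error for a linear model on one data shard."""
    error = x_shard @ weights - y_shard
    return 2 * x_shard.T @ error / len(x_shard)

def data_parallel_step(weights, x_batch, y_batch, num_workers, lr=0.01):
    """Split the batch, compute per-worker gradients, average them, update once."""
    x_shards = np.array_split(x_batch, num_workers)
    y_shards = np.array_split(y_batch, num_workers)
    grads = [gradient(weights, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    avg_grad = np.mean(grads, axis=0)   # the "all-reduce" step in a real distributed system
    return weights - lr * avg_grad      # every worker applies the same update

# Toy usage: recover y = 3x with four simulated workers.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 1))
y = 3.0 * x[:, 0]
w = np.zeros(1)
for _ in range(200):
    w = data_parallel_step(w, x, y, num_workers=4)
print(w)  # approaches [3.]
</syntaxhighlight>
In an actual cluster, the gradient averaging is performed by a collective communication step across machines rather than a Python loop, but the model replicas and the single shared update are the same.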
In November 2022, Cerebras unveiled the Andromeda supercomputer, which combines 16 WSE-2 chips into one cluster with 13.5 million AI-optimized cores, delivering up to 1 exaflop of AI compute, or at least one quintillion (10<sup>18</sup>) operations per second.
The entire system consumes 500 kW, far less than roughly comparable GPU-accelerated supercomputers.
In November 2022, Cerebras announced a partnership with Cirrascale Cloud Services to provide flat-rate "pay-per-model" compute time for its ''Cerebras AI Model Studio''. The service is said to cost half as much as comparable cloud services while training models up to eight times faster.
In July 2023, Cerebras and UAE-based G42 unveiled Condor Galaxy, planned as the world's largest network of nine interlinked supercomputers for AI model training. The first supercomputer, named Condor Galaxy 1 (CG-1), offers 4 exaFLOPs of FP16 performance and 54 million cores. In November 2023, the Condor Galaxy 2 (CG-2) was announced, also containing 4 exaFLOPs and 54 million cores.
In March 2024, the companies broke ground on the Condor Galaxy 3 (CG-3), which is designed to reach 8 exaFLOPs of performance with 58 million AI-optimized cores.
In March 2024, the company also introduced the WSE-3, a 5 nm-based chip with 4 trillion transistors and 900,000 AI-optimized cores, which forms the basis of the CS-3 computer. Cerebras additionally announced a collaboration with Dell Technologies, unveiled in June 2024, to provide AI compute infrastructure for generative AI.
In August 2024, Cerebras unveiled its AI inference service, claiming to be the fastest in the world and, in many cases, ten to twenty times faster than systems built using the dominant technology, Nvidia's H100 "Hopper" graphics processing unit, or GPU.
As of October 2024, Cerebras reported an even larger inference performance advantage when running the Llama 3.2 models, citing a 3.5x improvement between August and October for CS-3 systems running on premises or in clouds operated by Cerebras.
In March 2025, Cerebras announced six new datacenters across the United States and Europe, increasing its inference capacity twentyfold to over 40 million tokens per second.
In April 2025, Meta announced a partnership with Cerebras to power the new Llama API, offering developers inference speeds up to 18 times faster than traditional GPU-based solutions. In May, Cerebras said it had outperformed Nvidia's Blackwell on Llama 4 inference, delivering more than double the throughput at over 2,500 tokens per second per user on the 400-billion-parameter Llama 4 Maverick model.
The same month, Cerebras announced support for Qwen3-32B, an open-weight large language model aimed at high-speed reasoning, on its inference platform.
Deployments
Customers are reportedly using Cerebras technologies in the hyperscale, pharmaceutical, life sciences, and energy sectors, among others.
CS-1
In 2020, GlaxoSmithKline (GSK) began using the Cerebras CS-1 AI system in its London AI hub, running neural network models to accelerate genetic and genomic research and reduce the time taken in drug discovery. The GSK research team was able to increase the complexity of the encoder models they could generate, while reducing training time. Other pharmaceutical industry customers include AstraZeneca, which was able to reduce training time from two weeks on a cluster of GPUs to two days using the Cerebras CS-1 system. GSK and Cerebras co-published research on epigenomic language models in December 2021.
Argonne National Laboratory has been using the CS-1 since 2020 in COVID-19 research and cancer tumor research based on the world's largest cancer treatment database. A series of models running on the CS-1 to predict cancer drug response to tumors achieved speed-ups of many hundreds of times over their GPU baselines.
Cerebras and the National Energy Technology Laboratory (NETL) demonstrated record-breaking performance of the Cerebras CS-1 system on a scientific compute workload in November 2020. The CS-1 was 200 times faster than the Joule supercomputer on the key workload of computational fluid dynamics.
The Lawrence Livermore National Laboratory's Lassen supercomputer incorporated the CS-1 in both classified and non-classified areas for physics simulations. The Pittsburgh Supercomputing Center (PSC) has also incorporated the CS-1 in its Neocortex supercomputer for dual HPC and AI workloads.
EPCC, the supercomputing center of the University of Edinburgh, has also deployed a CS-1 system for AI-based research.
In August 2021, Cerebras announced a partnership with Peptilogics on the development of AI for peptide therapeutics.
CS-2
In March 2022, Cerebras announced that it had deployed its CS-2 system in the Houston facilities of TotalEnergies, its first publicly disclosed customer in the energy sector.
Cerebras also announced that it had deployed a CS-2 system at nference, a startup that uses natural language processing to analyze massive amounts of biomedical data. The CS-2 will be used to train transformer models designed to process information from large volumes of unstructured medical data to provide fresh insights to doctors and improve patient recovery and treatment.
In May 2022, Cerebras announced that the National Center for Supercomputing Applications (NCSA) had deployed the Cerebras CS-2 system in its HOLL-I supercomputer. It also announced that the Leibniz Supercomputing Centre (LRZ) in Germany planned to deploy a new supercomputer featuring the CS-2 system alongside the HPE Superdome Flex server. The system, expected to be delivered to LRZ in the summer of 2022, was to be the first CS-2 deployment in Europe.
In October 2022, it was announced that the U.S. National Nuclear Security Administration would sponsor a study to investigate using Cerebras' CS-2 in nuclear stockpile stewardship computing. The multi-year contract will be executed through Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory.
In November 2022, Cerebras and the National Energy Technology Laboratory (NETL) saw record-breaking performance on the scientific compute workload of forming and solving field equations. Cerebras demonstrated that its CS-2 system was as much as 470 times faster than NETL's Joule supercomputer in field equation modeling.
The winner of the 2022 Gordon Bell Special Prize for HPC-Based COVID-19 Research, which honors outstanding research achievement towards the understanding of the COVID-19 pandemic through the use of high-performance computing, used Cerebras' CS-2 system in research that adapted large language models to analyze COVID-19 variants. The paper was authored by a 34-person team from Argonne National Laboratory, California Institute of Technology, Harvard University, Northern Illinois University, Technical University of Munich, University of Chicago, University of Illinois Chicago, Nvidia, and Cerebras. ANL noted that, using the CS-2 Wafer-Scale Engine cluster, the team was able to achieve convergence when training on the full SARS-CoV-2 genomes in less than a day.
Cerebras partnered with Emirati technology group G42 to deploy its AI supercomputers to create chatbots and to analyze genomic and preventive care data. In July 2023, G42 agreed to pay around $100 million to purchase the first of potentially nine supercomputers from Cerebras, each capable of 4 exaflops of compute. In August 2023, Cerebras, the Mohamed bin Zayed University of Artificial Intelligence and G42 subsidiary Inception launched Jais, a large language model.
Mayo Clinic announced a collaboration with Cerebras at the 2024 J.P. Morgan Healthcare Conference, offering details on the first foundation model it will develop using Cerebras's generative AI computing capability. The solution will combine genomic data with de-identified data from patient records and medical evidence to explore the ability to predict a patient's response to treatments for managing disease, and will initially be applied to rheumatoid arthritis. The model could serve as a prototype for similar solutions to support the diagnosis and treatment of other diseases.
At the January 2025 J.P. Morgan Healthcare Conference, Cerebras and Mayo Clinic announced a new genomic foundation model that applies AI and high-performance computing to genomics, a field rapidly becoming central to personalized healthcare. The model is designed to improve diagnostics and personalize treatment selection, with an initial focus on rheumatoid arthritis.
In May 2024, Cerebras, in collaboration with researchers from Sandia National Laboratories, Lawrence Livermore National Laboratory, Los Alamos National Laboratory and the National Nuclear Security Administration, ran molecular dynamics simulations in which the team simulated 800,000 atoms interacting with each other, calculating the interactions in increments of one femtosecond at a time. Each step took just microseconds to compute on the Cerebras WSE-2. Although that is still nine orders of magnitude slower than the actual interactions, it was also 179 times as fast as the Frontier supercomputer. The achievement effectively reduced a year's worth of computation to just two days.
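These reported figures are mutually consistent: computing each simulated femtosecond (<math>10^{-15}</math> s) in roughly a microsecond (<math>10^{-6}</math> s) implies a slowdown of about <math>10^{-6}/10^{-15} = 10^{9}</math>, i.e. nine orders of magnitude relative to real time, while a 179-fold speedup shortens roughly a year of computation to about <math>365/179 \approx 2</math> days.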
CS-3
In March 2024, Cerebras introduced the CS-3 and the third-generation Wafer Scale Engine (WSE-3), the latest iteration of its technology. It has twice the performance of the CS-2 and hosts 900,000 cores. A CS-3 cluster is capable of training an AI model such as Llama 2 70B in a single day. The WSE-3 was recognized by TIME Magazine as a Best Invention of 2024.
In January 2025, Cerebras announced support for DeepSeek's R1 70B reasoning model, running on the latest Cerebras hardware in US-based datacenters.
In April 2025, Cerebras and the Canadian photonics company Ranovus announced a contract from DARPA to address compute bottlenecks.
Cerebras Inference
Cerebras claims its AI inference service is the fastest in the world and, in many cases, ten to twenty times faster than systems built using the dominant technology, Nvidia's H100 "Hopper" graphics processing unit (GPU).
In January 2025, Cerebras announced that it would support DeepSeek's R1 70B reasoning model at 1,600 tokens/second, which the company claims is 57x faster than any R1 provider using GPUs.
In February 2025, Cerebras and Mistral AI announced a partnership that helped the French AI startup achieve a speed record. Mistral released an app called Le Chat that it said can respond to user questions at 1,000 words per second, with Cerebras providing the computing power behind those results.
Also in February 2025, Cerebras announced a partnership with Perplexity AI to deliver near-instantaneous AI-powered search results. The collaboration centers on Perplexity's Sonar model, which runs on Cerebras hardware at 1,200 tokens per second, making it one of the fastest AI search systems available. Cerebras has sought to build on these partnerships to position itself as a provider of high-speed AI inference.
See also
* Wafer-scale integration
* Wafer-level packaging
* Semiconductor device fabrication
* Transistor count
* Deep learning processor
References
External links
* {{Official website|https://cerebras.net/}}
* Cerebras' presentation at Hot Chips 34 (2022)
Computer companies of the United States
Companies based in California
Companies based in Sunnyvale, California
Companies based in Silicon Valley
Computer hardware companies
Semiconductor companies of the United States
Fabless semiconductor companies
Computer companies established in 2016
2016 establishments in California