Theoretical Computer Science

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory. It is difficult to circumscribe the theoretical areas precisely; the ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides a description of the field's scope.


History

While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements could be proved or disproved.

Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory.

With the development of quantum mechanics in the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render some modern public key cryptography algorithms like RSA insecure.

Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below:


Topics


Algorithms

An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
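
As a concrete illustration (a minimal Python sketch, not taken from any particular source), the first function below is a deterministic algorithm, Euclid's algorithm for the greatest common divisor, which moves through a finite sequence of well-defined states until it terminates. The second is a simple randomized algorithm that estimates π from random samples, showing how random input can enter an algorithm's logic.

    import random

    def gcd(a, b):
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
        # until the second value is zero; the first value is then the gcd.
        while b != 0:
            a, b = b, a % b
        return a

    def estimate_pi(samples=100_000):
        # Randomized (Monte Carlo) algorithm: the fraction of random points
        # in the unit square that land inside the quarter circle approximates pi/4.
        inside = sum(
            1 for _ in range(samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4 * inside / samples

    print(gcd(252, 105))   # 21
    print(estimate_pi())   # roughly 3.14, varies from run to run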


Automata theory

Automata theory is the study of ''abstract machines'' and ''automata'', as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of mathematics and also of computer science). ''Automata'' comes from the Greek word αὐτόματα, meaning "self-acting". Automata theory is the study of self-operating virtual machines as an aid to the logical understanding of input and output processes, with or without intermediate stage(s) of computation (or any function/process).
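
A minimal sketch (in Python, with a made-up example machine) of such an abstract machine: a deterministic finite automaton that accepts exactly the binary strings containing an even number of 1s. The state names and transition table are illustrative only.

    # Deterministic finite automaton over the alphabet {"0", "1"} that
    # accepts strings containing an even number of 1s.
    TRANSITIONS = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    START, ACCEPTING = "even", {"even"}

    def accepts(word):
        state = START
        for symbol in word:
            state = TRANSITIONS[(state, symbol)]   # one step of the machine
        return state in ACCEPTING

    print(accepts("1011"))  # False: three 1s
    print(accepts("1001"))  # True: two 1s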


Coding theory

Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error correction and, more recently, also for network coding. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
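
To make the idea of adding and removing redundancy concrete, here is a minimal Python sketch (illustrative only) of a 3-fold repetition code: each bit is transmitted three times and the decoder corrects any single flipped bit per group by majority vote. Real codes such as Hamming or Reed–Solomon codes are far more efficient, but the principle is the same.

    def encode(bits):
        # Repetition code: send every bit three times.
        return [b for b in bits for _ in range(3)]

    def decode(received):
        # Majority vote over each group of three corrects one error per group.
        decoded = []
        for i in range(0, len(received), 3):
            group = received[i:i + 3]
            decoded.append(1 if sum(group) >= 2 else 0)
        return decoded

    message = [1, 0, 1]
    sent = encode(message)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
    sent[4] = 1                     # simulate a single transmission error
    print(decode(sent) == message)  # True: the error is corrected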


Computational biology

Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems. The field is broadly defined and includes foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, anatomy, neuroscience, and visualization. Computational biology is different from biological computation, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers, but is similar to bioinformatics, which is an interdisciplinary science using computers to store and process biological data.


Computational complexity theory

Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity), and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
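
The contrast between solving and verifying can be made concrete with the NP-complete subset-sum problem. In this illustrative Python sketch (not a reference implementation), the brute-force solver inspects all 2^n subsets, so its running time grows exponentially with the input size, whereas checking a proposed certificate takes only a linear number of steps.

    from itertools import combinations

    def solve_subset_sum(numbers, target):
        # Brute force: try every subset -- 2**len(numbers) candidates.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    def verify_certificate(numbers, target, certificate):
        # Verification is easy: one pass to check membership and the sum.
        return all(x in numbers for x in certificate) and sum(certificate) == target

    nums = [3, 34, 4, 12, 5, 2]
    print(solve_subset_sum(nums, 9))            # (4, 5)
    print(verify_certificate(nums, 9, (4, 5)))  # True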


Computational geometry

Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometric location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).
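
A small illustration (a Python sketch, not from the article) of a classical computational-geometry task: computing the convex hull of a point set with Andrew's monotone chain algorithm, which runs in O(n log n) time after sorting.

    def cross(o, a, b):
        # Cross product of vectors OA and OB; positive for a counter-clockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def convex_hull(points):
        points = sorted(set(points))
        if len(points) <= 2:
            return points
        def half_hull(pts):
            hull = []
            for p in pts:
                while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                    hull.pop()          # drop points that make a non-left turn
                hull.append(p)
            return hull
        lower = half_hull(points)
        upper = half_hull(reversed(points))
        return lower[:-1] + upper[:-1]  # endpoints are shared, drop duplicates

    print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
    # [(0, 0), (2, 0), (2, 2), (0, 2)]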


Computational learning theory

Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that the algorithm has never seen before. The goal of a supervised learning algorithm is to optimize some measure of performance, such as minimizing the number of mistakes made on new samples.
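
As a toy illustration of this setting (a Python sketch with made-up data, not a reference implementation), a one-nearest-neighbour classifier is induced from labeled samples and then assigns labels to samples it has never seen:

    import math

    # Labeled training samples: (feature vector, label); the data are invented.
    training = [((1.0, 1.0), "edible"), ((1.2, 0.8), "edible"),
                ((5.0, 5.0), "poisonous"), ((4.8, 5.3), "poisonous")]

    def classify(sample):
        # 1-nearest-neighbour: copy the label of the closest training sample.
        _, label = min(training, key=lambda pair: math.dist(pair[0], sample))
        return label

    print(classify((0.9, 1.1)))  # "edible"
    print(classify((5.1, 4.9)))  # "poisonous"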


Computational number theory

Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number-theoretic computations. The best known problem in the field is integer factorization.
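
A minimal Python sketch of that best-known problem, integer factorization, using trial division. Practical algorithms (Pollard's rho, the quadratic sieve, the number field sieve) are far more sophisticated; this only illustrates what a number-theoretic computation looks like.

    def factor(n):
        # Trial division: repeatedly divide out the smallest remaining factor.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)   # whatever remains is prime
        return factors

    print(factor(8051))     # [83, 97]
    print(factor(2 ** 10))  # [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]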


Cryptography

Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and that are related to various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
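
Since the text mentions the one-time pad as an information-theoretically secure scheme, here is a minimal Python sketch of it: the plaintext is XOR-ed with a truly random, single-use key of the same length. The key material below is generated on the fly purely for illustration.

    import secrets

    def xor_bytes(data, key):
        # One-time pad: XOR every plaintext byte with the matching key byte.
        return bytes(d ^ k for d, k in zip(data, key))

    plaintext = b"attack at dawn"
    key = secrets.token_bytes(len(plaintext))   # single-use random key

    ciphertext = xor_bytes(plaintext, key)      # encryption
    recovered = xor_bytes(ciphertext, key)      # decryption is the same operation

    print(recovered == plaintext)               # True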


Data structures

A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes for small percentages of data retrieval, and compilers and databases use dynamic hash tables as look-up tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory.
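
To make the role of a data structure concrete, this Python sketch (illustrative only, fixed size, no resizing) implements a tiny hash table with separate chaining, the kind of structure compilers and databases use as a look-up table; Python's built-in dict is a highly optimized version of the same idea.

    class HashTable:
        """A tiny hash table with separate chaining (illustrative only)."""

        def __init__(self, buckets=8):
            self.buckets = [[] for _ in range(buckets)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def put(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:                 # overwrite an existing key
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))

        def get(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)

    table = HashTable()
    table.put("x", 1)
    table.put("y", 2)
    print(table.get("y"))   # 2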


Distributed computation

Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications, and blockchain networks like Bitcoin. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency.
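
A very small sketch (Python, using threads and in-process queues purely for illustration) of the message-passing style described above: two components with no shared state coordinate by exchanging messages over queues.

    import threading
    import queue

    def worker(inbox, outbox):
        # A component that receives a message, acts on it, and replies.
        while True:
            message = inbox.get()
            if message is None:          # shutdown signal
                return
            outbox.put(("result", message["value"] * 2))

    requests, replies = queue.Queue(), queue.Queue()
    thread = threading.Thread(target=worker, args=(requests, replies))
    thread.start()

    requests.put({"value": 21})          # pass a message to the other component
    print(replies.get())                 # ('result', 42)

    requests.put(None)                   # ask the worker to stop
    thread.join()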


Information-based complexity

Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems such as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration.
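
As an informal illustration of a continuous problem of the kind IBC studies, the Python sketch below approximates a ten-dimensional integral by Monte Carlo sampling; IBC analyses how the error of such algorithms depends on the amount of information used (here, the number of function evaluations).

    import random

    def monte_carlo_integral(f, dim, samples=100_000):
        # Approximate the integral of f over the unit cube [0, 1]^dim
        # by averaging f at uniformly random sample points.
        total = 0.0
        for _ in range(samples):
            point = [random.random() for _ in range(dim)]
            total += f(point)
        return total / samples

    # The integral of sum(x_i) over [0, 1]^10 is exactly 5.
    print(monte_carlo_integral(lambda x: sum(x), dim=10))  # close to 5.0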


Formal methods

Formal methods are a particular kind of mathematics-based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification.
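
In the spirit of formal verification, the following Python sketch (a toy example, not a real model checker or any standard tool) exhaustively explores the reachable states of a tiny transition system and checks that an invariant holds in every one of them:

    # Toy model: a counter that starts at 0 and may be incremented (mod 4) or reset.
    def successors(state):
        return {(state + 1) % 4, 0}

    def check_invariant(initial, invariant):
        # Exhaustive exploration of every reachable state.
        seen, frontier = set(), [initial]
        while frontier:
            state = frontier.pop()
            if state in seen:
                continue
            if not invariant(state):
                return False, state      # counterexample found
            seen.add(state)
            frontier.extend(successors(state))
        return True, None

    print(check_invariant(0, lambda s: 0 <= s < 4))  # (True, None)
    print(check_invariant(0, lambda s: s != 3))      # (False, 3)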


Information theory

Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis. Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information.
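
A short Python sketch (illustrative only) of the central quantity of information theory: the Shannon entropy of a message, which lower-bounds how many bits per symbol any lossless compressor needs on average.

    from collections import Counter
    from math import log2

    def entropy(message):
        # H = -sum over symbols of p(symbol) * log2 p(symbol)
        counts = Counter(message)
        total = len(message)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    print(entropy("aaaa"))            # 0.0 bits/symbol: no uncertainty
    print(entropy("abab"))            # 1.0 bits/symbol
    print(round(entropy("abcd"), 2))  # 2.0 bits/symbol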


Machine learning

Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that to make predictions or decisions, rather than following only explicitly programmed instructions. Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR) (Wernick, Yang, Brankov, Yourganov and Strother, "Machine Learning in Medical Imaging", ''IEEE Signal Processing Magazine'', vol. 27, no. 4, July 2010, pp. 25–38), search engines, and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field."


Parallel computation

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. (Asanovic, Krste, et al. (December 18, 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183: "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance ... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit.")

Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a single program as a result of parallelization is known as Amdahl's law.
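
Amdahl's law can be illustrated with a short calculation. The following minimal sketch in Python (written for this article, not drawn from any particular library) computes the theoretical speed-up 1 / ((1 - p) + p/n) for a program whose parallelizable fraction is p, run on n processors:

    def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
        """Theoretical speed-up under Amdahl's law.

        parallel_fraction: fraction of the program that can run in parallel (0..1).
        processors: number of processors applied to the parallel part.
        """
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / processors)

    # Even with many processors, a 5% serial part caps the speed-up near 20x.
    for n in (2, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 2))

The example shows why the serial fraction, not the processor count, quickly becomes the limiting factor.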


Program semantics

In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved; a syntactically illegal string is assigned no meaning and yields no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be done by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, hence creating a model of computation.
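
A minimal sketch of the idea, using a toy expression language invented here purely for illustration (the function name meaning and the tuple encoding are hypothetical, not a standard formalism): a denotational-style semantics maps every syntactically legal expression to the value it denotes, and assigns no meaning to anything else.

    # A toy expression language: an expression is ("num", n),
    # ("add", e1, e2), or ("mul", e1, e2).

    def meaning(expr):
        """Denotational-style semantics: map a legal expression to the number it denotes."""
        tag = expr[0]
        if tag == "num":
            return expr[1]
        if tag == "add":
            return meaning(expr[1]) + meaning(expr[2])
        if tag == "mul":
            return meaning(expr[1]) * meaning(expr[2])
        raise ValueError("syntactically illegal expression: no meaning assigned")

    # (2 + 3) * 4 denotes 20.
    program = ("mul", ("add", ("num", 2), ("num", 3)), ("num", 4))
    print(meaning(program))  # 20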


Quantum computation

A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. Quantum computing is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.
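
The distinction between bits and qubits can be made concrete numerically. The following sketch (an illustrative classical simulation using NumPy, not a description of real quantum hardware) represents a qubit as a unit vector of two complex amplitudes and applies a Hadamard gate to the basis state |0>, producing an equal superposition whose measurement probabilities are the squared magnitudes of the amplitudes:

    import numpy as np

    # A qubit is a unit vector of two complex amplitudes over the basis {|0>, |1>}.
    ket0 = np.array([1.0, 0.0], dtype=complex)

    # The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    state = H @ ket0
    probabilities = np.abs(state) ** 2   # Born rule: |amplitude|^2

    print(state)          # [0.707...+0.j 0.707...+0.j]
    print(probabilities)  # [0.5 0.5] -- measurement yields 0 or 1 with equal probability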


Symbolic computation

Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered distinct fields, because scientific computing is usually based on numerical computation with approximate floating-point numbers, while symbolic computation emphasizes ''exact'' computation with expressions containing variables that have no given value and are thus manipulated as symbols (hence the name ''symbolic computation''). Software applications that perform symbolic calculations are called ''computer algebra systems'', with the term ''system'' alluding to the complexity of the main applications, which include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input and output of mathematical expressions, and a large set of routines to perform usual operations, such as simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration.
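
As an illustration of exact symbolic manipulation, the following sketch uses SymPy, an open-source computer algebra system for Python; diff, factor and integrate are standard SymPy routines, and the specific expressions are chosen here only as examples:

    import sympy as sp

    x = sp.symbols('x')

    # Exact differentiation via the chain rule: d/dx sin(x**2) = 2*x*cos(x**2)
    print(sp.diff(sp.sin(x**2), x))

    # Polynomial factorization, kept exact rather than approximated numerically
    print(sp.factor(x**4 - 1))            # (x - 1)*(x + 1)*(x**2 + 1)

    # Indefinite integration returns a symbolic antiderivative of x*e**x
    print(sp.integrate(x * sp.exp(x), x))  # equal to (x - 1)*exp(x)

The results are exact expressions in the symbol x, in contrast with the approximate floating-point values a numerical package would return.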


Very-large-scale integration

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip.


Organizations

* European Association for Theoretical Computer Science
* SIGACT
* Simons Institute for the Theory of Computing


Journals and newsletters

* "
Discrete Mathematics and Theoretical Computer Science ''Discrete Mathematics & Theoretical Computer Science'' is a peer-reviewed open access scientific journal covering discrete mathematics and theoretical computer science. It was established in 1997 by Daniel Krob (Paris Diderot University). Since 2 ...
" * ''
Information and Computation ''Information and Computation'' is a closed-access computer science journal published by Elsevier (formerly Academic Press). The journal was founded in 1957 under its former name ''Information and Control'' and given its current title in 1987. , t ...
'' * ''
Theory of Computing ''Theory of Computing'' is a peer-reviewed open access scientific journal covering theoretical computer science. The journal was established in 2005 and is published by the Department of Computer Science of the University of Chicago. The editor- ...
'' (
open access Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of access charges or other barriers. With open access strictly defined (according to the 2001 definition), or libre o ...
journal) * '' Formal Aspects of Computing'' * ''
Journal of the ACM The ''Journal of the ACM'' is a peer-reviewed scientific journal covering computer science in general, especially theoretical aspects. It is an official journal of the Association for Computing Machinery. Its current editor-in-chief An editor-in-c ...
'' * ''
SIAM Journal on Computing The ''SIAM Journal on Computing'' is a scientific journal focusing on the mathematical and formal aspects of computer science. It is published by the Society for Industrial and Applied Mathematics (SIAM). Although its official ISO abbreviation is ...
'' (SICOMP) * '' SIGACT News'' * ''
Theoretical Computer Science Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory. It is difficult to circumsc ...
'' * '' Theory of Computing Systems'' * ''
International Journal of Foundations of Computer Science The ''International Journal of Foundations of Computer Science'' is a computer science journal published by World Scientific. It was founded in 1990, covering the field of theoretical computer science, from algebraic theory and algorithms, to qu ...
'' * ''
Chicago Journal of Theoretical Computer Science (''City in a Garden''); I Will , image_map = , map_caption = Interactive Map of Chicago , coordinates = , coordinates_footnotes = , subdivision_type = Country , subdivision_name ...
'' (
open access Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of access charges or other barriers. With open access strictly defined (according to the 2001 definition), or libre o ...
journal) * '' Foundations and Trends in Theoretical Computer Science'' * '' Journal of Automata, Languages and Combinatorics'' * ''
Acta Informatica ''Acta Informatica'' is a peer-reviewed scientific journal publishing original research papers in computer science. The journal is known mostly for publications in theoretical computer science. One of the two 1988 papers awarded the Gödel Prize ...
'' * '' Fundamenta Informaticae'' * ''
ACM Transactions on Computation Theory ACM or A.C.M. may refer to: Aviation * AGM-129 ACM, 1990–2012 USAF cruise missile * Air chief marshal * Air combat manoeuvring or dogfighting * Air cycle machine * Arica Airport (Colombia) (IATA: ACM), in Arica, Amazonas, Colombia Computing ...
'' * ''
Computational Complexity In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) ...
'' * '' Journal of Complexity'' * ACM Transactions on Algorithms * Information Processing Letters
Open Computer Science
(
open access Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of access charges or other barriers. With open access strictly defined (according to the 2001 definition), or libre o ...
journal)


Conferences

* Annual ACM Symposium on Theory of Computing (STOC), ranked tier A+ in the 2007 Australian Ranking of ICT Conferences
* Annual IEEE Symposium on Foundations of Computer Science (FOCS)
* Innovations in Theoretical Computer Science (ITCS)
* Mathematical Foundations of Computer Science (MFCS)
* International Computer Science Symposium in Russia (CSR)
* ACM–SIAM Symposium on Discrete Algorithms (SODA)
* IEEE Symposium on Logic in Computer Science (LICS)
* Computational Complexity Conference (CCC)
* International Colloquium on Automata, Languages and Programming (ICALP)
* Annual Symposium on Computational Geometry (SoCG), ranked tier A in the 2007 Australian Ranking of ICT Conferences
* ACM Symposium on Principles of Distributed Computing (PODC)
* ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)
* Annual Conference on Learning Theory (COLT)
* Symposium on Theoretical Aspects of Computer Science (STACS)
* European Symposium on Algorithms (ESA)
* Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX)
* Workshop on Randomization and Computation (RANDOM)
* International Symposium on Algorithms and Computation (ISAAC)
* International Symposium on Fundamentals of Computation Theory (FCT) (FCT 2011, retrieved 2013-06-03)
* International Workshop on Graph-Theoretic Concepts in Computer Science (WG)


See also

* Formal science
* Unsolved problems in computer science
* List of important publications in theoretical computer science
* Sun–Ni law


Notes


Further reading

* Martin Davis, Ron Sigal, Elaine J. Weyuker, ''Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science'', 2nd ed., Academic Press, 1994. Covers theory of computation, but also program semantics and quantification theory. Aimed at graduate students.


External links


* SIGACT directory of additional theory links
* Theory Matters Wiki: Theoretical Computer Science (TCS) Advocacy Wiki
* List of academic conferences in the area of theoretical computer science at confsearch
* Theoretical Computer Science on StackExchange, a question and answer site for researchers in theoretical computer science
* Computer Science Animated
* Theory of computation at the Massachusetts Institute of Technology