Theory of Computation

In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved, or to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?"

In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So, in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory.
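To make the formalism concrete, here is a minimal sketch of a one-tape Turing machine simulator in Python. The simulator, the instruction encoding, and the example machine (which simply inverts a binary string) are all invented for illustration; they are not a standard library or a canonical formulation.

    # A minimal sketch of a deterministic, one-tape Turing machine simulator.
    # The example machine below is made up: it inverts a binary string, then halts.

    def run_turing_machine(rules, tape, start_state, accept_state, blank="_"):
        """Run the machine until it enters accept_state; return the tape contents."""
        cells = dict(enumerate(tape))          # sparse tape: position -> symbol
        state, head = start_state, 0
        while state != accept_state:
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]   # look up the rule table
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # (state, symbol read) -> (symbol to write, head move, next state)
    invert_rules = {
        ("scan", "0"): ("1", "R", "scan"),
        ("scan", "1"): ("0", "R", "scan"),
        ("scan", "_"): ("_", "R", "done"),   # blank reached: halt
    }

    print(run_turing_machine(invert_rules, "10110", "scan", "done"))  # prints 01001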


History

The theory of computation can be considered the creation of models of all kinds in the field of computer science; it therefore draws on mathematics and logic. In the last century it separated from mathematics and became an independent academic discipline, with its own conferences such as FOCS (first held in 1960) and STOC (first held in 1969), and its own awards such as the IMU Abacus Medal (established in 1981 as the Rolf Nevanlinna Prize), the Gödel Prize (established in 1993), and the Knuth Prize (established in 1996). Some pioneers of the theory of computation were Ramon Llull, Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, Rózsa Péter, John von Neumann, and Claude Shannon.


Branches


Automata theory

Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. The word comes from the Greek Αυτόματα, meaning "self-acting". Automata theory is closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability.
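As a concrete illustration, the following Python sketch encodes a deterministic finite automaton that recognizes the language of binary strings containing an even number of 1s. The state names and transition table are invented for this example; any textbook DFA could be plugged in the same way.

    # A deterministic finite automaton (DFA) over the alphabet {0, 1} that
    # accepts exactly the strings containing an even number of 1s.

    DFA = {
        "start": "even",
        "accepting": {"even"},
        # (current state, input symbol) -> next state
        "delta": {
            ("even", "0"): "even", ("even", "1"): "odd",
            ("odd", "0"): "odd",   ("odd", "1"): "even",
        },
    }

    def accepts(dfa, word):
        """Return True if the automaton accepts the given string."""
        state = dfa["start"]
        for symbol in word:
            state = dfa["delta"][(state, symbol)]
        return state in dfa["accepting"]

    print(accepts(DFA, "1010"))  # True  (two 1s)
    print(accepts(DFA, "111"))   # False (three 1s)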


Formal language theory

Formal language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it (this is the Chomsky hierarchy), and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.
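To make the idea of language classes concrete, the hand-rolled recognizer below (a hypothetical example, not a standard tool) decides membership in the language { a^n b^n : n >= 0 }. This language is context-free but not regular: a finite automaton cannot recognize it, while a pushdown automaton can, because the unbounded counter below plays the role of the pushdown stack.

    # Membership test for { a^n b^n : n >= 0 }, a context-free language that no
    # finite automaton can recognize.  A single counter stands in for the stack
    # of a pushdown automaton.

    def in_anbn(word):
        count = 0
        seen_b = False
        for symbol in word:
            if symbol == "a":
                if seen_b:          # an 'a' after a 'b' is never allowed
                    return False
                count += 1          # "push"
            elif symbol == "b":
                seen_b = True
                count -= 1          # "pop"
                if count < 0:
                    return False    # more b's than a's so far
            else:
                return False        # symbol outside the alphabet {a, b}
        return count == 0

    print(in_anbn("aaabbb"))  # True
    print(in_anbn("aabbb"))   # False
    print(in_anbn(""))        # True (n = 0)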


Computability theory

Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result. Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property. Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory.
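The diagonal argument behind the undecidability of the halting problem can be phrased as a short program sketch. In the Python-style sketch below, halts is a hypothetical total decider that does not and cannot exist; the point is only to show why assuming it leads to a contradiction.

    # Sketch of the diagonal argument for the halting problem.  'halts' is a
    # *hypothetical* decider; its existence leads to a contradiction, so no
    # such program can actually be written.

    def halts(program, argument):
        """Hypothetical: return True iff program(argument) eventually halts."""
        raise NotImplementedError("no such total decider exists")

    def paradox(program):
        # Do the opposite of whatever 'halts' predicts about running
        # 'program' on its own source.
        if halts(program, program):
            while True:          # loop forever
                pass
        return "halted"

    # If 'halts' existed, halts(paradox, paradox) could be neither True nor
    # False without contradicting paradox's own behaviour.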


Computational complexity theory

Computational complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation, and how much memory is required to perform that computation.

In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly with the size of the problem. To simplify this problem, computer scientists have adopted big O notation, which allows functions to be compared in a way that ensures that particular aspects of a machine's construction do not need to be considered, but rather only the asymptotic behavior as problems become large. So in our previous example, we might say that the problem requires O(n) steps to solve, as the sketch below makes explicit.
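A minimal sketch of the search example just described: a linear scan through an unsorted list, instrumented to count comparisons. In the worst case the count equals n, which is why the problem is said to take O(n) steps. (The instrumentation is an illustrative choice, not part of any standard definition.)

    # Linear search through an unsorted list, counting comparisons to make the
    # O(n) worst-case behaviour visible.

    def linear_search(items, target):
        steps = 0
        for index, value in enumerate(items):
            steps += 1                      # one comparison per element examined
            if value == target:
                return index, steps
        return -1, steps                    # not found: every element was examined

    numbers = [7, 3, 9, 1, 4, 8, 2]
    print(linear_search(numbers, 4))    # (4, 5): found after 5 comparisons
    print(linear_search(numbers, 99))   # (-1, 7): worst case, n = 7 comparisons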
Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and the P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The official problem description was given by Turing Award winner Stephen Cook.


Models of computation

Aside from a Turing machine, other equivalent (see Church–Turing thesis) models of computation are in use.

Lambda calculus: A computation consists of an initial lambda expression (or two, if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction.

Combinatory logic: A concept which has many similarities to the lambda calculus, but with important differences (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in the lambda calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics more economical (conceptually), and eliminating the notion of variables (thus clarifying their role in mathematics).

μ-recursive functions: A computation consists of a mu-recursive function, i.e. its defining sequence, any input value(s), and a sequence of recursive functions appearing in the defining sequence with inputs and outputs. Thus, if in the defining sequence of a recursive function f(x) the functions g(x) and h(x,y) appear, then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear. Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or μ-recursion. For instance, if f(x)=h(x,g(x)), then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs.

Markov algorithm: A string rewriting system that uses grammar-like rules to operate on strings of symbols.

Register machine: A theoretically interesting idealization of a computer. There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number); for example, only decrementation (combined with conditional jump), incrementation, and halting exist. The lack of an infinite (or dynamically growing) external store (as seen in Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number makes it possible to represent a complicated thing (e.g. a sequence, or a matrix) by an appropriately huge natural number; unambiguity of both representation and interpretation can be established by the number-theoretical foundations of these techniques. A minimal simulator is sketched below.
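The following Python sketch shows the register-machine idea in miniature. The three-instruction format (increment; decrement-or-jump-if-zero; halt) and the example program are invented for illustration; published presentations of counter machines differ in their exact instruction sets.

    # A tiny register (counter) machine simulator.  Instruction set (invented
    # for this sketch):
    #   ("INC", r)        increment register r, continue with next instruction
    #   ("DECJZ", r, k)   if register r is 0, jump to instruction k;
    #                     otherwise decrement r and continue
    #   ("HALT",)         stop

    def run_register_machine(program, registers):
        registers = dict(registers)          # register index -> natural number
        pc = 0                               # program counter
        while program[pc][0] != "HALT":
            op = program[pc]
            if op[0] == "INC":
                registers[op[1]] = registers.get(op[1], 0) + 1
                pc += 1
            else:                            # "DECJZ"
                if registers.get(op[1], 0) == 0:
                    pc = op[2]
                else:
                    registers[op[1]] -= 1
                    pc += 1
        return registers

    # Example program: add register 1 into register 0 (register 1 ends at 0).
    add_program = [
        ("DECJZ", 1, 3),   # 0: if r1 == 0 jump to HALT, else r1 -= 1
        ("INC", 0),        # 1: r0 += 1
        ("DECJZ", 2, 0),   # 2: r2 stays 0, so this is an unconditional jump to 0
        ("HALT",),         # 3
    ]
    print(run_register_machine(add_program, {0: 2, 1: 3}))  # {0: 5, 1: 0}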
In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Another formalism mathematically equivalent to regular expressions, finite automata are used in circuit design and in some kinds of problem solving. Context-free grammars specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions.

Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in this way the Chomsky hierarchy of languages is obtained.
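As a small illustration of these restricted models, the regular expression below (an arbitrary example pattern, written with Python's built-in re module) describes the regular language of optionally signed decimal integers; an equivalent finite automaton exists for the same language.

    import re

    # A regular expression for optionally signed decimal integers, a regular
    # language; a finite automaton could recognize exactly the same strings.
    INTEGER = re.compile(r"^[+-]?[0-9]+$")

    for text in ["42", "-7", "+300", "3.14", "abc"]:
        print(text, bool(INTEGER.match(text)))
    # 42 True, -7 True, +300 True, 3.14 False, abc False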


Further reading

Textbooks aimed at computer scientists (there are many textbooks in this area; this list is by necessity incomplete):
* Hein, James L. (1996). Theory of Computation. Sudbury, MA: Jones & Bartlett. A gentle introduction to the field, appropriate for second-year undergraduate computer science students.
* Taylor, R. Gregory (1998). Models of Computation and Formal Languages. New York: Oxford University Press. An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students.
* Kleinberg, Jon, and Éva Tardos (2006). Algorithm Design. Pearson/Addison-Wesley. ISBN 978-0-32129535-4.
* Lewis, F. D. (2007). Essentials of Theoretical Computer Science. A textbook covering the topics of formal languages, automata and grammars. The emphasis appears to be on presenting an overview of the results and their applications rather than providing proofs of the results.
* Davis, Martin, Ron Sigal, and Elaine J. Weyuker (1994). Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, 2nd ed. Academic Press. Covers a wider range of topics than most other introductory books, including program semantics and quantification theory. Aimed at graduate students.

Books on computability theory from the (wider) mathematical perspective:
* Rogers, Hartley, Jr. (1987). Theory of Recursive Functions and Effective Computability. MIT Press.
* Smith, Carl H. (1994). A Recursive Introduction to the Theory of Computation. Springer. A shorter textbook suitable for graduate students in computer science.


External links


Theory of Computation at MIT
Theory of Computation at Harvard
Computability Logic: a theory of interactive computation; the main web source on this subject.