In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning.


Motivation

Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise ''Principia Mathematica''. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations such as Fitch-style calculus (or Fitch's diagrams) and Suppes' method, of which Lemmon gave a variant called system L.

Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1934, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term ''natural deduction'' (or rather, its German equivalent ''natürliches Schließen'') was coined in that paper. Gentzen was motivated by a desire to establish the consistency of number theory. He was unable to prove the main result required for the consistency result, the cut-elimination theorem (the Hauptsatz), directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz for both classical and intuitionistic logic. In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph ''Natural Deduction: A Proof-Theoretical Study'' was to become a reference work on natural deduction, and included applications for modal and second-order logic.

In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives.


Judgments and propositions

A ''judgment'' is something that is knowable, that is, an object of knowledge. It is ''evident'' if one in fact knows it. Thus "''it is raining''" is a judgment, which is evident for the one who knows that it is actually raining; in this case one may readily find evidence for the judgment by looking out the window or stepping out of the house. In mathematical logic, however, evidence is often not as directly observable, but rather deduced from more basic evident judgments. The process of deduction is what constitutes a ''proof''; in other words, a judgment is evident if one has a proof for it.

The most important judgments in logic are of the form "''A is true''". The letter ''A'' stands for any expression representing a proposition; the truth judgments thus require a more primitive judgment: "''A is a proposition''". Many other judgments have been studied; for example, "''A is false''" (see classical logic), "''A is true at time t''" (see temporal logic), "''A is necessarily true''" or "''A is possibly true''" (see modal logic), "''the program M has type τ''" (see programming languages and type theory), "''A is achievable from the available resources''" (see linear logic), and many others. To start with, we shall concern ourselves with the simplest two judgments, "''A is a proposition''" and "''A is true''", abbreviated as "''A'' prop" and "''A'' true" respectively.

The judgment "''A'' prop" defines the structure of valid proofs of ''A'', which in turn defines the structure of propositions. For this reason, the inference rules for this judgment are sometimes known as ''formation rules''. To illustrate, if we have two propositions ''A'' and ''B'' (that is, the judgments "''A'' prop" and "''B'' prop" are evident), then we form the compound proposition ''A and B'', written symbolically as "A \wedge B". We can write this in the form of an inference rule:

:\frac{A \hbox{ prop} \qquad B \hbox{ prop}}{(A \wedge B) \hbox{ prop}}\ \wedge_F

where the parentheses are omitted to make the inference rule more succinct:

:\frac{A \hbox{ prop} \qquad B \hbox{ prop}}{A \wedge B \hbox{ prop}}\ \wedge_F

This inference rule is ''schematic'': ''A'' and ''B'' can be instantiated with any expression. The general form of an inference rule is:

:\frac{J_1 \qquad J_2 \qquad \cdots \qquad J_n}{J}\ \hbox{name}

where each J_i is a judgment and the inference rule is named "name". The judgments above the line are known as ''premises'', and the judgment below the line is known as the ''conclusion''. Other common logical propositions are disjunction (A \vee B), negation (\neg A), implication (A \supset B), and the logical constants truth (\top) and falsehood (\bot). Their formation rules are below.

:\frac{A \hbox{ prop} \qquad B \hbox{ prop}}{A \vee B \hbox{ prop}}\ \vee_F \qquad \frac{A \hbox{ prop} \qquad B \hbox{ prop}}{A \supset B \hbox{ prop}}\ \supset_F

:\frac{\ }{\top \hbox{ prop}}\ \top_F \qquad \frac{\ }{\bot \hbox{ prop}}\ \bot_F \qquad \frac{A \hbox{ prop}}{\neg A \hbox{ prop}}\ \neg_F
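The formation rules can be read as an inductive definition of the syntax of propositions. As a hedged illustration (the datatype and constructor names here are my own, not part of the formal system), they correspond to an algebraic datatype:

    -- Formation rules as an algebraic datatype: a value of type Prop is a
    -- proposition built by finitely many applications of the rules above.
    data Prop
      = Atom String      -- an atomic proposition
      | And Prop Prop    -- A ∧ B   (rule ∧F)
      | Or  Prop Prop    -- A ∨ B   (rule ∨F)
      | Imp Prop Prop    -- A ⊃ B   (rule ⊃F)
      | Neg Prop         -- ¬A      (rule ¬F)
      | Top              -- ⊤       (rule ⊤F)
      | Bot              -- ⊥       (rule ⊥F)
      deriving (Eq, Show)

For instance, Imp (Atom "A") (And (Atom "A") (Atom "B")) represents the proposition A ⊃ (A ∧ B).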


Introduction and elimination

Now we discuss the "''A'' true" judgment. Inference rules that introduce a logical connective in the conclusion are known as ''introduction rules''. To introduce conjunctions, ''i.e.'', to conclude "''A and B'' true" for propositions ''A'' and ''B'', one requires evidence for "''A'' true" and "''B'' true". As an inference rule:

:\frac{A \hbox{ true} \qquad B \hbox{ true}}{(A \wedge B) \hbox{ true}}\ \wedge_I

It must be understood that in such rules the objects are propositions. That is, the above rule is really an abbreviation for:

:\frac{A \hbox{ prop} \qquad B \hbox{ prop} \qquad A \hbox{ true} \qquad B \hbox{ true}}{(A \wedge B) \hbox{ true}}\ \wedge_I

This can also be written:

:\frac{(A \wedge B) \hbox{ prop} \qquad A \hbox{ true} \qquad B \hbox{ true}}{(A \wedge B) \hbox{ true}}\ \wedge_I

In this form, the first premise can be satisfied by the \wedge_F formation rule, giving the first two premises of the previous form. In this article we shall elide the "prop" judgments where they are understood. In the nullary case, one can derive truth from no premises.

:\frac{\ }{\top \hbox{ true}}\ \top_I

If the truth of a proposition can be established in more than one way, the corresponding connective has multiple introduction rules.

:\frac{A \hbox{ true}}{(A \vee B) \hbox{ true}}\ \vee_{I1} \qquad \frac{B \hbox{ true}}{(A \vee B) \hbox{ true}}\ \vee_{I2}

Note that in the nullary case, ''i.e.'', for falsehood, there are ''no'' introduction rules. Thus one can never infer falsehood from simpler judgments. Dual to introduction rules are ''elimination rules'', which describe how to deconstruct information about a compound proposition into information about its constituents. Thus, from "''A ∧ B'' true", we can conclude "''A'' true" and "''B'' true":

:\frac{(A \wedge B) \hbox{ true}}{A \hbox{ true}}\ \wedge_{E1} \qquad \frac{(A \wedge B) \hbox{ true}}{B \hbox{ true}}\ \wedge_{E2}

As an example of the use of inference rules, consider commutativity of conjunction. If ''A'' ∧ ''B'' is true, then ''B'' ∧ ''A'' is true; this derivation can be drawn by composing inference rules in such a fashion that premises of a lower inference match the conclusion of the next higher inference.

:\cfrac{\cfrac{(A \wedge B) \hbox{ true}}{B \hbox{ true}}\ \wedge_{E2} \qquad \cfrac{(A \wedge B) \hbox{ true}}{A \hbox{ true}}\ \wedge_{E1}}{(B \wedge A) \hbox{ true}}\ \wedge_I
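Looking ahead to the Curry–Howard correspondence discussed later in this article, the conjunction rules have a direct computational reading: ∧I is pairing and ∧E1/∧E2 are the projections. Under that (hedged) reading, the commutativity derivation above is the following one-line program:

    -- ∧I is pairing, ∧E1/∧E2 are fst/snd; the derivation of B ∧ A from A ∧ B
    -- composes the two projections with a pairing, exactly as in the proof tree.
    commute :: (a, b) -> (b, a)
    commute p = (snd p, fst p)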
The inference figures we have seen so far are not sufficient to state the rules of implication introduction or disjunction elimination; for these, we need a more general notion of ''hypothetical derivation''.


Hypothetical derivations

A pervasive operation in mathematical logic is ''reasoning from assumptions''. For example, consider the following derivation:
:\cfrac{\cfrac{A \wedge (B \wedge C) \hbox{ true}}{(B \wedge C) \hbox{ true}}\ \wedge_{E2}}{B \hbox{ true}}\ \wedge_{E1}

This derivation does not establish the truth of ''B'' as such; rather, it establishes the following fact:

:If ''A ∧ (B ∧ C)'' is true then ''B'' is true.

In logic, one says "''assuming A ∧ (B ∧ C) is true, we show that B is true''"; in other words, the judgment "''B'' true" depends on the assumed judgment "''A ∧ (B ∧ C)'' true". This is a ''hypothetical derivation'', which we write as follows:

:\begin{matrix} A \wedge (B \wedge C) \hbox{ true} \\ \vdots \\ B \hbox{ true} \end{matrix}
The interpretation is: "''B true'' is derivable from ''A ∧ (B ∧ C) true''". Of course, in this specific example we actually know the derivation of "''B'' true" from "''A ∧ (B ∧ C)'' true", but in general we may not ''a priori'' know the derivation. The general form of a hypothetical derivation is:
:\begin{matrix} D_1 \quad D_2 \ \cdots \ D_n \\ \vdots \\ J \end{matrix}
Each hypothetical derivation has a collection of ''antecedent'' derivations (the ''Di'') written on the top line, and a ''succedent'' judgment (''J'') written on the bottom line. Each of the premises may itself be a hypothetical derivation. (For simplicity, we treat a judgment as a premise-less derivation.) The notion of hypothetical judgment is ''internalised'' as the connective of implication. The introduction and elimination rules are as follows.
:\cfrac{\begin{matrix}\cfrac{}{A \hbox{ true}}\ u \\ \vdots \\ B \hbox{ true}\end{matrix}}{(A \supset B) \hbox{ true}}\ \supset_{I^u} \qquad \cfrac{(A \supset B) \hbox{ true} \quad A \hbox{ true}}{B \hbox{ true}}\ \supset_E
In the introduction rule, the antecedent named ''u'' is ''discharged'' in the conclusion. This is a mechanism for delimiting the ''scope'' of the hypothesis: its sole reason for existence is to establish "''B'' true"; it cannot be used for any other purpose, and in particular, it cannot be used below the introduction. As an example, consider the derivation of "''A ⊃ (B ⊃ (A ∧ B))'' true":
:\cfrac{\cfrac{\cfrac{\cfrac{}{A \hbox{ true}}\ u \qquad \cfrac{}{B \hbox{ true}}\ w}{(A \wedge B) \hbox{ true}}\ \wedge_I}{(B \supset (A \wedge B)) \hbox{ true}}\ \supset_{I^w}}{(A \supset (B \supset (A \wedge B))) \hbox{ true}}\ \supset_{I^u}
This full derivation has no unsatisfied premises; however, sub-derivations ''are'' hypothetical. For instance, the derivation of "''B ⊃ (A ∧ B)'' true" is hypothetical with antecedent "''A'' true" (named ''u''). With hypothetical derivations, we can now write the elimination rule for disjunction:
:\cfrac{(A \vee B) \hbox{ true} \quad \begin{matrix}\cfrac{}{A \hbox{ true}}\ u \\ \vdots \\ C \hbox{ true}\end{matrix} \quad \begin{matrix}\cfrac{}{B \hbox{ true}}\ v \\ \vdots \\ C \hbox{ true}\end{matrix}}{C \hbox{ true}}\ \vee_{E^{u,v}}
In words, if ''A ∨ B'' is true, and we can derive "''C'' true" both from "''A'' true" and from "''B'' true", then ''C'' is indeed true. Note that this rule does not commit to either "''A'' true" or "''B'' true". In the zero-ary case, ''i.e.'' for falsehood, we obtain the following elimination rule:
:\frac{\perp \hbox{ true}}{C \hbox{ true}}\ \perp_E
This is read as: if falsehood is true, then any proposition ''C'' is true. Negation is similar to implication.
:\cfrac{\begin{matrix}\cfrac{}{A \hbox{ true}}\ u \\ \vdots \\ p \hbox{ true}\end{matrix}}{\neg A \hbox{ true}}\ \neg_{I^{u,p}} \qquad \frac{\neg A \hbox{ true} \quad A \hbox{ true}}{C \hbox{ true}}\ \neg_E
The introduction rule discharges both the name of the hypothesis ''u'', and the succedent ''p'', ''i.e.'', the proposition ''p'' must not occur in the conclusion ''A''. Since these rules are schematic, the interpretation of the introduction rule is: if from "''A'' true" we can derive for every proposition ''p'' that "''p'' true", then ''A'' must be false, ''i.e.'', "''not A'' true". For the elimination, if both ''A'' and ''not A'' are shown to be true, then there is a contradiction, in which case every proposition ''C'' is true. Because the rules for implication and negation are so similar, it should be fairly easy to see that ''not A'' and ''A ⊃ ⊥'' are equivalent, i.e., each is derivable from the other.
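Continuing the programs-as-proofs reading sketched earlier (this is an illustration, not part of the formal development): implication introduction is λ-abstraction, implication elimination is application, disjunction elimination is case analysis, and falsehood is the empty type, so that ¬A behaves exactly like A ⊃ ⊥.

    import Data.Void (Void, absurd)

    -- ⊃I is λ-abstraction and ⊃E is application; the derivation of
    -- A ⊃ (B ⊃ (A ∧ B)) above is the curried pairing function.
    pairIntro :: a -> b -> (a, b)
    pairIntro a b = (a, b)

    -- ∨E is case analysis; note that it commits to neither disjunct in advance.
    orElim :: Either a b -> (a -> c) -> (b -> c) -> c
    orElim (Left  a) f _ = f a
    orElim (Right b) _ g = g b

    -- ⊥E: from a proof of the empty type anything follows (Data.Void.absurd),
    -- and ¬A is interchangeable with A ⊃ ⊥, i.e. a function into Void.
    type Not a = a -> Void

    notElim :: Not a -> a -> c
    notElim n a = absurd (n a)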


Consistency, completeness, and normal forms

A ''theory'' is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem or its negation is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a ''model''. However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models.

The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the ''strength'' of elimination rules: they must not be so strong that they include knowledge not already contained in their premises. As an example, consider conjunctions.

:\cfrac{\cfrac{\cfrac{\pi_1}{A \hbox{ true}} \qquad \cfrac{\pi_2}{B \hbox{ true}}}{(A \wedge B) \hbox{ true}}\ \wedge_I}{A \hbox{ true}}\ \wedge_{E1} \quad\Rightarrow\quad \cfrac{\pi_1}{A \hbox{ true}}

Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions:

:\cfrac{\pi}{(A \wedge B) \hbox{ true}} \quad\Rightarrow\quad \cfrac{\cfrac{\cfrac{\pi}{(A \wedge B) \hbox{ true}}}{A \hbox{ true}}\ \wedge_{E1} \qquad \cfrac{\cfrac{\pi}{(A \wedge B) \hbox{ true}}}{B \hbox{ true}}\ \wedge_{E2}}{(A \wedge B) \hbox{ true}}\ \wedge_I

These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry–Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be ''normal''. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a ''normal form''. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961. It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.
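Under the Curry–Howard reading, the two checks above for conjunction are exactly β-reduction and η-expansion on pairs. A hedged illustration:

    -- Local reducibility: an introduction (pairing) immediately followed by an
    -- elimination (fst) is a detour; β-reduction removes it.
    detour :: a -> b -> a
    detour x y = fst (x, y)     -- β-reduces to: detour x y = x

    -- Local completeness: any proof p of A ∧ B can be η-expanded so that its
    -- principal connective is introduced by ∧I (pairing of the projections).
    expand :: (a, b) -> (a, b)
    expand p = (fst p, snd p)   -- η-equal to:  expand p = p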


First and higher-order extensions

The logic of the earlier section is an example of a ''single-sorted'' logic, ''i.e.'', a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of ''individuals'' or ''terms''. More precisely, we will add a new kind of judgment, "''t is a term''" (or "''t term''") where ''t'' is schematic. We shall fix a countable set ''V'' of ''variables'', another countable set ''F'' of ''function symbols'', and construct terms with the following formation rules:

:\frac{v \in V}{v \hbox{ term}}\ \hbox{var}_F

and

:\frac{f \in F \qquad t_1 \hbox{ term} \quad t_2 \hbox{ term} \quad \cdots \quad t_n \hbox{ term}}{f(t_1, t_2, \ldots, t_n) \hbox{ term}}\ \hbox{app}_F

For propositions, we consider a third countable set ''P'' of ''predicates'', and define ''atomic predicates over terms'' with the following formation rule:

:\frac{\phi \in P \qquad t_1 \hbox{ term} \quad t_2 \hbox{ term} \quad \cdots \quad t_n \hbox{ term}}{\phi(t_1, t_2, \ldots, t_n) \hbox{ prop}}\ \hbox{pred}_F

The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory, although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula, as in first-order logic, and again in model theory. To these are added a pair of formation rules, defining the notation for ''quantified'' propositions; one for universal (∀) and existential (∃) quantification:

:\frac{x \in V \qquad A \hbox{ prop}}{\forall x.\, A \hbox{ prop}}\ \forall_F \qquad\qquad \frac{x \in V \qquad A \hbox{ prop}}{\exists x.\, A \hbox{ prop}}\ \exists_F

The universal quantifier has the introduction and elimination rules:

:\cfrac{\begin{matrix}\cfrac{}{a \hbox{ term}}\ u \\ \vdots \\ [a/x]A \hbox{ true}\end{matrix}}{\forall x.\, A \hbox{ true}}\ \forall_{I^{u,a}} \qquad\qquad \frac{\forall x.\, A \hbox{ true} \qquad t \hbox{ term}}{[t/x]A \hbox{ true}}\ \forall_E

The existential quantifier has the introduction and elimination rules:

:\frac{[t/x]A \hbox{ true} \qquad t \hbox{ term}}{\exists x.\, A \hbox{ true}}\ \exists_I \qquad\qquad \cfrac{\exists x.\, A \hbox{ true} \quad \begin{matrix}\cfrac{}{a \hbox{ term}}\ u \quad \cfrac{}{[a/x]A \hbox{ true}}\ v \\ \vdots \\ C \hbox{ true}\end{matrix}}{C \hbox{ true}}\ \exists_{E^{a,u,v}}

In these rules, the notation [''t''/''x'']''A'' stands for the substitution of ''t'' for every (visible) instance of ''x'' in ''A'', avoiding capture. As before, the superscripts on the name stand for the components that are discharged: the term ''a'' cannot occur in the conclusion of ∀I (such terms are known as ''eigenvariables'' or ''parameters''), and the hypotheses named ''u'' and ''v'' in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable.

So far, the quantified extensions are ''first-order'': they distinguish propositions from the kinds of objects quantified over.
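As a hedged illustration, the quantifier rules can be checked against a modern proof assistant. In Lean, ∀-elimination is instantiation at a term, ∀-introduction abstracts over an arbitrary (eigen)variable, ∃-introduction packages a witness with a proof, and ∃-elimination uses an existential through an arbitrary witness (the predicate names P and Q below are placeholders):

    variable {α : Type} (P Q : α → Prop)

    -- ∀E: instantiate a universal statement at any term t
    example (h : ∀ x, P x) (t : α) : P t := h t

    -- ∀I: prove P a for an arbitrary a (the eigenvariable condition)
    example (h : ∀ x, P x ∧ Q x) : ∀ x, P x := fun a => (h a).left

    -- ∃I: supply a witness t together with a proof of [t/x]P
    example (t : α) (h : P t) : ∃ x, P x := ⟨t, h⟩

    -- ∃E: use an existential by reasoning from an arbitrary witness
    example (h : ∃ x, P x ∧ Q x) : ∃ x, P x :=
      h.elim (fun a ha => ⟨a, ha.left⟩)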
Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules:

:\cfrac{\begin{matrix}\cfrac{}{p \hbox{ prop}}\ u \\ \vdots \\ A \hbox{ prop}\end{matrix}}{\forall p.\, A \hbox{ prop}}\ \forall_{F2^u} \qquad\qquad \cfrac{\begin{matrix}\cfrac{}{p \hbox{ prop}}\ u \\ \vdots \\ A \hbox{ prop}\end{matrix}}{\exists p.\, A \hbox{ prop}}\ \exists_{F2^u}

A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind.


Different presentations of natural deduction


Tree-like presentations

Gentzen's discharging annotations used to internalise hypothetical judgments can be avoided by representing proofs as a tree of sequents ''Γ ⊢ A'' instead of a tree of ''A true'' judgments.


Sequential presentations

Jaśkowski's representations of natural deduction led to different notations such as Fitch-style calculus (or Fitch's diagrams) and Suppes' method, of which Lemmon gave a variant called system L. Such presentation systems, which are more accurately described as tabular, include the following.

* 1940: In a textbook, Quine indicated antecedent dependencies by line numbers in square brackets, anticipating Suppes' 1957 line-number notation.
* 1950: In a textbook, Quine demonstrated a method of using one or more asterisks to the left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.)
* 1957: An introduction to practical logic theorem proving in a textbook by Suppes indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line.
* 1963: A textbook used sets of line numbers to indicate antecedent dependencies of the lines of sequential logical arguments based on natural deduction inference rules.
* 1965: The entire textbook by Lemmon is an introduction to logic proofs using a method based on that of Suppes.
* 1967: In a textbook, Kleene briefly demonstrated two kinds of practical logic proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system using vertical bar-lines on the left to indicate dependencies.
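To convey the flavour of these tabular systems, here is a sketch of a Lemmon-style (system L) derivation of B ∧ A from the premise A ∧ B; the numbers at the far left list the assumptions on which each line depends:

    1 (1) A ∧ B      A  (assumption)
    1 (2) B          1  ∧E
    1 (3) A          1  ∧E
    1 (4) B ∧ A      2,3  ∧I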


Proofs and type theory

The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a ''proof''. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with ''proof variables'' (from some countable set ''V'' of variables), and decorate the succedent with the actual proof. The antecedents or ''hypotheses'' are separated from the succedent by means of a ''turnstile'' (⊢). This modification sometimes goes under the name of ''localised hypotheses''. The collection of hypotheses will be written as Γ when their exact composition is not relevant.

To make proofs explicit, we move from the proof-less judgment "''A true''" to a judgment "π ''is a proof of (A true)''", which is written symbolically as "π : ''A true''". Following the standard approach, proofs are specified with their own formation rules for the judgment "π ''proof''". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself. For brevity, we shall leave off the judgmental label ''true'' in the rest of this article, ''i.e.'', write "Γ ⊢ π : ''A''".

Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; thus the proofs are a pair of projections: first (fst) and second (snd). For implication, the introduction form localises or ''binds'' the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, ''u'':''A''" stands for the collection of hypotheses Γ, together with the additional hypothesis ''u''.

With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a ''substitution theorem'', and can be proved by induction on the depth (or structure) of the second judgment.

; Substitution theorem : ''If'' Γ ⊢ π1 : ''A'' ''and'' Γ, ''u'':''A'' ⊢ π2 : ''B'', ''then'' Γ ⊢ [π1/''u'']π2 : ''B''.

So far the judgment "Γ ⊢ π : ''A''" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as ''types'', and proofs as programs in the lambda calculus. Thus the interpretation of "π : ''A''" is "''the program'' π has type ''A''". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as ''simple type theory'' from the previous sections.

The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as ''canonical forms'' or ''values''. If every program can be reduced to a canonical form, then the type theory is said to be ''normalising'' (or ''weakly normalising''). If the canonical form is unique, then the theory is said to be ''strongly normalising''. Normalisability is a rare feature of most non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥ ''true''". For this reason, the ''propositions as types; proofs as programs'' paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.


Example: Dependent Type Theory

Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory, is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃. They are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules; a small sketch at the end of this section illustrates both. Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price: either typechecking is undecidable (extensional type theory), or extensional reasoning is more difficult (intensional type theory). For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable ''index domain'', for example integers, strings, or linear programs.

Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as ''parametric polymorphism''; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called ''predicative polymorphism''; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as ''impredicative polymorphism''. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.

The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.
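A hedged Lean sketch of the two dependent quantifiers mentioned above (the names Vec, replicate and fromList are my own): the Π-type generalises the function arrow, since the result type mentions the argument, and the Σ-type generalises the product, since the type of the second component mentions the first.

    -- A type indexed by a program (a natural number): vectors of a fixed length.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- A Π-type: the codomain Vec α n depends on the argument n.
    def replicate (α : Type) (a : α) : (n : Nat) → Vec α n
      | 0     => Vec.nil
      | n + 1 => Vec.cons a (replicate α a n)

    -- A Σ-type: a length packaged together with a vector of that length.
    def fromList {α : Type} : List α → (n : Nat) × Vec α n
      | []      => ⟨0, Vec.nil⟩
      | a :: as =>
        let ⟨n, v⟩ := fromList as
        ⟨n + 1, Vec.cons a v⟩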


Classical and modal logics

For simplicity, the logics presented so far have been intuitionistic. Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle:

:''For any proposition p, the proposition p ∨ ¬p is true.''

This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting. (XM3 is merely XM2 expressed in terms of ⊥E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.

A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgment ''A true'' with a more classical notion, reminiscent of the sequent calculus: in localised form, instead of Γ ⊢ ''A'', he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants; a small sketch of this correspondence appears at the end of this section. (See also: first class control.)

Another important extension was for modal and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Prawitz in 1965, and have since accumulated a large body of related work. To give a simple example, the modal logic S4 requires one new judgment, "''A valid''", that is categorical with respect to truth:

:''If "A true" under no assumptions of the form "B true", then "A valid".''

This categorical judgment is internalised as a unary connective ◻''A'' (read "''necessarily A''") with its own introduction and elimination rules. Note that the premise "''A valid''" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ ''A true''", where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment "''A true''"; validity is not needed here since "Ω ⊢ ''A valid''" is by definition the same as "Ω;⋅ ⊢ ''A true''". The modal hypotheses have their own version of the hypothesis rule and substitution theorem.

; Modal substitution theorem : ''If'' Ω;⋅ ⊢ π1 : ''A true'' ''and'' Ω, ''u'': (''A valid'') ; Γ ⊢ π2 : ''C true'', ''then'' Ω;Γ ⊢ [π1/''u'']π2 : ''C true''.

This framework of separating judgments into distinct collections of hypotheses, also known as ''multi-zoned'' or ''polyadic'' contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference are needed.

The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing the more flexible techniques of analytic tableaux to be applied, as has been done in the case of labelled deduction. Labels also allow the naming of worlds in Kripke semantics; one influential technique converts frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic. The application of many proof theories, such as Avron and Pottinger's hypersequents and Belnap's display logic, to modal logics such as S5 and B has also been surveyed.
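Returning to Parigot's λμ: under the programs-as-proofs reading, the classical principle that becomes derivable is visible in the type of the call/cc control operator, which, read as a proposition, is Peirce's law ((A ⊃ B) ⊃ A) ⊃ A. A hedged Haskell sketch using the continuation monad:

    import Control.Monad.Cont (Cont, callCC)

    -- The type of callCC, specialised to the Cont monad, is Peirce's law
    -- ((A ⊃ B) ⊃ A) ⊃ A: capturing the current continuation plays the role
    -- of Parigot's μ-binder, or of catch/throw in LISP.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC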


Comparison with other foundational approaches


Sequent calculus

The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search. To address this fact, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book ''Introduction to Metamathematics'', gave the first formulation of the sequent calculus in the modern style.

In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as ''right rules'' in the sequent calculus, and are structurally very similar. The elimination rules, on the other hand, turn into ''left rules'' in the sequent calculus. To give an example, consider disjunction: the proposition ''A ∨ B'', which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L. Thus, left rules can be seen as a sort of inverted elimination rule. In the sequent calculus, the left and right rules are performed in lock-step until one reaches the ''initial sequent'', which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a ''transposition'' or a ''handshake'' of a left and a right proposition.

The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument.

; Soundness of ⇒ wrt. ⊢ : ''If'' Γ ⇒ ''A'', ''then'' Γ ⊢ ''A''.
; Completeness of ⇒ wrt. ⊢ : ''If'' Γ ⊢ ''A'', ''then'' Γ ⇒ ''A''.

It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remains true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider the conjunctions. The right rule is virtually identical to the introduction rule; the left rule, however, performs some additional substitutions that are not performed in the corresponding elimination rules. The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the ''β-normal η-long'' form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the ''intercalation calculus'' (first described by John Byrnes), which can be used to formally define the notion of a ''normal form'' for natural deduction.

The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as ''cut'' in the sequent calculus.

; Cut (substitution) : ''If'' Γ ⇒ π1 : ''A'' ''and'' Γ, ''u'':''A'' ⇒ π2 : ''C'', ''then'' Γ ⇒ [π1/''u'']π2 : ''C''.

In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as ''cut elimination''. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is ''not'' provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E, which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the ''global consistency'' theorem: "⋅ ⊢ ⊥ ''true''" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion! Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties.


See also


Notes


References

* Gentzen, Gerhard (1934–35). ''Untersuchungen über das logische Schließen''. (English translation ''Investigations into Logical Deduction'' in M. E. Szabo, ''The Collected Works of Gerhard Gentzen''. North-Holland Publishing Company, 1969.)
* Girard, Jean-Yves (1989). ''Proofs and Types''. Cambridge University Press. Translated and with appendices by Paul Taylor and Yves Lafont.
* Jaśkowski, Stanisław (1934). ''On the Rules of Suppositions in Formal Logic''. Reprinted in ''Polish logic 1920–39'', ed. Storrs McCall.
* Martin-Löf, Per (1983). ''On the Meanings of the Logical Constants and the Justifications of the Logical Laws''. Lecture notes to a short course at Università degli Studi di Siena, April 1983.
* Prawitz, Dag (1965). ''Natural Deduction: A Proof-Theoretical Study''. Stockholm.


External links

* Laboreo, Daniel Clemente, ''Introduction to natural deduction''.
* ''Domino On Acid'': natural deduction visualized as a game of dominoes.
* Pelletier, Jeff, ''A History of Natural Deduction and Elementary Logic Textbooks''.
* Levy, Michel, ''A Propositional Prover''.