Cut-elimination Theorem
The cut-elimination theorem (or Gentzen's ''Hauptsatz'') is the central result establishing the significance of the sequent calculus. It was originally proved by Gerhard Gentzen in part I of his landmark 1935 paper "Investigations into Logical Deduction" for the systems LJ and LK, formalising intuitionistic and classical logic respectively. The cut-elimination theorem states that any judgement that possesses a proof in the sequent calculus making use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule. The natural deduction version of cut-elimination, known as the ''normalization theorem'', was first proved for a variety of logics by Dag Prawitz in 1965 (a similar but less general proof was given the same year by Andrés Raggio).

The cut rule
A sequent is a logical expression relating multiple formulas, in the form A_1, A_2, A_3, \ldots \vdash B_1, B_2, B_3, \ldots, which is to be read as "If all of A_1, A_2, A_3, \ldots hold, then at least one of B_1, B_2, B_3, \ldots must hold", or (as Gentzen glossed): "If (A_1 ...
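In one standard formulation (supplied here for reference, not quoted from the truncated excerpt), the cut rule lets a formula A proved on the right of one sequent be consumed on the left of another:

    \frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi} \ (\mathrm{Cut})

Cut-elimination says that every derivation using this rule can be transformed into one that never does, although the resulting cut-free proof may be much larger.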

Sequent Calculus
In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems of a first-order theory rather than conditional tautologies. Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
* Hilbert style. Every line is an unconditional tautology (or theorem).
* Gentzen s ...
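For illustration (a standard textbook rule, added here rather than taken from the truncated excerpt), the right rule for conjunction shows how sequent proofs combine two conditional claims into one:

    \frac{\Gamma \vdash \Delta, A \qquad \Gamma \vdash \Delta, B}{\Gamma \vdash \Delta, A \land B} \ (\land R)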


Analytic Proof
In mathematics, an analytic proof is a proof of a theorem in analysis that only makes use of methods from analysis, and that does not predominantly make use of algebraic or geometrical methods. The term was first used by Bernard Bolzano, who first provided a non-analytic proof of his intermediate value theorem and then, several years later, provided a proof of the theorem that was free from intuitions concerning lines crossing each other at a point, and so he felt happy calling it analytic (Bolzano 1817). Bolzano's philosophical work encouraged a more abstract reading of when a demonstration could be regarded as analytic, where a proof is analytic if it does not go beyond its subject matter (Sebastik 2007). In proof theory, an analytic proof has come to mean a proof whose structure is simple in a special way, due to conditions on the kind of inferences that ensure none of them go beyond what is contained in the assumptions ...


Gentzen's Consistency Proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε₀", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial.

Gentzen's theorem
Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but no ...
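For reference (standard background, not part of the excerpt), ε₀ is the least ordinal closed under base-ω exponentiation:

    \varepsilon_0 = \sup \{ \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \ldots \}, \qquad \omega^{\varepsilon_0} = \varepsilon_0

Peano arithmetic proves transfinite induction up to each ordinal below ε₀ but not up to ε₀ itself, which is why Gentzen's proof does not conflict with Gödel's second incompleteness theorem.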


Deduction Theorem
In mathematical logic, a deduction theorem is a metatheorem that justifies doing conditional proofs from a hypothesis in systems that do not explicitly axiomatize that hypothesis, i.e. to prove an implication A \to B, it is sufficient to assume A as a hypothesis and then proceed to derive B. Deduction theorems exist for both propositional logic and first-order logic. The deduction theorem is an important tool in Hilbert-style deduction systems because it permits one to write more comprehensible and usually much shorter proofs than would be possible without it. In certain other formal proof systems the same convenience is provided by an explicit inference rule; for example, natural deduction calls it implication introduction. In more detail, the propositional logic deduction theorem states that if a formula B is deducible from a set of assumptions \Delta \cup \{ A \}, then the implication A \to B is deducible from \Delta; in symbols, \Delta \cup \{ A \} \vdash B implies \Delta \vdash A \to B ...
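A small worked instance (our own illustration): starting from the trivial deduction A, B \vdash A \land B, two applications of the deduction theorem discharge the hypotheses one at a time:

    A, B \vdash A \land B
    A \vdash B \to (A \land B)
    \vdash A \to (B \to (A \land B))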




Normal Form (abstract Rewriting)
In abstract rewriting, an object is in normal form if it cannot be rewritten any further, i.e. it is irreducible. Depending on the rewriting system, an object may rewrite to several normal forms or none at all. Many properties of rewriting systems relate to normal forms.

Definitions
Stated formally, if (''A'',→) is an abstract rewriting system, ''x''∈''A'' is in normal form if no ''y''∈''A'' exists such that ''x''→''y'', i.e. ''x'' is an irreducible term. An object ''a'' is weakly normalizing if there exists at least one particular sequence of rewrites starting from ''a'' that eventually yields a normal form. A rewriting system has the weak normalization property or is ''(weakly) normalizing'' (WN) if every object is weakly normalizing. An object ''a'' is strongly normalizing if every sequence of rewrites starting from ''a'' eventually terminates with a normal form. A rewriting system is ''strongly normalizing'', ''terminating'', ''noetherian'', or has the (strong) norma ...
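Since the definitions are operational, a tiny sketch may help. The following Haskell fragment (our own toy example; the single rule "ba" → "ab" and all names are invented for illustration) represents one rewrite step as a partial function, tests for normal forms, and computes a normal form by iteration:

    import Data.Maybe (isNothing)

    -- One rewrite step over the alphabet {'a','b'}, using the single rule
    -- "ba" -> "ab", applied at the leftmost possible position. Returns
    -- Nothing when no rule applies, i.e. when the string is irreducible.
    step :: String -> Maybe String
    step ('b':'a':rest) = Just ('a' : 'b' : rest)
    step (c:rest)       = fmap (c:) (step rest)
    step []             = Nothing

    -- x is in normal form iff no y exists with x -> y.
    isNormalForm :: String -> Bool
    isNormalForm = isNothing . step

    -- Compute a normal form by iterating the step function. This toy system
    -- is strongly normalizing: each step strictly decreases the number of
    -- inversions, so every rewrite sequence terminates.
    normalize :: String -> String
    normalize x = maybe x normalize (step x)

    main :: IO ()
    main = print (normalize "babba")  -- prints "aabbb"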


Curry–Howard Correspondence
In programming language theory and proof theory, the Curry–Howard correspondence is the direct relationship between computer programs and mathematical proofs. It is also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation. It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and the logician William Alvin Howard. It is the link between logic and computation that is usually attributed to Curry and Howard, although the idea is related to the operational interpretation of intuitionistic logic given in various formulations by L. E. J. Brouwer, Arend Heyting and Andrey Kolmogorov (see Brouwer–Heyting–Kolmogorov interpretation) and Stephen Kleene (see Realizability). The relationship has been extended to include category theory as the three-way Curry–Howard–Lambe ...
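A minimal Haskell illustration of the correspondence (our own examples, not drawn from the article): each type below is read as a proposition, and each total program inhabiting it is a proof of that proposition.

    -- Propositions as types: implication is the function arrow,
    -- conjunction is the pair type.

    -- Proof of A -> (B -> A)  (the K axiom)
    k :: a -> b -> a
    k x _ = x

    -- Proof of (A -> B) -> (B -> C) -> (A -> C): transitivity of
    -- implication is, computationally, function composition.
    trans :: (a -> b) -> (b -> c) -> (a -> c)
    trans f g = g . f

    -- Proof of (A and B) -> A  (projection)
    fstProof :: (a, b) -> a
    fstProof (x, _) = x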


Typed Lambda Calculus
A typed lambda calculus is a typed formalism that uses the lambda symbol (\lambda) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see kinds below). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus, but from another point of view, they can also be considered the more fundamental theory and ''untyped lambda calculus'' a special case with only one type. Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here, typability usually captures desirable properties of the program (e.g., the program will not cause a memory acce ...
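As a sketch of how types are assigned to lambda terms, here is a minimal type checker for the simply typed lambda calculus in Haskell (the datatypes and names are our own, not a fragment of any standard library):

    -- Types: base types and function arrows.
    data Ty = Base String | Arrow Ty Ty
      deriving (Eq, Show)

    -- Terms: variables, typed abstractions \x:T. body, and applications.
    data Term
      = Var String
      | Lam String Ty Term
      | App Term Term
      deriving Show

    type Ctx = [(String, Ty)]

    -- Just the type of the term in the given context, or Nothing if ill-typed.
    typeOf :: Ctx -> Term -> Maybe Ty
    typeOf ctx (Var x)     = lookup x ctx
    typeOf ctx (Lam x t b) = Arrow t <$> typeOf ((x, t) : ctx) b
    typeOf ctx (App f a)   =
      case (typeOf ctx f, typeOf ctx a) of
        (Just (Arrow dom cod), Just argTy) | argTy == dom -> Just cod
        _                                                 -> Nothing

    -- Example: the identity \x:o. x has type o -> o.
    main :: IO ()
    main = print (typeOf [] (Lam "x" (Base "o") (Var "x")))
    -- Just (Arrow (Base "o") (Base "o"))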



Prolog
Prolog is a logic programming language that has its origins in artificial intelligence, automated theorem proving, and computational linguistics. Prolog has its roots in first-order logic, a formal logic. Unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program is a set of facts and rules, which define relations. A computation is initiated by running a ''query'' over the program. Prolog was one of the first logic programming languages and remains the most popular such language today, with several free and commercial implementations available. The language has been used for theorem proving, expert systems, term rewriting, type systems, and automated planning, as well as its original intended field of use, natural language processing (see also Watson (computer)). Prolog is a Turing-complete, general-purpose programming language, which is well-suited for inte ...
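As a language-neutral sketch of the "facts + rules, computation by query" model (written in Haskell, the example language used throughout this page; an actual Prolog program would state the clauses directly, e.g. ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)., and answer queries by SLD resolution with unification and backtracking):

    -- Facts: the parent relation (a toy database of our own invention).
    parent :: [(String, String)]
    parent = [("tom", "bob"), ("bob", "ann"), ("ann", "pat")]

    -- Rule: ancestor is the transitive closure of parent.
    ancestor :: String -> String -> Bool
    ancestor x z =
      (x, z) `elem` parent ||
      any (\(a, b) -> a == x && ancestor b z) parent

    -- Query: ?- ancestor(tom, pat).
    main :: IO ()
    main = print (ancestor "tom" "pat")  -- True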


First-order Resolution
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem. The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed ...
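For the propositional case, the resolution rule itself is easy to state as code. A minimal Haskell sketch (our own representation of clauses as lists of signed literals):

    import Data.List (nub)

    -- A literal is a propositional variable with a sign; a clause is a
    -- disjunction of literals, represented as a list.
    data Lit = Pos String | Neg String deriving (Eq, Show)

    neg :: Lit -> Lit
    neg (Pos p) = Neg p
    neg (Neg p) = Pos p

    -- All resolvents of two clauses: for each literal l in c1 whose
    -- complement occurs in c2, drop the complementary pair and union
    -- the remaining literals.
    resolvents :: [Lit] -> [Lit] -> [[Lit]]
    resolvents c1 c2 =
      [ nub (filter (/= l) c1 ++ filter (/= neg l) c2)
      | l <- c1, neg l `elem` c2 ]

    -- Resolving {P, Q} with {~P, R} on P yields {Q, R}; resolving {P}
    -- with {~P} yields the empty clause, i.e. a contradiction.
    main :: IO ()
    main = do
      print (resolvents [Pos "P", Pos "Q"] [Neg "P", Pos "R"])  -- [[Pos "Q",Pos "R"]]
      print (resolvents [Pos "P"] [Neg "P"])                    -- [[]]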




Craig Interpolation
In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem.

Example
In propositional logic, let

    \varphi = \lnot(P \land Q) \to (\lnot R \land Q)
    \psi = (S \to P) \lor (S \to \lnot R) .

Then \varphi tautologically implies ...
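The excerpt breaks off mid-example; for completeness, a routine check (our own computation) shows that ρ = P ∨ ¬R works as an interpolant: it uses only the variables P and R shared by φ and ψ, φ implies it, and it implies ψ:

    \varphi \equiv Q \land (P \lor \lnot R) \models \underbrace{P \lor \lnot R}_{\rho} \models \lnot S \lor P \lor \lnot R \equiv \psi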


Proof-theoretic Semantics
Proof-theoretic semantics is an approach to the semantics of logic that attempts to locate the meaning of propositions and logical connectives not in terms of interpretations, as in Tarskian approaches to semantics, but in the role that the proposition or logical connective plays within a system of inference.

Overview
Gerhard Gentzen is the founder of proof-theoretic semantics, providing the formal basis for it in his account of cut-elimination for the sequent calculus, and some provocative philosophical remarks about locating the meaning of logical connectives in their introduction rules within natural deduction. The history of proof-theoretic semantics since then has been devoted to exploring the consequences of these ideas. Dag Prawitz extended Gentzen's notion of analytic proof to natural deduction, and suggested that the value of a proof in natural deduction may be understood as its normal form. This idea lies at the basis of the Curry–Howard isomorphism, and ...