Model (model theory)


In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures with no relation symbols. Model theory has a different scope that encompasses more arbitrary theories, including foundational structures such as models of set theory. From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic. For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a ''semantic model'' when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as interpretations. In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.

# Definition

Formally, a structure can be defined as a triple $\mathcal A=\left(A, \sigma, I\right)$ consisting of a domain ''A'', a signature σ, and an interpretation function ''I'' that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ one can refer to it as a σ-structure.

## Domain

The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), or its universe (especially in model theory). In classical first-order logic, the definition of a structure prohibits the empty domain. Sometimes the notation $\operatorname{dom}\left(\mathcal A\right)$ or $|\mathcal A|$ is used for the domain of $\mathcal A$, but often no notational distinction is made between a structure and its domain (i.e. the same symbol $\mathcal A$ refers both to the structure and its domain).

## Signature

The signature $\sigma = \left(S, \operatorname{ar}\right)$ of a structure consists of a set $S$ of function symbols and relation symbols along with a function $\operatorname{ar}\colon S \to \mathbb N_0$ that ascribes to each symbol ''s'' a natural number $n=\operatorname{ar}\left(s\right)$, which is called the arity of ''s'' because it is the arity of the interpretation of ''s''. Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field.

## Interpretation function

The interpretation function ''I'' of $\mathcal A$ assigns functions and relations to the symbols of the signature. Each function symbol ''f'' of arity ''n'' is assigned an ''n''-ary function $f^{\mathcal A}=I\left(f\right)$ on the domain. Each relation symbol ''R'' of arity ''n'' is assigned an ''n''-ary relation $R^{\mathcal A}=I\left(R\right)\subseteq A^n$ on the domain. A nullary function symbol ''c'' is called a constant symbol, because its interpretation ''I(c)'' can be identified with a constant element of the domain. When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol ''s'' and its interpretation ''I(s)''. For example, if ''f'' is a binary function symbol of $\mathcal A$, one simply writes $f:\mathcal A^2\rightarrow\mathcal A$ rather than $f^{\mathcal A}:|\mathcal A|^2\rightarrow|\mathcal A|$.
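The triple (domain, signature, interpretation) can be mirrored directly in code. The sketch below is illustrative only; the class and attribute names (`Structure`, `arity`, `interp`) are our own, not standard.

```python
# A minimal sketch of a sigma-structure as a triple (domain, signature, interpretation).
# All names here are illustrative, not standard terminology.

class Structure:
    def __init__(self, domain, arity, interp):
        self.domain = set(domain)   # the underlying set A
        self.arity = arity          # signature: symbol -> arity (a natural number)
        self.interp = interp        # symbol -> n-ary function, or n-ary relation as a set of tuples

# The structure (Z mod 4, +) with a constant symbol 0:
Z4 = Structure(
    domain={0, 1, 2, 3},
    arity={"+": 2, "0": 0},
    interp={"+": lambda x, y: (x + y) % 4,
            "0": lambda: 0},        # a nullary function is a constant symbol
)

print(Z4.interp["+"](3, 2))  # -> 1
print(Z4.interp["0"]())      # -> 0
```

Note that nothing in the class enforces any axioms; like a structure, it only fixes the data.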

## Examples

The standard signature σ''f'' for fields consists of two binary function symbols + and ×, where additional symbols can be derived, such as a unary function symbol − (uniquely determined by +) and the two constant symbols 0 and 1 (uniquely determined by + and × respectively). Thus a structure (algebra) for this signature consists of a set of elements ''A'' together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers ''Q'', the real numbers ''R'' and the complex numbers ''C'', like any other field, can be regarded as σ-structures in an obvious way:
::$\mathcal Q = \left(Q, \sigma_f, I_{\mathcal Q}\right)$
::$\mathcal R = \left(R, \sigma_f, I_{\mathcal R}\right)$
::$\mathcal C = \left(C, \sigma_f, I_{\mathcal C}\right)$
In all three cases we have the standard signature given by
::$\sigma_f = \left(S_f,\operatorname{ar}_f\right)$ with
:::$S_f = \{+,\times,-,0,1\}$,   $\operatorname{ar}_f\left(+\right) = \operatorname{ar}_f\left(\times\right) = 2, \operatorname{ar}_f\left(-\right) = 1, \operatorname{ar}_f\left(0\right) = \operatorname{ar}_f\left(1\right) = 0$.
Interpretation functions:
::$I_{\mathcal Q}\left(+\right)\colon Q\times Q\to Q$ is addition of rational numbers,
::$I_{\mathcal Q}\left(\times\right)\colon Q\times Q\to Q$ is multiplication of rational numbers,
::$I_{\mathcal Q}\left(-\right)\colon Q\to Q$ is the function that takes each rational number ''x'' to −''x'', and
::$I_{\mathcal Q}\left(0\right)\in Q$ is the number 0, and
::$I_{\mathcal Q}\left(1\right)\in Q$ is the number 1;
and $I_{\mathcal R}$ and $I_{\mathcal C}$ are similarly defined. (Note: 0, 1 and − on the left refer to symbols of $S_f$; 0, 1, 2 and − on the right refer to natural numbers of $\mathbb N_0$ and to the unary operation ''minus'' in $Q$.) But the ring ''Z'' of integers, which is not a field, is also a σ''f''-structure in the same way. In fact, there is no requirement that ''any'' of the field axioms hold in a σ''f''-structure.
A signature for ordered fields needs an additional binary relation such as < or ≤, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word. The ordinary signature for set theory includes a single binary relation ∈. A structure for this signature consists of a set of elements and an interpretation of the ∈ relation as a binary relation on these elements.
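The point that a σ''f''-structure need not satisfy any field axiom can be checked concretely. The dict-based encoding below is a sketch of our own, not a standard representation.

```python
# Sketch: the integers Z interpreted as a sigma_f-structure.  No field axiom is
# required to hold; in particular most integers have no multiplicative inverse,
# yet Z is still a perfectly good structure for this signature.

sigma_f = {"+": 2, "*": 2, "-": 1, "0": 0, "1": 0}  # symbol -> arity

I_Z = {
    "+": lambda x, y: x + y,
    "*": lambda x, y: x * y,
    "-": lambda x: -x,
    "0": lambda: 0,
    "1": lambda: 1,
}

# The element 2 has no multiplicative inverse (checked over a finite window):
has_inverse = any(I_Z["*"](2, y) == I_Z["1"]() for y in range(-10, 11))
print(has_inverse)  # -> False
```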

# Induced substructures and closed subsets

$\mathcal A$ is called an (induced) substructure of $\mathcal B$ if
* $\mathcal A$ and $\mathcal B$ have the same signature $\sigma\left(\mathcal A\right)=\sigma\left(\mathcal B\right)$;
* the domain of $\mathcal A$ is contained in the domain of $\mathcal B$: $|\mathcal A| \subseteq |\mathcal B|$; and
* the interpretations of all function and relation symbols agree on $|\mathcal A|$.
The usual notation for this relation is $\mathcal A\subseteq\mathcal B$. A subset $B\subseteq|\mathcal A|$ of the domain of a structure $\mathcal A$ is called closed if it is closed under the functions of $\mathcal A$, i.e. if the following condition is satisfied: for every natural number ''n'', every ''n''-ary function symbol ''f'' (in the signature of $\mathcal A$) and all elements $b_1,b_2,\dots,b_n\in B$, the result of applying ''f'' to the ''n''-tuple $b_1b_2\dots b_n$ is again an element of ''B'': $f\left(b_1,b_2,\dots,b_n\right)\in B$. For every subset $B\subseteq|\mathcal A|$ there is a smallest closed subset of $|\mathcal A|$ that contains ''B''. It is called the closed subset generated by ''B'', or the hull of ''B'', and denoted by $\langle B\rangle$ or $\langle B\rangle_{\mathcal A}$. The operator $\langle\,\rangle$ is a finitary closure operator on the set of subsets of $|\mathcal A|$. If $\mathcal A=\left(A,\sigma,I\right)$ and $B\subseteq A$ is a closed subset, then $\left(B,\sigma,I'\right)$ is an induced substructure of $\mathcal A$, where $I'$ assigns to every symbol of σ the restriction to ''B'' of its interpretation in $\mathcal A$. Conversely, the domain of an induced substructure is a closed subset. The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union.
Universal algebra studies the lattice of substructures of a structure in detail.
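For a finite structure, the hull $\langle B\rangle$ can be computed by iterating the functions until nothing new appears. The sketch below assumes functions are given as (arity, callable) pairs; the names are illustrative.

```python
# Sketch: computing the closed subset generated by B (the hull) by fixpoint iteration.
from itertools import product

def hull(B, functions):
    """Smallest set containing B and closed under the given finitary functions.
    `functions` maps each function symbol to a pair (arity, callable)."""
    closed = set(B)
    changed = True
    while changed:
        changed = False
        for arity, fn in functions.values():
            for args in product(closed, repeat=arity):
                value = fn(*args)
                if value not in closed:
                    closed.add(value)
                    changed = True
    return closed

# In (Z mod 6, +), the hull of {2} is the subgroup {0, 2, 4}:
print(sorted(hull({2}, {"+": (2, lambda x, y: (x + y) % 6)})))  # -> [0, 2, 4]
```

This iteration is exactly why $\langle\,\rangle$ is a finitary closure operator: each new element is forced by applying some function to finitely many elements already present.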

## Examples

Let σ''f'' = {+, ×, −, 0, 1} again be the standard signature for fields. When regarded as σ''f''-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms. The set of integers gives an even smaller substructure of the real numbers, which is not a field. Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield. The most obvious way to define a graph is as a structure with a signature σ consisting of a single binary relation symbol ''E''. The vertices of the graph form the domain of the structure, and for two vertices ''a'' and ''b'', $\left(a,b\right)\in E^{\mathcal G}$ means that ''a'' and ''b'' are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let ''G'' be a graph consisting of two vertices connected by an edge, and let ''H'' be the graph consisting of the same vertices but no edges. ''H'' is a subgraph of ''G'', but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs.
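The ''G''/''H'' example can be checked mechanically. Below, a graph-as-structure is encoded as a pair (vertices, edge relation); the encoding and function names are our own.

```python
# Sketch: subgraph vs. induced substructure for the graph signature {E}.

G = ({"a", "b"}, {("a", "b"), ("b", "a")})  # two vertices joined by an edge
H = ({"a", "b"}, set())                     # the same vertices, no edges

def is_subgraph(sub, sup):
    """Subgraph: vertex set and edge set are subsets."""
    return sub[0] <= sup[0] and sub[1] <= sup[1]

def is_induced_substructure(sub, sup):
    """Induced substructure: the edge relation of `sub` must agree exactly with
    the restriction of `sup`'s edge relation to `sub`'s vertices."""
    induced = {(u, v) for (u, v) in sup[1] if u in sub[0] and v in sub[0]}
    return sub[0] <= sup[0] and sub[1] == induced

print(is_subgraph(H, G))              # -> True
print(is_induced_substructure(H, G))  # -> False
```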

# Homomorphisms and embeddings

## Homomorphisms

Given two structures $\mathcal A$ and $\mathcal B$ of the same signature σ, a (σ-)homomorphism from $\mathcal A$ to $\mathcal B$ is a map $h:|\mathcal A|\rightarrow|\mathcal B|$ that preserves the functions and relations. More precisely:
* For every ''n''-ary function symbol ''f'' of σ and any elements $a_1,a_2,\dots,a_n\in|\mathcal A|$, the following equation holds:
::$h\left(f\left(a_1,a_2,\dots,a_n\right)\right)=f\left(h\left(a_1\right),h\left(a_2\right),\dots,h\left(a_n\right)\right)$.
* For every ''n''-ary relation symbol ''R'' of σ and any elements $a_1,a_2,\dots,a_n\in|\mathcal A|$, the following implication holds:
::$\left(a_1,a_2,\dots,a_n\right)\in R^{\mathcal A} \implies \left(h\left(a_1\right),h\left(a_2\right),\dots,h\left(a_n\right)\right)\in R^{\mathcal B}$.
The notation for a homomorphism ''h'' from $\mathcal A$ to $\mathcal B$ is $h: \mathcal A\rightarrow\mathcal B$. For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms. A homomorphism $h: \mathcal A\rightarrow\mathcal B$ is sometimes called strong if for every ''n''-ary relation symbol ''R'' and any elements $b_1,b_2,\dots,b_n\in|\mathcal B|$ such that $\left(b_1,b_2,\dots,b_n\right)\in R^{\mathcal B}$, there are $a_1,a_2,\dots,a_n\in|\mathcal A|$ such that $\left(a_1,a_2,\dots,a_n\right)\in R^{\mathcal A}$ and $b_1=h\left(a_1\right),\,b_2=h\left(a_2\right),\,\dots,\,b_n=h\left(a_n\right)$. The strong homomorphisms give rise to a subcategory of σ-Hom.
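On finite structures the two homomorphism conditions can be verified exhaustively. The encoding below (structures as (domain, functions, relations) triples) is a sketch with illustrative names.

```python
# Sketch: brute-force check of the homomorphism conditions on finite structures.
from itertools import product

def is_homomorphism(h, A, B):
    """h: dict from A's domain to B's domain.  A, B: (domain, functions, relations),
    where functions maps symbol -> (arity, callable) and relations maps
    symbol -> set of tuples."""
    dom_A, funcs_A, rels_A = A
    _, funcs_B, rels_B = B
    # Functions: h(f^A(a1,...,an)) must equal f^B(h(a1),...,h(an)).
    for name, (arity, fA) in funcs_A.items():
        _, fB = funcs_B[name]
        for args in product(dom_A, repeat=arity):
            if h[fA(*args)] != fB(*(h[a] for a in args)):
                return False
    # Relations: R^A(a1,...,an) must imply R^B(h(a1),...,h(an)).
    for name, RA in rels_A.items():
        for tup in RA:
            if tuple(h[a] for a in tup) not in rels_B[name]:
                return False
    return True

# Reduction mod 2 is a homomorphism from (Z mod 4, +) to (Z mod 2, +):
Z4 = ({0, 1, 2, 3}, {"+": (2, lambda x, y: (x + y) % 4)}, {})
Z2 = ({0, 1},       {"+": (2, lambda x, y: (x + y) % 2)}, {})
h = {a: a % 2 for a in range(4)}
print(is_homomorphism(h, Z4, Z2))  # -> True
```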

## Embeddings

A (σ-)homomorphism $h:\mathcal A\rightarrow\mathcal B$ is called a (σ-)embedding if it is one-to-one (injective) and
* for every ''n''-ary relation symbol ''R'' of σ and any elements $a_1,a_2,\dots,a_n$, the following equivalence holds:
::$\left(a_1,a_2,\dots,a_n\right)\in R^{\mathcal A} \iff \left(h\left(a_1\right),h\left(a_2\right),\dots,h\left(a_n\right)\right)\in R^{\mathcal B}$.
Thus an embedding is the same thing as a strong homomorphism which is one-to-one. The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom. Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom.

## Example

As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graphs. In the example of the previous section, even though the subgraph ''H'' of ''G'' is not induced, the identity map id: ''H'' → ''G'' is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore ''H'' is a subobject of ''G'' which is not an induced substructure.

## Homomorphism problem

The following problem is known as the ''homomorphism problem'':
:Given two finite structures $\mathcal A$ and $\mathcal B$ of a finite relational signature, find a homomorphism $h:\mathcal A\rightarrow\mathcal B$ or show that no such homomorphism exists.
Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. Therefore, the complexity of CSP can be studied using the methods of finite model theory. Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the structure representing the query to the relational model is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem.
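A naive solver for the homomorphism problem simply tries every map between the domains. This is exponential and purely illustrative (real CSP solvers use propagation and search); the encoding and names are our own.

```python
# Sketch: brute-force homomorphism search for finite relational structures.
from itertools import product

def find_homomorphism(A, B):
    """A, B: (domain, relations), relations given as name -> set of tuples.
    Returns a homomorphism as a dict, or None if none exists."""
    dom_A, rels_A = A
    dom_B, rels_B = B
    dom_A = list(dom_A)
    for image in product(dom_B, repeat=len(dom_A)):
        h = dict(zip(dom_A, image))
        if all(tuple(h[x] for x in tup) in rels_B[name]
               for name, tuples in rels_A.items() for tup in tuples):
            return h
    return None

# A triangle maps homomorphically onto a looped vertex, but not onto a loopless
# edge (graph 2-colorability, a classic CSP, fails for an odd cycle):
triangle = ({0, 1, 2}, {"E": {(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)}})
loop     = ({"v"},      {"E": {("v", "v")}})
edge     = ({"x", "y"}, {"E": {("x", "y"), ("y", "x")}})
print(find_homomorphism(triangle, loop) is not None)  # -> True
print(find_homomorphism(triangle, edge) is None)      # -> True
```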

# Structures and first-order logic

Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer.

## Satisfaction relation

Each first-order structure $\mathcal M = \left(M, \sigma, I\right)$ has a satisfaction relation $\mathcal M \vDash \phi$ defined for all formulas $\phi$ in the language consisting of the language of $\mathcal M$ together with a constant symbol for each element of $M$, which is interpreted as that element. This relation is defined inductively using Tarski's T-schema. A structure $\mathcal M$ is said to be a model of a theory ''T'' if the language of $\mathcal M$ is the same as the language of ''T'' and every sentence in ''T'' is satisfied by $\mathcal M$. Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms.
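The inductive clauses of the T-schema can be spelled out as a recursive evaluator over a finite structure. The formula encoding below (nested tuples) is entirely our own choice, made for the sketch.

```python
# Sketch: Tarski-style inductive satisfaction for a tiny fragment of first-order
# logic over a finite relational structure M = (domain, relations).
# Formulas: ("rel", name, vars), ("not", phi), ("and", phi, psi),
#           ("exists", var, phi), ("forall", var, phi).

def sat(M, phi, env=None):
    domain, rels = M
    env = env or {}          # variable assignment
    op = phi[0]
    if op == "rel":          # atomic case: look up the tuple in the relation
        _, name, vars_ = phi
        return tuple(env[v] for v in vars_) in rels[name]
    if op == "not":
        return not sat(M, phi[1], env)
    if op == "and":
        return sat(M, phi[1], env) and sat(M, phi[2], env)
    if op == "exists":       # quantifiers range over the domain
        _, v, body = phi
        return any(sat(M, body, {**env, v: a}) for a in domain)
    if op == "forall":
        _, v, body = phi
        return all(sat(M, body, {**env, v: a}) for a in domain)
    raise ValueError(op)

# The 2-cycle graph satisfies "every vertex has a neighbour":
M = ({0, 1}, {"E": {(0, 1), (1, 0)}})
phi = ("forall", "x", ("exists", "y", ("rel", "E", ("x", "y"))))
print(sat(M, phi))  # -> True
```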

## Definable relations

An ''n''-ary relation ''R'' on the universe $M$ of $\mathcal M$ is said to be definable (or explicitly definable, or $\emptyset$-definable) if there is a formula φ(''x''1,...,''x''''n'') such that
:$R = \left\{\left(a_1,\ldots,a_n\right)\in M^n : \mathcal M \vDash \varphi\left(a_1,\ldots,a_n\right)\right\}.$
In other words, ''R'' is definable if and only if there is a formula φ such that
:$\left(a_1,\ldots,a_n \right) \in R \Leftrightarrow \mathcal M \vDash \varphi\left(a_1,\ldots,a_n\right)$
is correct. An important special case is the definability of specific elements. An element ''m'' of $M$ is definable in $\mathcal M$ if and only if there is a formula φ(''x'') such that
:$\mathcal M\vDash \forall x \left( x = m \leftrightarrow \varphi\left(x\right)\right).$
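On a finite structure, the set a formula defines can simply be enumerated. Here the formula is hard-coded as a Python predicate for illustration (no general parser is attempted).

```python
# Sketch: the subset of a finite domain defined by a unary formula phi(x),
# represented for illustration as a Python predicate.

def definable_set(domain, phi):
    """All elements a of the domain with M |= phi(a)."""
    return {a for a in domain if phi(a)}

# In (Z mod 5, +), the formula  x + x = x  defines exactly the identity element:
domain = range(5)
phi = lambda a: (a + a) % 5 == a
print(definable_set(domain, phi))  # -> {0}
```

So 0 is a definable element of this structure: it is the unique element satisfying φ.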

### Definability with parameters

A relation ''R'' is said to be definable with parameters (or $|\mathcal M|$-definable) if there is a formula φ with parameters from $\mathcal M$ such that ''R'' is definable using φ. Every element of a structure is definable using the element itself as a parameter. Some authors use ''definable'' to mean ''definable without parameters'', while other authors mean ''definable with parameters''. Broadly speaking, the convention that ''definable'' means ''definable without parameters'' is more common amongst set theorists, while the opposite convention is more common amongst model theorists.

### Implicit definability

Recall from above that an ''n''-ary relation ''R'' on the universe $M$ of $\mathcal M$ is explicitly definable if there is a formula φ(''x''1,...,''x''''n'') such that
:$R = \left\{\left(a_1,\ldots,a_n\right)\in M^n : \mathcal M \vDash \varphi\left(a_1,\ldots,a_n\right)\right\}.$
Here the formula φ used to define a relation ''R'' must be over the signature of $\mathcal M$, and so φ may not mention ''R'' itself, since ''R'' is not in the signature of $\mathcal M$. If there is a formula φ in the extended language containing the language of $\mathcal M$ and a new symbol ''R'', and the relation ''R'' is the only relation on $\mathcal M$ such that $\left(\mathcal M, R\right) \vDash \varphi$, then ''R'' is said to be implicitly definable over $\mathcal M$. By Beth's theorem, every implicitly definable relation is explicitly definable.

# Many-sorted structures

Structures as defined above are sometimes called one-sorted structures to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains. Many-sorted signatures also prescribe on which sorts the functions and relations of a many-sorted structure are defined. Therefore, the arities of function symbols or relation symbols must be more complicated objects such as tuples of sorts rather than natural numbers. Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts ''V'' (for vectors) and ''S'' (for scalars) and function symbols for the vector operations, the scalar (field) operations, and scalar multiplication, such as a vector addition of arity (''V'', ''V''; ''V''), a vector zero of arity (''V''), and a scalar multiplication × of arity (''S'', ''V''; ''V''). If ''V'' is a vector space over a field ''F'', the corresponding two-sorted structure $\mathcal V$ consists of the vector domain $|\mathcal V|_V=V$, the scalar domain $|\mathcal V|_S=F$, and the obvious functions, such as the vector zero $0_V^{\mathcal V}=0\in|\mathcal V|_V$, the scalar zero $0_S^{\mathcal V}=0\in|\mathcal V|_S$, or scalar multiplication $\times^{\mathcal V}:|\mathcal V|_S\times|\mathcal V|_V\rightarrow|\mathcal V|_V$. Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly. In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory.
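A two-sorted structure can be sketched with one carrier per sort. Below, the vector sort is GF(2)² and the scalar sort is GF(2); the dict-based encoding and symbol names are our own.

```python
# Sketch: a two-sorted structure for vector spaces over GF(2).
# Sort V: pairs over GF(2); sort S: the scalars {0, 1}.

V = {(0, 0), (0, 1), (1, 0), (1, 1)}   # vector domain
S = {0, 1}                              # scalar domain

ops = {
    "+V": lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2),  # arity (V, V; V)
    "0V": lambda: (0, 0),                                        # arity (V)
    "*":  lambda c, v: ((c * v[0]) % 2, (c * v[1]) % 2),         # arity (S, V; V)
}

print(ops["+V"]((1, 0), (1, 1)))  # -> (0, 1)
print(ops["*"](0, (1, 1)))        # -> (0, 0)
```

The arities annotated in the comments are tuples of sorts, exactly as the many-sorted signature prescribes.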

# Other generalizations

## Partial algebras

Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g. ∀''x'' ∀''y'' (''x'' + ''y'' = ''y'' + ''x''). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol −1. In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0−1 = 0. (This attempt fails, essentially because with this definition 0 × 0−1 = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity.
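The failure of the ad hoc definition 0⁻¹ = 0 is easy to verify concretely: the identity ''x'' × ''x''⁻¹ = 1 breaks at exactly one point. A quick check over a few sample rationals:

```python
# Sketch: with the ad hoc total inverse inv(0) = 0, the identity x * inv(x) = 1
# fails at 0, so the class of fields is still not axiomatized by equations.
from fractions import Fraction

def inv(x):
    return Fraction(0) if x == 0 else 1 / Fraction(x)

samples = [Fraction(0), Fraction(1), Fraction(2), Fraction(-3, 4)]
violations = [x for x in samples if x * inv(x) != 1]
print(violations)  # -> [Fraction(0, 1)]: the identity fails exactly at 0
```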

## Structures for typed languages

In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type.

## Higher-order languages

There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic. When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many-sorted first-order language.

## Structures that are proper classes

In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class. In Bertrand Russell's ''Principia Mathematica'', structures were also allowed to have a proper class as their domain.

# See also

* Mathematical structure
