Theory Of Functional Connections

The Theory of Functional Connections (TFC) is a mathematical framework for functional interpolation. It provides a method for deriving a functional—a function that operates on another function—which can transform constrained optimization problems into equivalent unconstrained ones. This transformation allows TFC to be applied to a wide range of mathematical problems, including the solution of differential equations. In this context, functional interpolation refers to the construction of functionals that always satisfy specified constraints, regardless of how the internal (or free) function is expressed.


From interpolation to functional interpolation

To provide a general context for the TFC, consider a generic interpolation problem involving n constraints, such as a differential equation subject to boundary conditions, that is, a boundary value problem (BVP). Regardless of the differential equation, these constraints may be consistent or inconsistent. For instance, in a problem over the two-dimensional domain \mathcal{D} = [0, 1] \times [0, 1], the constraints f_1 (x, 0) = 1 + x and f_2 (0, y) = 2 - y are inconsistent, as they yield different values at the shared point (0, 0). If the n constraints are consistent, a function interpolating these constraints can be constructed by selecting n linearly independent support functions, such as
monomials, \{1, x, x^2, \dots, x^{n-1}\}. The chosen set of support functions may or may not be consistent with the given constraints. For instance, the constraints y (-1) = y (+1) = 0 and \dfrac{dy}{dx}\bigg|_{x = 0} = 1 are inconsistent with the support functions \{1, x, x^2\}, as can be easily verified. If the support functions are consistent with the constraints, the interpolation problem can be solved, yielding an interpolant, that is, a function that satisfies all constraints. Choosing a different set of support functions would result in a different interpolant. When an interpolation problem is solved and an initial interpolant is determined, all possible interpolants can, in principle, be generated by performing the interpolation process with every distinct set of linearly independent support functions consistent with the constraints. However, this method is impractical, as the number of possible sets of support functions is infinite. This challenge was addressed through the development of the ''TFC'', an analytical framework for performing functional interpolation introduced by Daniele Mortari at
Texas A&M University. The approach involves constructing a functional f \big(\mathbf{x}, g (\mathbf{x})\big) that satisfies the given constraints for any arbitrary expression of g (\mathbf{x}), referred to as the ''free function''. This functional, known as the ''constrained functional'', provides a complete representation of all possible interpolants. By varying g (\mathbf{x}), it is possible to generate the entire set of interpolants, including those that are discontinuous or only partially defined. Function interpolation produces a single interpolating function, while functional interpolation generates a family of interpolating functions represented through a functional. This functional defines the subspace of functions that inherently satisfy the given constraints, effectively reducing the solution space to the region where solutions to the constrained optimization problem are located. By employing these functionals,
constrained optimization problems can be reformulated as unconstrained problems. This reformulation allows for simpler and more efficient solution methods, often improving accuracy, robustness, and reliability. Within this context, the Theory of Functional Connections (TFC) provides a systematic framework for transforming constrained problems into unconstrained ones, thereby streamlining the solution process. TFC addresses univariate constraints involving points, derivatives, integrals, and any linear combination of these. The theory has also been extended to accommodate infinite and multivariate constraints and has been applied to solving ordinary, partial, and integro-differential equations. The consistency problem, which pertains to constraints, interpolation, and functional interpolation, is comprehensively addressed in the TFC literature, including the consistency challenges associated with boundary conditions that involve shear and mixed derivatives.

The univariate version of TFC can be expressed in one of the following two forms:
: \begin{cases} f \big(x, g (x)\big) = g (x) + \displaystyle\sum_{j=1}^n \eta_j \big(x, g (x)\big) \, s_j (x) \\ f \big(x, g (x)\big) = g (x) + \displaystyle\sum_{j=1}^n \phi_j \big(x, \mathbf{s} (x)\big) \, \rho_j \big(x, g (x)\big), \end{cases}
where n represents the number of linear constraints, g (x) is the free function, and s_j (x) are n user-defined, linearly independent ''support functions''. The terms \eta_j \big(x, g (x)\big) are the ''coefficient functionals'', \phi_j \big(x, \mathbf{s} (x)\big) are ''switching functions'' (which take a value of 1 when evaluated at their respective constraint and 0 at the other constraints), and \rho_j \big(x, g (x)\big) are ''projection functionals'' that express the constraints in terms of the free function.
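For illustration, the first univariate form can be evaluated numerically once the constraints and the support functions are chosen. The following minimal NumPy sketch (not the API of the TFC toolbox) builds the constrained functional for two point constraints, f(0) = 1 and f(1) = 2, with support functions \{1, x\}; the constraint values and the free functions used in the check are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

# Support functions s_j(x) = {1, x} and two point constraints (illustrative values):
# f(0) = 1 and f(1) = 2.
s = [lambda x: np.ones_like(x), lambda x: x]
xc = np.array([0.0, 1.0])   # constraint locations
fc = np.array([1.0, 2.0])   # constraint values

def constrained_functional(x, g):
    """First univariate TFC form: f(x, g) = g(x) + sum_j eta_j * s_j(x)."""
    S = np.column_stack([sj(xc) for sj in s])   # S[k, j] = s_j(x_k)
    eta = np.linalg.solve(S, fc - g(xc))        # coefficient functionals
    return g(x) + sum(e * sj(x) for e, sj in zip(eta, s))

# The constraints are satisfied for any choice of the free function g.
for g in (lambda x: np.zeros_like(x), np.sin, lambda x: np.exp(-x**2)):
    print(constrained_functional(xc, g))        # ~[1. 2.] every time
</syntaxhighlight>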


A rational example

To show how TFC generalizes interpolation, consider the constraints \dot{f} (x_1) = \dot{f}_1 and \dot{f} (x_2) = \dot{f}_2. An interpolating function satisfying these constraints is
: f_a (x) = \dfrac{x (x - 2 x_2)}{2 (x_1 - x_2)} \, \dot{f}_1 + \dfrac{x (x - 2 x_1)}{2 (x_2 - x_1)} \, \dot{f}_2,
as can be easily verified. Because of this interpolation property, the derivative of the function
: \delta (x) = g (x) - \dfrac{x (x - 2 x_2)}{2 (x_1 - x_2)} \, \dot{g} (x_1) - \dfrac{x (x - 2 x_1)}{2 (x_2 - x_1)} \, \dot{g} (x_2)
vanishes at x_1 and x_2 for ''any'' function g (x). Therefore, by adding \delta (x) to f_a (x), a functional is obtained that still satisfies the constraints,
: f \big(x, g (x)\big) = f_a (x) + \delta (x) = \dfrac{x (x - 2 x_2)}{2 (x_1 - x_2)} \, \dot{f}_1 + \dfrac{x (x - 2 x_1)}{2 (x_2 - x_1)} \, \dot{f}_2 + g (x) - \dfrac{x (x - 2 x_2)}{2 (x_1 - x_2)} \, \dot{g} (x_1) - \dfrac{x (x - 2 x_1)}{2 (x_2 - x_1)} \, \dot{g} (x_2),
no matter what g (x) is. Due to this property, this functional is referred to as the ''constrained functional''. The key requirement for the functional f \big(x, g (x)\big) to work as intended is that the terms \dot{g} (x_1) and \dot{g} (x_2) are defined. Once this condition is met, the functional f \big(x, g (x)\big) is free to take on any arbitrary values beyond the specified constraints, thanks to the infinite flexibility provided by g (x). Importantly, this flexibility is not limited to the specific constraints chosen in this example; it applies universally to any set of constraints. This universality illustrates how TFC performs functional interpolation: it constructs a function that satisfies the given constraints while simultaneously allowing complete freedom in behavior elsewhere through the choice of g (x). In essence, this example demonstrates that the constrained functional f \big(x, g (x)\big) captures all possible functions that meet the given constraints, showcasing the power and generality of TFC in handling a wide variety of interpolation problems.
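A quick numerical check of this construction is straightforward. The sketch below uses illustrative values for x_1, x_2, \dot{f}_1, \dot{f}_2 and arbitrary free functions, and approximates the derivative of g by central differences for simplicity; it verifies that the derivative of the constrained functional matches the prescribed values regardless of g.

<syntaxhighlight lang="python">
import numpy as np

# Derivative constraints (illustrative values): f'(x1) = df1 and f'(x2) = df2.
x1, x2, df1, df2 = 0.5, 2.0, -1.0, 3.0

def f(x, g, h=1e-6):
    """Constrained functional f(x, g) = f_a(x) + delta(x) of the example."""
    c1 = lambda t: t * (t - 2*x2) / (2*(x1 - x2))   # multiplies f'(x1) and g'(x1)
    c2 = lambda t: t * (t - 2*x1) / (2*(x2 - x1))   # multiplies f'(x2) and g'(x2)
    dg = lambda t: (g(t + h) - g(t - h)) / (2*h)    # central-difference g'
    return c1(x)*df1 + c2(x)*df2 + g(x) - c1(x)*dg(x1) - c2(x)*dg(x2)

def dfdx(x, g, h=1e-6):
    return (f(x + h, g) - f(x - h, g)) / (2*h)

# The derivative constraints hold no matter which free function g is chosen.
for g in (np.cos, lambda t: t**3, lambda t: np.exp(0.3*t)):
    print(dfdx(x1, g), dfdx(x2, g))   # both close to -1.0 and 3.0 for every g
</syntaxhighlight>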


Applications of TFC

TFC has been extended and employed in various applications, including its use in shear-type and mixed-derivative problems, the analysis of fractional operators, the determination of geodesics for BVPs in curved spaces, and continuation methods. Additionally, TFC has been applied to indirect optimal control, the modeling of stiff chemical kinetics, and the study of epidemiological dynamics. TFC also extends to astrodynamics, where Lambert's problem is efficiently solved. It has also demonstrated potential in
nonlinear programming, structural mechanics, and radiative transfer, among other areas. An efficient, free Python TFC toolbox is available at https://github.com/leakec/tfc. Of particular note is the application of TFC in neural networks, where it has shown exceptional efficiency, especially in addressing high-dimensional problems and in enhancing the performance of physics-informed neural networks (PINNs) by effectively eliminating constraints from the optimization process, a challenge that traditional neural networks often struggle to address. This capability significantly improves computational efficiency and accuracy, enabling the resolution of complex problems with greater ease, as demonstrated by researchers at the University of Arizona. TFC has also been employed with physics-informed neural networks and symbolic regression techniques for the physics discovery of dynamical systems.{{cite journal |last1=De Florio |first1=Mario |last2=Kevrekidis |first2=Ioannis G. |last3=Karniadakis |first3=George Em |title=AI-Lorenz: A physics-data-driven framework for Black-Box and Gray-Box identification of chaotic systems with symbolic regression |journal=Chaos, Solitons & Fractals |date=1 November 2024 |volume=188 |pages=115538 |doi=10.1016/j.chaos.2024.115538 |arxiv=2312.14237 |bibcode=2024CSF...18815538D |url=https://www.sciencedirect.com/science/article/pii/S0960077924010907 |issn=0960-0779}}


Difference with spectral methods

At first glance, TFC and spectral methods may appear similar in their approach to solving constrained optimization problems. However, there are two fundamental distinctions between them:
* Representation of solutions: Spectral methods represent the solution as a sum of basis functions, whereas TFC represents the ''free function'' as a sum of basis functions. This distinction allows TFC to satisfy the constraints analytically, while spectral methods treat constraints as additional data, approximating them with an accuracy that depends on the residuals.
* Computational approach in BVP: In linear BVPs, the computational strategies of the two methods differ significantly. Spectral methods typically employ iterative techniques, such as the shooting method, to reformulate the BVP as an initial value problem, which is simpler to solve. Conversely, TFC directly addresses these problems through linear least-squares techniques, avoiding the need for iterative procedures (see the sketch after this list).
Both methods can perform optimization using either the Galerkin method, which ensures the residual vector is orthogonal to the chosen basis functions, or the collocation method, which minimizes the norm of the residual vector.
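To make the second distinction concrete, the following minimal NumPy sketch (not the API of the TFC toolbox; the ODE, the monomial basis for the free function, and the collocation grid are illustrative choices) solves a linear BVP the TFC way: the boundary conditions are embedded analytically in the constrained functional, so the differential equation reduces to a single linear least-squares problem for the free-function coefficients.

<syntaxhighlight lang="python">
import numpy as np

# Linear BVP: y'' + y = 0 on [0, 1] with y(0) = 0, y(1) = sin(1); exact solution y = sin(x).
y0, y1 = 0.0, np.sin(1.0)
x = np.linspace(0.0, 1.0, 50)            # collocation points
k = np.arange(2, 11)                     # free-function basis g(x) = sum_k xi_k x^k
                                         # (1 and x are already spanned by the support functions)

# Constrained functional: y(x) = g(x) + (1 - x)*(y0 - g(0)) + x*(y1 - g(1)),
# which meets the boundary conditions for every g. With the monomial basis,
# each x^k contributes (x^k - x) to y and k*(k - 1)*x^(k - 2) to y''.
Y   = x[:, None]**k - x[:, None]
Ypp = k * (k - 1) * x[:, None]**(k - 2)

A = Ypp + Y                              # residual operator acting on the coefficients
b = -((1 - x) * y0 + x * y1)             # residual contribution of the constraint terms
xi, *_ = np.linalg.lstsq(A, b, rcond=None)   # one linear least-squares solve, no iteration

y = Y @ xi + (1 - x) * y0 + x * y1
print(np.max(np.abs(y - np.sin(x))))     # small residual error; boundary values are exact
</syntaxhighlight>

The boundary conditions never enter the least-squares system as data; they are satisfied exactly by the functional itself, which is the practical content of the first distinction above.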


Difference with Lagrange multipliers technique

The Lagrange multipliers method is a widely used approach for imposing constraints in an optimization problem. This technique introduces additional variables, known as multipliers, which must be computed to enforce the constraints. While the computation of these multipliers is straightforward in some cases, it can be challenging or even practically infeasible in others, thereby adding significant complexity to the problem. In contrast, TFC does not add new variables and enables the derivation of constrained functionals without encountering insurmountable difficulties. However, it is important to note that the Lagrange multiplier method has the advantage of handling inequality constraints, a capability that TFC currently lacks. A notable limitation of both approaches is their propensity to produce solutions that correspond to local optima rather than guaranteed global optima, particularly in the context of non-convex problems. Consequently, supplementary verification procedures or alternative methods may be required to assess and confirm the quality and global validity of the obtained solution. In summary, while TFC does not entirely replace the Lagrange multipliers method, it serves as a powerful alternative in cases where the computation of multipliers becomes excessively complex or infeasible, provided the constraints are limited to equalities.
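The structural difference can be seen on a toy algebraic problem (made up for illustration and not a TFC functional per se): minimizing a quadratic objective subject to one linear equality constraint. The Lagrange approach enlarges the set of unknowns with a multiplier, while embedding the constraint, in the spirit of TFC's constrained functionals, leaves a smaller unconstrained problem.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize_scalar

# Toy problem: minimize (a - 2)^2 + (b - 3)^2  subject to  a + b = 1.

# Lagrange multipliers: one extra unknown (the multiplier) and a stationarity system.
K   = np.array([[2.0, 0.0, 1.0],    # d/da: 2(a - 2) + lam = 0
                [0.0, 2.0, 1.0],    # d/db: 2(b - 3) + lam = 0
                [1.0, 1.0, 0.0]])   # constraint: a + b = 1
rhs = np.array([4.0, 6.0, 1.0])
a, b, lam = np.linalg.solve(K, rhs)
print(a, b, lam)                    # 0.0 1.0 4.0

# Constraint embedding (TFC-style): every feasible point is (a, 1 - a), so the
# problem becomes unconstrained in a alone and no multiplier is introduced.
res = minimize_scalar(lambda a: (a - 2)**2 + ((1 - a) - 3)**2)
print(res.x, 1 - res.x)             # approximately 0.0 and 1.0
</syntaxhighlight>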


References
