History
An early form of promise theory was proposed by physicist and computer scientist Mark Burgess in 2004, initially in the context of information science, in order to solve observed problems with the use of obligation-based logics in computer-management schemes, in particular for policy-based management. A collaboration between Burgess and Dutch computer scientist Jan Bergstra refined the model of a promise to include the notion of impositions and the role of trust. The cooperation resulted in several books and many scientific papers covering a range of different applications. Despite its wider applications, promise theory was originally proposed by Burgess as a way of modelling the computer-management software CFEngine and its autonomous behaviour. CFEngine had been under development since 1993, and Burgess had found that existing theories based on obligations were unsuitable, as "they amounted to wishful thinking". Consequently, CFEngine uses a model of autonomy, as implied by promise theory, both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack. As of January 2023, more than 2700 companies are using CFEngine worldwide.

Key ideas
Promise theory is described as a modelling tool or a ''method of analysis'' suitable for studying any system of interacting components. It is not a technology or a design methodology, and it does not advocate any position or design principle except as a method of analysis.

Agents
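In promise theory, agents are autonomous: one agent cannot impose behaviour on another, and an agent can make promises only about itself. A minimal illustrative sketch (all names are invented; this is not code from any promise theory implementation):

```python
class Agent:
    """An autonomous agent: it can promise only its own behaviour."""
    def __init__(self, name):
        self.name = name
        self.promises = []          # promises this agent has made about itself

    def promise(self, body, to):
        # A promise binds only the promiser; the promisee is merely informed.
        p = (self.name, to.name, body)
        self.promises.append(p)
        return p

alice, bob = Agent("alice"), Agent("bob")
alice.promise("answer queries", to=bob)
# bob cannot coerce alice: nothing bob does can add to alice's promises
assert alice.promises == [("alice", "bob", "answer queries")]
assert bob.promises == []
```

The asymmetry is the point of the sketch: there is no method by which one agent writes into another agent's promise list.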
Agents in promise theory are said to be ''autonomous'', meaning that they are causally independent of one another. This independence implies that they cannot be controlled from without and that they originate their behaviours entirely from within; nevertheless, they can rely on one another's services by making promises to signal cooperation. Agents are thus self-determined until they partially or completely give up their independence by promising to accept guidance from other agents.

Intentions and outcomes
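An intention becomes concrete as a set of acceptable outcomes, ideally an invariant or fixed point that repeated operations reach and then preserve. A small illustrative sketch (the states and names are invented):

```python
# Intention: "the file should exist", made concrete as acceptable outcomes.
acceptable = {"file-present"}

def repair(state):
    """Convergent operation: acts only while the outcome does not hold,
    so any acceptable outcome is a fixed point (dynamically stable)."""
    return "file-present" if state not in acceptable else state

s1 = repair("file-missing")   # first application reaches the outcome
s2 = repair(s1)               # second application changes nothing
assert s1 in acceptable and s2 == s1
```

Because the operation is idempotent at the fixed point, it can be applied repeatedly without further effect, which is what makes the outcome both dynamically and semantically stable.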
Agents in promise theory may have ''intentions''. An intention may be realized by a behaviour or by a target outcome; intentions are thus made concrete by defining a set of ''acceptable outcomes'' associated with each intention. An outcome is most useful when it describes an invariant or a mathematical fixed point in some description of states, because such an outcome can be both dynamically and semantically stable. Each intention expresses a quantifiable outcome.

Promises
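A promise is an intention voluntarily shared by one agent with another, and each agent assesses for itself whether the promise is kept. A minimal sketch (the class and all names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    promiser: str
    promisee: str
    body: str                  # the shared intention, e.g. "opens the door"

def assess(observed_behaviours, promise):
    """Each agent judges fitness for purpose from its OWN observations;
    promise theory has no global arbiter of whether a promise is kept."""
    return promise.body in observed_behaviours

handle = Promise("door-handle", "anyone", "opens the door")
# two agents with different contexts may assess the same promise differently
assert assess({"opens the door"}, handle) is True
assert assess({"digs holes"}, handle) is False
```

The design choice to make assessment a per-agent function, rather than a property of the promise itself, mirrors the theory's insistence that every agent decides fitness for purpose for itself.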
Promises arise when an agent voluntarily shares one of its intentions with another agent (e.g., by publishing its intent); the method of sharing is left to the modeller to explain. For example, an object such as a door handle is an agent that promises to be suitable for opening a door, although it could be used for something else (e.g., for digging a hole in the ground). We cannot assume that agents will accept promises in the spirit in which they were intended, because every agent has its own context and capabilities. The promise of ''door-handleness'' could be expressed by virtue of the object's physical form or by a written label attached in some language. An agent that uses this promise can ''assess'' whether the promising agent keeps it, i.e. whether it is ''fit for purpose''; any agent can decide this for itself. An agent may also voluntarily use another's promise to inform how it interacts with the promiser. Promises facilitate interaction and cooperation and tend to maximize an intended outcome; they are not commands or deterministic controls.

Autonomy
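In promise theory, a command-and-control relationship exists only because an agent has voluntarily promised to follow another's instructions, and such a promise can be withdrawn at any time. An illustrative sketch (invented names, not code from any promise theory implementation):

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.follows = None   # whose instructions this agent promises to follow

    def promise_to_follow(self, other):
        # voluntary cooperation: the agent binds only its own behaviour
        self.follows = other

    def withdraw(self):
        # a promise can always be withdrawn, so autonomy is never lost
        self.follows = None

    def receive(self, sender, command):
        # a "command" has effect only because of the receiver's own promise
        return command if self.follows is sender else None

boss, worker = Agent("boss"), Agent("worker")
worker.promise_to_follow(boss)
assert worker.receive(boss, "deploy") == "deploy"
worker.withdraw()
assert worker.receive(boss, "deploy") is None
```

The "obedience" here lives entirely in the receiving agent, which is why there is no contradiction between voluntary cooperation and command and control.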
''Obligations'', rather than promises, have been the traditional way of modelling behaviour in society, in technology, and in other areas. While still dominant, the obligation-based model has known weaknesses, particularly in scalability and predictability, because of its rigidity and lack of dynamism. Promise theory's point of departure from obligation logics is the idea that all agents in a system should have autonomy of control, i.e. that they cannot be coerced or forced into a specific behaviour. Obligation theories in computer science often view an obligation as a deterministic command that causes its proposed outcome. In promise theory, an agent may only make promises about its own behaviour; for autonomous agents, it is meaningless to make promises about another's behaviour. Although this assumption could be interpreted morally or ethically, in promise theory it is simply a pragmatic engineering principle that leads to a more complete declaration of the intended roles of the actors or agents in a system: when making assumptions about others' behaviour is disallowed, one is forced to document every promise more completely in order to make predictions, which in turn reveals the failure modes by which cooperative behaviour could break down. Command-and-control systems, like those that motivate obligation theories, can easily be reproduced by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control. In philosophy and law, a promise is often viewed as something that leads to an obligation; promise theory rejects that point of view, and Bergstra and Burgess state that the concept of a promise is quite independent of that of obligation.

Economics
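Because promises carry value for the promisee and cost for the promiser, they can play the role of strategies in a game. A purely hypothetical illustration (the payoff numbers are entirely invented; only the structure matters):

```python
# Payoffs (mine, yours) when each of us chooses to keep or break a promise.
# Invented numbers: keeping has a cost, being kept-to has a value, so a
# selfish agent must weigh both against the other agent's choice.
payoffs = {
    ("keep", "keep"):   (2, 2),
    ("keep", "break"):  (-1, 3),
    ("break", "keep"):  (3, -1),
    ("break", "break"): (0, 0),
}

def best_response(theirs):
    """A selfish agent's best choice given the other's fixed strategy."""
    return max(["keep", "break"], key=lambda mine: payoffs[(mine, theirs)][0])

# with these particular numbers, breaking dominates: a structure
# reminiscent of the prisoner's dilemma
assert best_response("keep") == "break" and best_response("break") == "break"
```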
Promises can be valuable to the promisee, or even to the promiser, but they may also carry costs; there is thus an economic story to tell about promises. The economics of promises naturally motivates ''selfish agent'' behaviour, and promise theory can be seen as a motivation for game-theoretic decision making, in which promises play the role of strategies in a game. Promise theory has also been used to model and build new insights into monetary systems.
Agency as a model of systems in space and time
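In promise theory a graph adjacency requires mutual consent. Loosely following the theory's convention of '+' (offer) and '-' (use) promises, an edge exists only when one agent's offer is matched by the other agent's promise to use it. An illustrative sketch (the names and data are invented):

```python
# "+b" promises: (promiser, promisee, body) offers of a service
offers = {("a", "b", "data"), ("b", "c", "data")}
# "-b" promises: (promiser, promisee, body) promises to USE an offer
uses   = {("b", "a", "data")}

def edges(offers, uses):
    # an edge s->r exists iff s offers r something r has promised to use
    return {(s, r, b) for (s, r, b) in offers if (r, s, b) in uses}

# only a->b is an edge: b's offer to c was never accepted by c
assert edges(offers, uses) == {("a", "b", "data")}
```

This matching requirement is why promises are more primitive than adjacencies: a single promise is not yet a link, and building a connected space takes additional work.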
The promises made by autonomous agents lead to a mutually approved graph structure, which in turn leads to spatial structures in which the agents represent point-like locations. This allows models of smart spaces, i.e. semantically labelled or even functional spaces such as databases, knowledge maps, warehouses, and hotels, to be unified with more conventional descriptions of space and time; the model of semantic spacetime uses promise theory to discuss these spacetime concepts. Promises are more mathematically primitive than graph adjacencies: since a link requires the mutual consent of two autonomous agents, the concept of a connected space requires more work to build. This makes promises mathematically interesting as a notion of space and offers a useful way of modelling physical and virtual information systems.

Promise theory, agile transformation, and social science
The Open Leadership Network and Open Space Technology organizers Daniel Mezick and Mark Sheffield invited promise theory's originator, Mark Burgess, to keynote at the Open Leadership Network's Boston conference in 2019. This led to applying the formal development of promise theory to teaching agile concepts. Burgess later extended the lecture notes into an online study course, which he says prompted an even deeper study of the concepts of social systems, including trust and authority.