Consensus Based Optimization
Consensus-based optimization (CBO) is a multi-agent, derivative-free optimization method designed to solve global optimization problems of the form \min_{x\in\mathcal{X}} f(x), where f:\mathcal{X}\to\mathbb{R} denotes the objective function acting on the state space \mathcal{X}, which is assumed to be a normed vector space. The function f can potentially be nonconvex and nonsmooth. The algorithm employs particles or agents that explore the state space and communicate with each other to update their positions. Their dynamics follow the paradigm of metaheuristics, which blend exploration with exploitation. In this sense, CBO is comparable to ant colony optimization, wind driven optimization, particle swarm optimization and simulated annealing.
Algorithm
Consider an ensemble of particles x_t = (x_t^1,\dots,x_t^N)\in\mathcal{X}^N, depending on the time t\in[0,\infty). The update for the ith particle is formulated as a stochastic differential equation,
dx_t^i = -\lambda\,\bigl(x_t^i - c_\alpha(x_t)\bigr)\,dt + \sigma\,\bigl\|x_t^i - c_\alpha(x_t)\bigr\|_2\,dB_t^i,
where the consensus point
c_\alpha(x_t) = \frac{\sum_{i=1}^N x_t^i\,\exp(-\alpha f(x_t^i))}{\sum_{i=1}^N \exp(-\alpha f(x_t^i))}
is a weighted average of the particle positions that, for large \alpha, concentrates on the currently best-performing particle (a construction related to LogSumExp). The drift term pulls each particle toward the consensus point (exploitation), while the diffusion term, driven by independent Brownian motions B^i and scaled by the distance to the consensus point, injects randomness that vanishes as consensus forms (exploration). In practice the dynamics are discretized in time with the Euler–Maruyama method.
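The SDE above can be simulated directly with an Euler–Maruyama discretization. The following is a minimal Python/NumPy sketch of this scheme; the function name cbo_minimize, the parameter defaults and the test objective are illustrative choices, not part of the method's specification:

```python
import numpy as np

def cbo_minimize(f, x0, lam=1.0, sigma=1.0, alpha=30.0, dt=0.01, steps=1000, rng=None):
    """Minimal consensus-based optimization sketch (illustrative defaults).

    f  : objective, maps an (N, d) array of positions to an (N,) array of values
    x0 : (N, d) array of initial particle positions
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    N, d = x.shape
    for _ in range(steps):
        fx = f(x)
        # Gibbs weights exp(-alpha * f), shifted by the minimum for numerical stability
        w = np.exp(-alpha * (fx - fx.min()))
        c = (w[:, None] * x).sum(axis=0) / w.sum()   # consensus point c_alpha
        diff = x - c
        # Euler-Maruyama step: drift toward c, noise scaled by distance to c
        noise = rng.standard_normal((N, d))
        x = (x - lam * diff * dt
             + sigma * np.linalg.norm(diff, axis=1, keepdims=True) * np.sqrt(dt) * noise)
    return c

# Example: a shifted Rastrigin function with global minimum at (1, 1)
f = lambda x: ((x - 1.0) ** 2 - np.cos(2 * np.pi * (x - 1.0)) + 1.0).sum(axis=1)
x0 = np.random.default_rng(0).uniform(-3, 3, size=(200, 2))
print(cbo_minimize(f, x0))
```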




Derivative-free Optimization
Derivative-free optimization (sometimes referred to as blackbox optimization) is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions: sometimes information about the derivative of the objective function f is unavailable, unreliable or impractical to obtain. For example, f might be non-smooth, time-consuming to evaluate, or noisy, so that methods that rely on derivatives or approximate them via finite differences are of little use. The problem of finding optimal points in such situations is referred to as derivative-free optimization, and algorithms that do not use derivatives or finite differences are called derivative-free algorithms.
Introduction
The problem to be solved is to numerically optimize an objective function f\colon A\to\mathbb{R} for some set A (usually A\subset\mathbb{R}^n), i.e. find x_0\in A such that, without loss of generality, f(x_0)\leq f(x) for all x\in A. When applicable, ...
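As a concrete illustration, pure random search is perhaps the simplest derivative-free algorithm: it only ever evaluates f, never a gradient. A minimal Python sketch (the helper name random_search and its parameters are illustrative):

```python
import numpy as np

def random_search(f, lower, upper, samples=10_000, rng=None):
    """Pure random search: draw candidates uniformly from the box
    [lower, upper] and keep the best one; only function values are used."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(samples):
        x = rng.uniform(lower, upper)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: a non-smooth, noisy objective where derivatives are of little use
f = lambda x: np.abs(x - 2).sum() + 0.01 * np.random.standard_normal()
print(random_search(f, lower=[-5, -5], upper=[5, 5]))
```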



Normed Vector Space
In mathematics, a normed vector space or normed space is a vector space over the real or complex numbers on which a norm is defined. A norm generalizes the intuitive notion of length: it is a real-valued function \|\cdot\| that assigns to each vector x a non-negative length \|x\| and satisfies three axioms: \|x\| = 0 if and only if x = 0 (positive definiteness), \|\lambda x\| = |\lambda|\,\|x\| for every scalar \lambda (absolute homogeneity), and \|x + y\| \leq \|x\| + \|y\| (the triangle inequality). Every norm induces a metric d(x, y) = \|x - y\|, so a normed vector space is in particular a metric space, on which notions such as convergence and continuity are available. Familiar examples include \mathbb{R}^n equipped with the Euclidean norm \|x\|_2 = \sqrt{x_1^2 + \cdots + x_n^2} and, more generally, the p-norms. A normed vector space that is complete with respect to the induced metric is called a Banach space. ...


Metaheuristic
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of a solution space that is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems. Their use is of interest whenever exact or other (approximate) methods are not available or not expedient, either because the calculation time is too long or because, for example, the solution provided is too imprecise. Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal sol ...



Ant Colony Optimization Algorithms
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants are multi-agent methods inspired by the behavior of real ants, with the pheromone-based communication of biological ants often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a preferred method for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. Artificial 'ants' (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones to direct each other to resources while exploring their environment; the simulated 'ants' similarly record th ...
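The following Python sketch instantiates this idea for a small travelling salesman problem; the transition rule (pheromone^alpha times heuristic^beta) and the evaporation/deposit steps follow the classic ant-system scheme, while the name ant_colony_tsp and all parameter defaults are illustrative choices:

```python
import numpy as np

def ant_colony_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   evaporation=0.5, q=1.0, rng=None):
    """Minimal ant-system sketch for the TSP.

    dist : (n, n) symmetric matrix of pairwise city distances.
    Returns the best tour (list of city indices) and its length.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(dist)
    pheromone = np.ones((n, n))
    heuristic = 1.0 / (dist + np.eye(n))     # inverse distance; eye avoids /0
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                # Transition probabilities: pheromone^alpha * heuristic^beta
                p = pheromone[i, cand] ** alpha * heuristic[i, cand] ** beta
                tour.append(rng.choice(cand, p=p / p.sum()))
                unvisited.discard(tour[-1])
            tours.append(tour)
        pheromone *= 1.0 - evaporation        # evaporation step
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                # deposit pheromone along the tour
                a, b = tour[k], tour[(k + 1) % n]
                pheromone[a, b] += q / length
                pheromone[b, a] += q / length
    return best_tour, best_len

# Example: six random cities in the unit square
rng = np.random.default_rng(0)
xy = rng.random((6, 2))
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
print(ant_colony_tsp(dist))
```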



Particle Swarm Optimization
In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. PSO is originally attributed to Kennedy, Eberhart and Shi and was first int ...
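A minimal sketch of these position and velocity updates in Python/NumPy; the inertia weight w and acceleration coefficients c1, c2 are common textbook defaults, and the function name pso_minimize is illustrative:

```python
import numpy as np

def pso_minimize(f, lower, upper, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal particle swarm optimization sketch.

    f : objective mapping an (n, d) array of positions to (n,) values.
    """
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, d))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_f = x.copy(), f(x)                        # personal bests
    g = pbest[pbest_f.argmin()].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        # Velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example usage on a simple quadratic bowl
print(pso_minimize(lambda x: (x ** 2).sum(axis=1), [-5, -5], [5, 5]))
```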



Simulated Annealing
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. For problems with large numbers of local optima, SA can find the global optimum. It is often used when the search space is discrete (for example the traveling salesman problem, the boolean satisfiability problem, protein structure prediction, and job-shop scheduling). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to exact algorithms such as branch and bound, or to local methods such as gradient descent. The name of the algorithm comes from annealing in metallurgy, a technique involving heating and controlled cooling of a material to alter its physical properties. Both are attributes of the material that depend on their thermodynamic free energy ...
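A minimal sketch of the accept/reject core in Python, using a geometric cooling schedule; the schedule, the neighbor function and all parameter names are illustrative choices, and SA implementations vary widely in these details:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, steps=10_000):
    """Minimal simulated annealing sketch (geometric cooling schedule).

    f        : objective to minimize
    neighbor : function returning a random neighbor of the current state
    """
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = f(y)
        # Always accept improvements; accept worse moves with probability
        # exp(-(fy - fx) / t), which shrinks as the temperature t cools.
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling
    return best, best_f

# Example: minimize a 1-D multimodal function
f = lambda x: x * x + 10 * math.sin(3 * x)
print(simulated_annealing(f, x0=5.0, neighbor=lambda x: x + random.gauss(0, 0.5)))
```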



Euler–Maruyama Method
In Itô calculus, the Euler–Maruyama method (also simply called the Euler method) is a method for the approximate numerical solution of a stochastic differential equation (SDE). It is an extension of the Euler method for ordinary differential equations to stochastic differential equations, named after Leonhard Euler and Gisiro Maruyama. The same generalization cannot be done for an arbitrary deterministic method.
Definition
Consider the stochastic differential equation (see Itô calculus)
\mathrm{d}X_t = a(X_t, t)\,\mathrm{d}t + b(X_t, t)\,\mathrm{d}W_t,
with initial condition X_0 = x_0, where W_t denotes the Wiener process, and suppose that we wish to solve this SDE on some interval of time [0, T]. Then the Euler–Maruyama approximation to the true solution X is the Markov chain Y defined as follows:
* Partition the interval [0, T] into N equal subintervals of width \Delta t = T/N > 0:
  0 = \tau_0 < \tau_1 < \cdots < \tau_N = T ...
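A minimal Python sketch of this scheme for a scalar SDE; the function name euler_maruyama and the Ornstein–Uhlenbeck test coefficients are illustrative:

```python
import numpy as np

def euler_maruyama(a, b, x0, T, N, rng=None):
    """Euler-Maruyama approximation of dX_t = a(X_t, t) dt + b(X_t, t) dW_t.

    Returns the time grid and the simulated path Y_0, ..., Y_N.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1)
    y[0] = x0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment ~ N(0, dt)
        y[n + 1] = y[n] + a(y[n], t[n]) * dt + b(y[n], t[n]) * dW
    return t, y

# Example: Ornstein-Uhlenbeck process dX = theta*(mu - X) dt + sigma dW
theta, mu, sigma = 1.0, 0.0, 0.3
t, y = euler_maruyama(lambda x, t: theta * (mu - x),
                      lambda x, t: sigma,
                      x0=1.0, T=5.0, N=500)
print(y[-1])
```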


LogSumExp
The LogSumExp (LSE) (also called RealSoftMax or multivariable softplus) function is a smooth maximum – a smooth approximation to the maximum function, mainly used by machine learning algorithms. It is defined as the logarithm of the sum of the exponentials of the arguments: \mathrm{LSE}(x_1, \dots, x_n) = \log\left( \exp(x_1) + \cdots + \exp(x_n) \right). Properties The LogSumExp function domain is \R^n, the real coordinate space, and its codomain is \R, the real line. It is an approximation to the maximum \max_i x_i with the following bounds: \max_i x_i \leq \mathrm{LSE}(x_1, \dots, x_n) \leq \max_i x_i + \log(n). The first inequality is strict unless n = 1. The second inequality is strict unless all arguments are equal. (Proof: Let m = \max_i x_i. Then \exp(m) \leq \sum_{i=1}^n \exp(x_i) \leq n \exp(m). Applying the logarithm to the inequality gives the result.) In addition, we can scale the function to make the bounds tighter: consider the function \frac{1}{t} \mathrm{LSE}(tx_1, \dots, tx_n). Then ...
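In floating point, evaluating the definition directly overflows for large arguments; the standard remedy is to shift by the maximum before exponentiating, which leaves the result unchanged. A minimal Python sketch:

```python
import math

def logsumexp(xs):
    """Numerically stable LSE: shift by the maximum before exponentiating,
    so the largest term becomes exp(0) = 1 and nothing overflows."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# exp(1000) overflows a float, but the shifted form works fine
print(logsumexp([1000.0, 1000.0]))   # 1000 + log(2), about 1000.693
print(logsumexp([0.0, -1e9]))        # about 0.0: dominated by the maximum
```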



Python (programming Language)
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. It is often described as a "batteries included" language due to its comprehensive standard library. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language, and he first released it in 1991 as Python 0.9.0. Python 2.0 was released in 2000. Python 3.0, released in 2008, was a major revision not completely backward-compatible with earlier versions. Python 2.7.18, released in 2020, was the last release of ...




Julia (programming Language)
Julia is a high-level, general-purpose dynamic programming language, designed to be fast and productive for, e.g., data science, artificial intelligence, machine learning, and modeling and simulation, and is most commonly used for numerical analysis and computational science. Distinctive aspects of Julia's design include a type system with parametric polymorphism, the use of multiple dispatch as a core programming paradigm, a default just-in-time (JIT) compiler (with support for ahead-of-time compilation) and an efficient (multi-threaded) tracing garbage collection implementation. Notably, Julia does not support classes with encapsulated methods; instead it relies on structs with generic methods/functions not tied to them. By default, Julia is run similarly to scripting languages, using its runtime, and allows for a read–eval–print loop, i ...


Mean-shift Algorithm
Mean shift is a non-parametric feature-space mathematical analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing. History The mean shift procedure is usually credited to work by Fukunaga and Hostetler in 1975. It is, however, reminiscent of earlier work by Schnell in 1964. Overview Mean shift is a procedure for locating the maxima, that is, the modes, of a density function given discrete data sampled from that function. This is an iterative method, and we start with an initial estimate x. Let a kernel function K(x_i - x) be given. This function determines the weight of nearby points for re-estimation of the mean. Typically a Gaussian kernel on the distance to the current estimate is used, K(x_i - x) = e^{-c\|x_i - x\|^2}. The weighted mean of the density in the window determined by K is m(x) = \frac{\sum_{x_i \in N(x)} K(x_i - x)\,x_i}{\sum_{x_i \in N(x)} K(x_i - x)}, where N(x) is the neighborhood of x, a set of p ...
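A minimal Python/NumPy sketch of this iteration with a Gaussian kernel, taking the neighborhood N(x) to be all samples (weights decay with distance); the name mean_shift, the bandwidth parameter and the stopping tolerance are illustrative:

```python
import numpy as np

def mean_shift(points, x0, bandwidth=1.0, iters=100, tol=1e-6):
    """Minimal mean-shift mode seeking with a Gaussian kernel.

    points : (n, d) array of samples from the unknown density
    x0     : (d,) initial estimate, iteratively moved toward a local mode
    """
    x = np.asarray(x0, float)
    pts = np.asarray(points, float)
    for _ in range(iters):
        # Gaussian kernel weights on the distance to the current estimate
        w = np.exp(-np.sum((pts - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        m = (w[:, None] * pts).sum(axis=0) / w.sum()   # weighted mean m(x)
        if np.linalg.norm(m - x) < tol:                # converged to a mode
            return m
        x = m
    return x

# Example: samples from two Gaussian blobs; start near one and find its mode
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal(4, 0.5, (200, 2))])
print(mean_shift(pts, x0=[3.0, 3.0], bandwidth=0.8))
```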