Linear Separability
In Euclidean geometry, linear separability is a property of two sets of points. It is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as colored blue and the other set as colored red. These two sets are ''linearly separable'' if there exists at least one line in the plane with all of the blue points on one side of the line and all of the red points on the other side. This idea generalizes immediately to higher-dimensional Euclidean spaces when the line is replaced by a hyperplane. The problem of determining whether a pair of sets is linearly separable, and of finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms based on this concept exist.

Mathematical definition
Let X_0 and X_1 be two sets of points in an ''n''-dimensional Euclidean space. Then X_0 and X_1 are ''linearly separable'' if there exist n + 1 real numbers w_1, w_2, \ldots, w_n, k such that every point x \in X_0 satisfies \sum_{i=1}^n w_i x_i > k and every point x \in X_1 satisfies \sum_{i=1}^n w_i x_i < k.
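One such algorithm is the classic perceptron rule, which is guaranteed to converge to a separating hyperplane whenever one exists. A minimal sketch in Python (assuming NumPy is available; the function name and toy data are illustrative, not from the article):

```python
import numpy as np

def find_separator(X0, X1, max_epochs=1000):
    """Perceptron rule: returns (w, k) with w.x > k on X0 and w.x < k on X1,
    or None if no separator is found within max_epochs (termination is only
    guaranteed when the two sets really are linearly separable)."""
    # Fold the threshold k into the weights by appending a constant feature 1.
    X = np.vstack([np.hstack([X0, np.ones((len(X0), 1))]),
                   np.hstack([X1, np.ones((len(X1), 1))])])
    y = np.array([1] * len(X0) + [-1] * len(X1))
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # point on the wrong side (or exactly on the plane)
                w += yi * xi         # nudge the hyperplane toward correcting it
                mistakes += 1
        if mistakes == 0:            # every point strictly classified: done
            return w[:-1], -w[-1]    # w.x > k for X0 and w.x < k for X1
    return None

blue = np.array([[0.0, 0.0], [1.0, 0.5]])
red = np.array([[2.0, 2.0], [3.0, 2.5]])
print(find_separator(blue, red))
```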
[Figure: linearly separable red and blue points]
Linearity
In mathematics, the term ''linear'' is used in two distinct senses for two different properties:
* linearity of a ''function'' (or ''mapping'');
* linearity of a ''polynomial''.
An example of a linear function is the function defined by f(x)=(ax,bx) that maps the real line to a line in the Euclidean plane R^2 that passes through the origin. An example of a linear polynomial in the variables X, Y and Z is aX+bY+cZ+d. Linearity of a mapping is closely related to ''proportionality''. Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as that between velocity and kinetic energy, are ''nonlinear''. Generalized to functions in more than one dimension, linearity means the property of a function of being compatible with addition and scaling, also known as the superposition principle.
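A quick numerical check of both linearity properties for the map f(x) = (ax, bx) above (a minimal sketch; the constants and test values are arbitrary):

```python
import numpy as np

a, b = 2.0, -3.0
f = lambda x: np.array([a * x, b * x])   # maps the real line to a line through the origin in R^2

x, y, c = 1.5, -0.25, 4.0
assert np.allclose(f(x + y), f(x) + f(y))   # additivity
assert np.allclose(f(c * x), c * f(x))      # homogeneity (proportionality)
```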
Disjunctive Normal Form
In boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or (in philosophical logic) a ''cluster concept''. As a normal form, it is useful in automated theorem proving.

Definition
A logical formula is considered to be in DNF if it is a disjunction of one or more conjunctions of one or more literals. A DNF formula is in full disjunctive normal form if each of its variables appears exactly once in every conjunction and each conjunction appears at most once (up to the order of variables). As in conjunctive normal form (CNF), the only propositional operators in DNF are and (\wedge), or (\vee), and not (\neg). The ''not'' operator can only be used as part of a literal, which means that it can only precede a propositional variable. The following is a context-free grammar for DNF:
: ''DNF'' \to ''Conjunct'' \mid ''Conjunct'' \vee ''DNF''
: ''Conjunct'' \to ''Literal'' \mid ( ''Conjunct'' \wedge ''Conjunct'' )
: ''Literal'' \to ''Variable'' \mid \neg ''Variable''
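To make the OR-of-ANDs structure concrete, here is a minimal sketch (all names illustrative) that represents a DNF formula as a disjunction (outer list) of conjunctions (inner lists) of literals and evaluates it under an assignment:

```python
# (A AND NOT B) OR (B AND C): an OR of ANDs.
# Each literal is (variable, polarity); polarity False means negated.
dnf = [[("A", True), ("B", False)],
       [("B", True), ("C", True)]]

def eval_dnf(formula, assignment):
    """True iff at least one conjunction has all of its literals satisfied."""
    return any(all(assignment[var] == polarity for var, polarity in clause)
               for clause in formula)

print(eval_dnf(dnf, {"A": True, "B": False, "C": False}))  # True (first clause fires)
print(eval_dnf(dnf, {"A": False, "B": True, "C": False}))  # False (no clause fires)
```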
Kirchberger's Theorem
Kirchberger's theorem is a theorem in discrete geometry, on linear separability. The two-dimensional version of the theorem states that, if a finite set of red and blue points in the Euclidean plane has the property that, for every four points, there exists a line separating the red and blue points within those four, then there exists a single line separating all the red points from all the blue points. Donald Watson phrases this result more colorfully with a farmyard analogy. More generally, for finitely many red and blue points in d-dimensional Euclidean space, if the red and blue points in every subset of d+2 of the points are linearly separable, then all the red points and all the blue points are linearly separable. An equivalent way of stating the result is that, if the convex hulls of finitely many red and blue points have a nonempty intersection, then there exists a subset of d+2 points for which the convex hulls of the red and blue points in the subset also intersect.
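The theorem licenses a brute-force check in the plane (d = 2): if every 4-point subset is separable, so is the whole configuration. A sketch under stated assumptions (the `separable` predicate could be the find_separator sketch above, though that perceptron-based check only heuristically detects inseparability; an exact method such as linear programming would be preferable in practice):

```python
from itertools import combinations
import numpy as np

def kirchberger_condition(red, blue, separable):
    """Check every subset of d+2 = 4 points in the plane. By Kirchberger's
    theorem, all 4-point subsets being separable is equivalent to the full
    red and blue sets being linearly separable."""
    points = [(tuple(p), "r") for p in red] + [(tuple(p), "b") for p in blue]
    for subset in combinations(points, 4):
        r = np.array([p for p, c in subset if c == "r"]).reshape(-1, 2)
        b = np.array([p for p, c in subset if c == "b"]).reshape(-1, 2)
        # One-colored subsets are trivially separable; skip them.
        if len(r) and len(b) and not separable(r, b):
            return False   # these four points admit no separating line
    return True
```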
Hyperplane Separation Theorem
In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in ''n''-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both sets are closed and at least one of them is compact, then there is a hyperplane between them, and even two parallel hyperplanes between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane between them, but not necessarily any gap. An axis that is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint. The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces. A related result is the supporting hyperplane theorem. In the context of support-vector machines, the ''optimally separating hyperplane'' or ''maximum-margin hyperplane'' is a hyperplane that separates two sets of points while maximizing its distance to the closest point of each set.
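The separating-axis idea is easy to demonstrate numerically (a minimal sketch; the bodies and axis are illustrative): project the vertices of two convex bodies onto a candidate axis and check whether the resulting intervals are disjoint.

```python
import numpy as np

def separated_along(axis, body_a, body_b):
    """True if the orthogonal projections of the two convex bodies
    (given by their vertices) onto the axis are disjoint intervals,
    i.e. the axis is a separating axis."""
    pa = body_a @ axis
    pb = body_b @ axis
    return pa.max() < pb.min() or pb.max() < pa.min()

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
triangle = np.array([[3, 0], [4, 0], [3.5, 1]], dtype=float)
print(separated_along(np.array([1.0, 0.0]), square, triangle))  # True: the x-axis separates them
```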
Clustering (statistics)
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Cluster analysis refers to a family of algorithms and tasks rather than one specific algorithm. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to find clusters efficiently. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals, or particular statistical distributions.
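As one concrete member of that family, here is a minimal k-means sketch in NumPy (illustrative only; k-means encodes just one notion of cluster, namely small distances to a centroid):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate between assigning points to the nearest
    centroid and moving each centroid to the mean of its points.
    (Sketch only: assumes no cluster ever becomes empty.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # initial centroids
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)                        # nearest centroid per point
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):                # converged
            break
        centers = new_centers
    return labels, centers

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
print(kmeans(X, 2)[1])   # two centroids, near (0, 0) and (5, 5)
```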
Normal (geometry)
In geometry, a normal is an object (e.g. a line, ray, or vector) that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the infinite straight line perpendicular to the tangent line to the curve at that point. A normal vector is a vector perpendicular to a given object at a particular point. A normal vector of length one is called a unit normal vector or normal direction. A curvature vector is a normal vector whose length is the curvature of the object. Multiplying a normal vector by −1 results in the opposite vector, which may be used for indicating sides (e.g., interior or exterior). In three-dimensional space, a surface normal, or simply normal, to a surface at a point P is a vector perpendicular to the tangent plane of the surface at P. The vector field of normal directions to a surface is known as the ''Gauss map''. The word "normal" is also used as an adjective: a line ''normal'' to a plane, the ''normal'' component of a force, etc.
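For a triangle in three-dimensional space, a normal to its plane can be computed as the cross product of two edge vectors (a minimal sketch; the vertices are arbitrary):

```python
import numpy as np

p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
n = np.cross(p1 - p0, p2 - p0)    # perpendicular to both edges, hence to the plane
unit_n = n / np.linalg.norm(n)    # unit normal vector (length one)
print(unit_n)                     # [0. 0. 1.]; multiplying by -1 picks the opposite side
```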
Dot Product
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. (The term ''scalar product'' means literally "product with a scalar as a result"; it is also used for other symmetric bilinear forms, for example in a pseudo-Euclidean space, and is not to be confused with scalar multiplication.) In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see ''inner product space'' for more). It should not be confused with the cross product. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them.
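Both characterizations are easy to verify numerically (a minimal sketch; the vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -5.0, 6.0])

# Algebraic definition: sum of products of corresponding entries.
algebraic = sum(ui * vi for ui, vi in zip(u, v))   # 1*4 + 2*(-5) + 3*6 = 12
assert np.isclose(algebraic, np.dot(u, v))

# Geometric reading: u.v = |u| |v| cos(theta), which recovers the angle.
theta = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
print(algebraic, np.degrees(theta))
```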
Real Number
In mathematics, a real number is a number that can be used to measure a continuous one-dimensional quantity such as a duration or temperature. Here, ''continuous'' means that pairs of values can have arbitrarily small differences. Every real number can be almost uniquely represented by an infinite decimal expansion. The real numbers are fundamental in calculus (and in many other branches of mathematics), in particular through their role in the classical definitions of limits, continuity and derivatives. The set of real numbers, sometimes called "the reals", is traditionally denoted by a bold R, often written in blackboard bold as \mathbb{R}. The adjective ''real'', used in the 17th century by René Descartes, distinguishes real numbers from imaginary numbers such as the square roots of −1. The real numbers include the rational numbers, such as the integers and fractions. The rest of the real numbers are called irrational numbers. Some irrational numbers (as well as all the rationals) are roots of polynomials with integer coefficients, such as \sqrt{2}; these are called algebraic numbers.
Margin Classifier
In machine learning (ML), a margin classifier is a type of classification model which is able to give an associated distance from the decision boundary for each data sample. For instance, if a linear classifier is used, the distance (typically Euclidean, though others may be used) of a sample from the separating hyperplane is the margin of that sample. The notion of ''margin'' is important in several ML classification algorithms, as it can be used to bound the generalization error of these classifiers. These bounds are frequently shown using the VC dimension. The generalization error bound is particularly prominent in boosting algorithms and support vector machines.
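For a linear classifier with weights w and bias b, the Euclidean margin of a sample x is its signed distance to the hyperplane w·x + b = 0, namely (w·x + b)/‖w‖. A minimal sketch (the weights and sample are illustrative, not learned):

```python
import numpy as np

w = np.array([2.0, -1.0])   # hyperplane normal (illustrative values)
b = 0.5

def margin(x):
    """Signed Euclidean distance from x to the hyperplane w.x + b = 0;
    the sign indicates which side of the decision boundary x falls on."""
    return (w @ x + b) / np.linalg.norm(w)

print(margin(np.array([1.0, 1.0])))   # (2 - 1 + 0.5)/sqrt(5) ≈ 0.67
```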
Maximum-margin Hyperplane
In the context of support-vector machines, the ''optimally separating hyperplane'' or ''maximum-margin hyperplane'' is the hyperplane that separates two sets of points while maximizing its distance to the closest point of each set; the underlying result is the hyperplane separation theorem, summarized above.
Linear Classifier
In machine learning, a linear classifier makes a classification decision for each object based on a linear combination of its features. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.

Definition
If the input feature vector to the classifier is a real vector \vec{x}, then the output score is
: y = f(\vec{w} \cdot \vec{x}) = f\left(\sum_j w_j x_j\right),
where \vec{w} is a real vector of weights and ''f'' is a function that converts the dot product of the two vectors into the desired output. (In other words, \vec{w} is a one-form or linear functional mapping \vec{x} onto \mathbb{R}.) The weight vector \vec{w} is learned from a set of labeled training samples. Often ''f'' is a threshold function, which maps all values of \vec{w} \cdot \vec{x} above a certain threshold to the first class and all other values to the second class.
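A minimal sketch of this definition with a threshold function for f (the weights and threshold are illustrative, not learned):

```python
import numpy as np

w = np.array([0.8, -0.3, 0.5])   # weight vector (in practice learned from labeled samples)
threshold = 0.2

def classify(x):
    score = w @ x                          # the dot product w . x
    return 1 if score > threshold else 0   # threshold function f

print(classify(np.array([1.0, 0.0, 0.0])))  # score 0.8 > 0.2 -> class 1
```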
Support Vector Machine
In machine learning, support vector machines (SVMs, also support vector networks) are supervised max-margin models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories, SVMs are one of the most studied models, being based on statistical learning frameworks of VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). In addition to performing linear classification, SVMs can efficiently perform non-linear classification using the ''kernel trick'', representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in a higher-dimensional feature space. Thus, SVMs use the kernel trick to implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed. Being max-margin models, SVMs are resilient to noisy data (e.g., misclassified examples).
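A usage sketch of the kernel trick with scikit-learn's SVC (assuming scikit-learn is installed; the XOR-style toy data is illustrative):

```python
import numpy as np
from sklearn.svm import SVC

# XOR-like data: not linearly separable in the original 2-D space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# The RBF kernel implicitly maps inputs to a higher-dimensional feature
# space where a separating hyperplane exists.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0)
clf.fit(X, y)
print(clf.predict(X))   # should recover [0 1 1 0] on this toy set
```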