Salem–Spencer Set
In mathematics, and in particular in arithmetic combinatorics, a Salem–Spencer set is a set of numbers no three of which form an arithmetic progression. Salem–Spencer sets are also called 3-AP-free sequences or progression-free sets. They have also been called non-averaging sets, but this term has also been used to denote a set of integers none of which can be obtained as the average of any subset of the other numbers. Salem–Spencer sets are named after Raphaël Salem and Donald C. Spencer, who showed in 1942 that Salem–Spencer sets can have nearly-linear size. However, a later theorem of Klaus Roth shows that the size is always less than linear.

Examples

For k=1,2,\dots the smallest values of n such that the numbers from 1 to n have a k-element Salem–Spencer set are

:1, 2, 4, 5, 9, 11, 13, 14, 20, 24, 26, 30, 32, 36, ...

For instance, among the numbers from 1 to 14, the eight numbers

:\{1, 2, 4, 5, 10, 11, 13, 14\}

form the unique largest Salem–Spencer set. This example is shifted by adding one to the ...
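As an illustrative sketch (not part of the article; the function names are chosen here), the following Python code brute-forces the largest progression-free subset of \{1,\dots,n\} for small n, matching the values listed above.

from itertools import combinations

def is_progression_free(s):
    """Return True if no three elements of s form an arithmetic progression."""
    elems = set(s)
    for a, b in combinations(sorted(elems), 2):
        # a < b; a third term 2*b - a would complete a 3-term progression
        if 2 * b - a in elems:
            return False
    return True

def largest_salem_spencer(n):
    """Brute-force the size of the largest 3-AP-free subset of {1, ..., n}."""
    for k in range(n, 0, -1):
        for candidate in combinations(range(1, n + 1), k):
            if is_progression_free(candidate):
                return k, candidate
    return 0, ()

# For n = 14 the largest such set has 8 elements.
print(largest_salem_spencer(14))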
Roth's Theorem
In mathematics, Roth's theorem or the Thue–Siegel–Roth theorem is a fundamental result in Diophantine approximation to algebraic numbers. It is of a qualitative type, stating that algebraic numbers cannot have many rational approximations that are 'very good'. Over half a century, the meaning of ''very good'' here was refined by a number of mathematicians, starting with Joseph Liouville in 1844 and continuing with the work of Axel Thue, Carl Ludwig Siegel, Freeman Dyson, and Klaus Roth.

Statement

Roth's theorem states that every irrational algebraic number \alpha has approximation exponent equal to 2. This means that, for every \varepsilon>0, the inequality

:\left| \alpha - \frac{p}{q} \right| > \frac{C(\alpha,\varepsilon)}{q^{2+\varepsilon}}

holds for all integers p and q > 0, with C(\alpha,\varepsilon) a positive number depending only on \varepsilon>0 and \alpha.

Discussion

The first result in this direction is Liouville's theorem on approximation of algebraic numbers, which gives an approximation exponent of ''d'' for an algebraic number α of degree ''d'' ≥ 2. This is already enough to demonstrate ...
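As an illustrative sketch (not from the article; the variable names are chosen here), the following Python code looks at the continued-fraction convergents p/q of \sqrt{2} and shows that q^2 \left|\sqrt{2} - p/q\right| stays bounded (near 1/(2\sqrt{2}) \approx 0.354), consistent with an irrational algebraic number having approximation exponent 2.

from decimal import Decimal, getcontext

getcontext().prec = 60
SQRT2 = Decimal(2).sqrt()

# Convergents of sqrt(2) from its continued fraction [1; 2, 2, 2, ...]
p_prev, q_prev = 1, 0   # "minus first" convergent
p, q = 1, 1             # first convergent 1/1
for _ in range(10):
    p, p_prev = 2 * p + p_prev, p
    q, q_prev = 2 * q + q_prev, q
    err = abs(SQRT2 - Decimal(p) / Decimal(q))
    # err * q**2 stays bounded near 1/(2*sqrt(2)) ~ 0.354, i.e. exponent 2
    print(f"p/q = {p}/{q}, |sqrt(2) - p/q| * q^2 = {err * q * q}")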
Non-interactive Zero-knowledge Proof
In cryptography, a non-interactive zero-knowledge proof (NIZK proof) is a zero-knowledge proof in which no interaction is required between the prover and the verifier: the prover publishes a single proof string that anyone can check later. As with interactive zero-knowledge proofs, the verifier becomes convinced that a statement is true without learning anything beyond its truth. Blum, Feldman, and Micali showed in 1988 that interaction can be removed when the prover and verifier share a common reference string, and the Fiat–Shamir heuristic gives another widely used route to non-interactivity by replacing the verifier's random challenges with the output of a cryptographic hash function applied to the transcript. Non-interactive zero-knowledge proofs are a core building block of modern cryptographic protocols, including digital signatures and privacy-preserving systems.
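As a toy, hedged sketch (with parameters and function names chosen here for illustration, and deliberately insecure sizes), the following Python code makes an interactive Schnorr proof of knowledge of a discrete logarithm non-interactive via the Fiat–Shamir heuristic: the verifier's random challenge is replaced by a hash of the public values and the prover's commitment.

import hashlib
import secrets

# Toy group parameters (far too small for real security; illustration only).
P = 2039            # safe prime, P = 2*Q + 1
Q = 1019            # prime order of the subgroup of squares mod P
G = 4               # generator of that order-Q subgroup

def fiat_shamir_challenge(*values):
    """Derive the challenge by hashing the public transcript."""
    data = "|".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(secret_x):
    """Non-interactive Schnorr proof of knowledge of x with y = G^x mod P."""
    y = pow(G, secret_x, P)
    r = secrets.randbelow(Q)                              # prover's random nonce
    commitment = pow(G, r, P)
    challenge = fiat_shamir_challenge(G, y, commitment)   # replaces interaction
    response = (r + challenge * secret_x) % Q
    return y, (commitment, response)

def verify(y, proof):
    commitment, response = proof
    challenge = fiat_shamir_challenge(G, y, commitment)
    # Accept iff G^response == commitment * y^challenge (mod P)
    return pow(G, response, P) == (commitment * pow(y, challenge, P)) % P

x = secrets.randbelow(Q)     # prover's secret exponent
y, proof = prove(x)
print(verify(y, proof))      # True: the proof checks without any interaction

Because the challenge is derived by hashing the commitment, the prover cannot choose it, and the single message (commitment, response) can be verified by anyone.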
Matrix Multiplication
In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted AB. Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812, to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. Computing matrix products is a central operation in all computational applications of linear algebra. Not ...
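As a minimal sketch (not from the article), the following Python function computes a matrix product directly from the definition: entry (i, j) of AB is the sum over k of A[i][k] B[k][j], which requires the number of columns of A to equal the number of rows of B.

def matmul(A, B):
    """Multiply matrices given as lists of rows; requires len(A[0]) == len(B)."""
    n, m, p = len(A), len(B), len(B[0])
    if len(A[0]) != m:
        raise ValueError("columns of A must equal rows of B")
    # C[i][j] = sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# A is 2x3 and B is 3x2, so the product is 2x2.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))   # [[58, 64], [139, 154]]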
Coppersmith–Winograd Algorithm
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition, and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network). Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes time on the order of n^3 field operations to multiply two n \times n matrices over that field (\Theta(n^3) in big O notation). Better asymptotic bounds on the time required to multiply matrices have been known since Strassen's algorithm in the 1960s, but the optimal time (that is, ...
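As a hedged sketch (not the Coppersmith–Winograd algorithm itself, which is far more involved, but the earlier divide-and-conquer idea of Strassen named in the excerpt), the following Python code multiplies two n \times n matrices, with n a power of two, using seven recursive half-size products instead of eight.

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h x h blocks.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Strassen's seven products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine into the four blocks of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen(A, B))   # [[19, 22], [43, 50]]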
Locally Linear Graph
In graph theory, a locally linear graph is an undirected graph in which every edge belongs to exactly one triangle. Equivalently, for each vertex of the graph, its neighbors are each adjacent to exactly one other neighbor. That is, locally (from the point of view of any one vertex) the rest of the graph looks like a perfect matching. Locally linear graphs have also been called locally matched graphs. More technically, the triangles of any locally linear graph form the hyperedges of a triangle-free 3-uniform linear hypergraph, and they form the blocks of certain partial Steiner triple systems; and the locally linear graphs are exactly the Gaifman graphs of these hypergraphs or partial Steiner systems. Many constructions for locally linear graphs are known. Examples of locally linear graphs include the triangular cactus graphs, the line graphs of 3-regular triangle-free graphs, and the Cartesian products of sma ...
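As an illustrative sketch (not from the article; the adjacency-dictionary representation is chosen here), the following Python function tests whether an undirected graph is locally linear by checking that the endpoints of every edge have exactly one common neighbor, i.e. that every edge lies in exactly one triangle.

def is_locally_linear(adj):
    """adj maps each vertex to the set of its neighbors (undirected graph)."""
    for u in adj:
        for v in adj[u]:
            if u < v:                      # consider each edge once
                common = adj[u] & adj[v]   # vertices forming a triangle with uv
                if len(common) != 1:
                    return False
    return True

# The triangle K3 is locally linear; the complete graph K4 is not,
# because each of its edges lies in two triangles.
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(is_locally_linear(K3), is_locally_linear(K4))   # True False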
Ruzsa–Szemerédi Problem
In combinatorial mathematics and extremal graph theory, the Ruzsa–Szemerédi problem or (6,3)-problem asks for the maximum number of edges in a graph in which every edge belongs to a unique triangle. Equivalently, it asks for the maximum number of edges in a balanced bipartite graph whose edges can be partitioned into a linear number of induced matchings, or the maximum number of triples one can choose from n points so that every six points contain at most two triples. The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who first proved that its answer is smaller than n^2 by a slowly growing (but still unknown) factor.

Equivalence between formulations

The following questions all have answers that are asymptotically equivalent: they differ by at most constant factors from each other.

*What is the maximum possible number of edges in a graph with n vertices in which every edge belongs to a unique triangle? The graphs with this property are called locally linear graphs ...
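As a hedged sketch of the standard connection to progression-free sets (with the concrete parameters chosen here), the following Python code builds a tripartite graph with parts \{0,\dots,n-1\}, \{0,\dots,2n-1\}, \{0,\dots,3n-1\}, adding the triangle on x, x+s, x+2s for every x and every s in a Salem–Spencer set S; because S contains no three-term progression, these are the only triangles, so each of the 3n|S| edges lies in a unique triangle.

from itertools import combinations

def build_rs_graph(n, S):
    """Tripartite construction: parts X = [0, n), Y = [0, 2n), Z = [0, 3n)."""
    adj = {}
    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for x in range(n):
        for s in S:
            add_edge(('X', x), ('Y', x + s))          # x -- x+s
            add_edge(('Y', x + s), ('Z', x + 2 * s))  # x+s -- x+2s
            add_edge(('X', x), ('Z', x + 2 * s))      # x -- x+2s
    return adj

def every_edge_in_unique_triangle(adj):
    return all(len(adj[u] & adj[v]) == 1 for u in adj for v in adj[u])

S = [1, 2, 4, 5]          # a 3-AP-free subset of {1, ..., 5}
G = build_rs_graph(5, S)
# 3 * n * |S| = 60 edges, each lying in exactly one triangle
print(sum(len(nbrs) for nbrs in G.values()) // 2, every_edge_in_unique_triangle(G))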
Linear Programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the po ...
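As a minimal sketch (not from the article, and assuming SciPy is available), the following Python code solves a small linear program, maximizing x + 2y subject to x + y \le 4, x + 3y \le 6 and x, y \ge 0, by passing the negated objective to scipy.optimize.linprog, which minimizes by convention.

from scipy.optimize import linprog

# Maximize x + 2y  <=>  minimize -(x + 2y)
c = [-1, -2]
A_ub = [[1, 1],    # x +  y <= 4
        [1, 3]]    # x + 3y <= 6
b_ub = [4, 6]
bounds = [(0, None), (0, None)]   # x >= 0, y >= 0

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x, -result.fun)      # optimal vertex (3, 1) and maximum value 5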
Branch-and-bound
Branch and bound (BB, B&B, or BnB) is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores ''branches'' of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated ''bounds'' on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. The algorithm depends on efficient estimation of the lower and u ...
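As a hedged sketch (the knapsack example is chosen here, not taken from the excerpt), the following Python code applies branch and bound to a small 0/1 knapsack instance: it branches on whether to take each item and prunes any subtree whose optimistic fractional-relaxation bound cannot improve on the best solution found so far.

def knapsack_branch_and_bound(values, weights, capacity):
    """Return the best total value of a 0/1 knapsack using branch and bound."""
    # Sort items by value density so the fractional bound is easy to compute.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)

    def fractional_bound(i, value, room):
        # Optimistic bound: fill remaining room greedily, allowing a fractional item.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    best = 0
    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or fractional_bound(i, value, room) <= best:
            return                               # prune: bound cannot beat the incumbent
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)   # branch: take item i
        branch(i + 1, value, room)               # branch: skip item i
    branch(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))   # 220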
Leo Moser
Leo Moser (11 April 1921, Vienna – 9 February 1970, Edmonton) was an Austrian-Canadian mathematician, best known for his polygon notation. A native of Vienna, Leo Moser immigrated with his parents to Canada at the age of three. He received his Bachelor of Science degree from the University of Manitoba in 1943, and a Master of Science from the University of Toronto in 1945. After two years of teaching he went to the University of North Carolina to complete a PhD, supervised by Alfred Brauer. There, in 1950, he began suffering recurrent heart problems. He took a position at Texas Technical College for one year, and joined the faculty of the University of Alberta in 1951, where he remained until his death at the age of 48. In 1966, Moser posed the question "What is the region of smallest area which will accommodate every planar arc of length one?" W. Moser, G. Blind, V. Klee, C. Rousseau, J. Goodman, B. Monson, J. Wetzel, L. M. Kelly, G. Purdy, and J. Wilker, Fifth edition, ''Pro ...
Ternary Numeral System
A ternary numeral system (also called base 3 or trinary) has three as its base (radix). Analogous to a bit, a ternary digit is a trit (trinary digit). One trit is equivalent to log2 3 (about 1.58496) bits of information. Although ''ternary'' most often refers to a system in which the three digits are all non-negative numbers, specifically 0, 1, and 2, the adjective also lends its name to the balanced ternary system, comprising the digits −1, 0 and +1, used in comparison logic and ternary computers.

Comparison to other bases

Representations of integer numbers in ternary do not get uncomfortably lengthy as quickly as in binary. For example, decimal 365 or senary 1405 corresponds to binary 101101101 (nine bits) and to ternary 111112 (six digits). However, they are still far less compact than the corresponding representations in bases such as decimal – see below for a compact way to codi ...
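As a small sketch (not from the article), the following Python function converts a non-negative integer to its ternary digit string by repeated division by three, reproducing the example above.

def to_ternary(n):
    """Return the base-3 representation of a non-negative integer as a string."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 3)   # peel off the least significant ternary digit
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_ternary(365))   # '111112': six ternary digits
print(bin(365))          # '0b101101101': nine bits in binary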
Thomas Bloom
Thomas F. Bloom is a mathematician who is a Royal Society University Research Fellow at the University of Manchester. He works in arithmetic combinatorics, a field at the intersection of number theory, combinatorics, ergodic theory and harmonic analysis, and in analytic number theory.

Education and career

Bloom did his undergraduate degree in Mathematics and Philosophy at Merton College, Oxford. He then went on to do his PhD in mathematics at the University of Bristol under the supervision of Trevor Wooley. After finishing his PhD, he was a Heilbronn Research Fellow at the University of Bristol. In 2018, he became a postdoctoral research fellow at the University of Cambridge with Timothy Gowers. In 2021, he joined the University of Oxford as a Research Fellow. Then, in 2024, he moved to the University of Manchester, where he als ...