Theil Index
The Theil index is a statistic primarily used to measure economic inequality and other economic phenomena, though it has also been used to measure racial segregation. The Theil index T is the same as redundancy in information theory, which is the maximum possible entropy of the data minus the observed entropy. It is a special case of the generalized entropy index. It can be viewed as a measure of redundancy, lack of diversity, isolation, segregation, inequality, non-randomness, and compressibility. It was proposed by the Dutch econometrician Henri Theil (1924–2000) at the Erasmus University Rotterdam. Theil himself said (1967): "The (Theil) index can be interpreted as the expected information content of the indirect message which transforms the population shares as prior probabilities into the income shares as posterior probabilities." Amartya Sen noted, "But the fact remains that the Theil index is an arbitrary formula, and the average of the logarithms of the reciprocals …
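For concreteness, here is a minimal Python sketch (not from the article; the function name is our own) computing the Theil T index as the mean of (y_i/\mu)\ln(y_i/\mu) over incomes, which equals the maximum possible entropy \ln N minus the observed entropy of the income shares:

```python
import math

def theil_t(incomes):
    """Theil T index: mean of (y/mu) * ln(y/mu) over all incomes.

    Equivalently, ln(N) minus the Shannon entropy of the income shares.
    """
    n = len(incomes)
    mu = sum(incomes) / n
    return sum((y / mu) * math.log(y / mu) for y in incomes) / n

# Perfect equality gives 0; concentration pushes T toward ln(N).
print(theil_t([10, 10, 10, 10]))  # 0.0
print(theil_t([1, 1, 1, 97]))     # ~1.22, approaching ln(4) ~ 1.386
```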
Polygamma Function
In mathematics, the polygamma function of order m is a meromorphic function on the complex numbers \mathbb{C} defined as the (m+1)th derivative of the logarithm of the gamma function: \psi^{(m)}(z) := \frac{\mathrm{d}^m}{\mathrm{d}z^m} \psi(z) = \frac{\mathrm{d}^{m+1}}{\mathrm{d}z^{m+1}} \ln\Gamma(z). Thus \psi^{(0)}(z) = \psi(z) = \frac{\Gamma'(z)}{\Gamma(z)} holds, where \psi(z) is the digamma function and \Gamma(z) is the gamma function. They are holomorphic on \mathbb{C} \setminus \mathbb{Z}_{\le 0}. At all the nonpositive integers these polygamma functions have a pole of order m+1. The function \psi^{(1)}(z) is sometimes called the trigamma function. Integral representation: When m > 0 and \operatorname{Re}(z) > 0, the polygamma function equals \psi^{(m)}(z) = (-1)^{m+1}\int_0^\infty \frac{t^m e^{-zt}}{1-e^{-t}}\,\mathrm{d}t = -\int_0^1 \frac{t^{z-1}}{1-t}(\ln t)^m\,\mathrm{d}t = (-1)^{m+1}\,m!\,\zeta(m+1,z), where \zeta(s,q) is the Hurwitz zeta function. This expresses the polygamma function as the Laplace transform of (-1)^{m+1} t^m/(1-e^{-t}). It follows from Bernstein's theorem on monotone functions that, for m > 0 and z real and non-negative, (-1)^{m+1}\psi^{(m)}(z) is a completely monotone function. Setting m = 0 in the above …
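The identity \psi^{(m)}(z) = (-1)^{m+1}\,m!\,\zeta(m+1,z) can be spot-checked numerically; a small sketch using SciPy's polygamma and Hurwitz zeta routines (the sample values m = 2, z = 1.5 are arbitrary choices of ours):

```python
import math
from scipy.special import polygamma, zeta

m, z = 2, 1.5
lhs = polygamma(m, z)                                   # psi^(2)(1.5)
rhs = (-1) ** (m + 1) * math.factorial(m) * zeta(m + 1, z)  # Hurwitz zeta form
print(lhs, rhs)  # both ~ -0.8287; the two expressions agree
```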
E (mathematical Constant)
The number e is a mathematical constant approximately equal to 2.71828 that is the base of the natural logarithm and of the exponential function. It is sometimes called Euler's number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler's constant, a different constant typically denoted \gamma. Alternatively, e can be called Napier's constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest. The number e is of great importance in mathematics, alongside 0, 1, \pi, and i. All five appear in one formulation of Euler's identity e^{i\pi}+1=0 and play important and recurring roles across mathematics. Like the constant \pi, e is irrational, meaning that it cannot be represented as a ratio of integers, and moreover it is transcendental, meaning that it is not a root of any non-zero polynomial with rational coefficients …
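Bernoulli's compound-interest limit (1 + 1/n)^n → e is easy to check numerically, alongside the standard factorial series for e (the series is not stated in the excerpt; the iteration counts below are arbitrary):

```python
import math

# Series: e = sum of 1/k!; converges very quickly.
e_series = sum(1 / math.factorial(k) for k in range(20))

# Bernoulli's compound-interest limit: (1 + 1/n)^n -> e as n grows.
e_limit = (1 + 1 / 1_000_000) ** 1_000_000

print(e_series)  # 2.718281828459045
print(e_limit)   # ~2.7182804..., converging slowly from below
print(math.e)    # 2.718281828459045
```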
Natural Logarithm
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, which is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is generally written as \ln x, \log_e x, or sometimes, if the base e is implicit, simply \log x. Parentheses are sometimes added for clarity, giving \ln(x), \log_e(x), or \log(x). This is done particularly when the argument to the logarithm is not a single symbol, so as to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. For example, \ln 7.5 is 2.0149…, because e^{2.0149…} = 7.5. The natural logarithm of e itself, \ln e, is 1, because e^1 = e, while the natural logarithm of 1 is 0, since e^0 = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a (with the area being negative when 0 < a < 1). The simplicity of this definition, which is matched in many other formulas …
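The area definition lends itself to a direct numerical check; a minimal sketch (the function name and the choice of midpoint rule are ours):

```python
import math

def ln_by_area(a, steps=100_000):
    """Approximate ln(a) as the signed area under y = 1/x from 1 to a,
    using the midpoint rule. The area comes out negative when 0 < a < 1,
    because the step h is then negative."""
    h = (a - 1) / steps
    return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

print(ln_by_area(7.5))  # ~2.0149
print(math.log(7.5))    # 2.0149030205422647
print(ln_by_area(0.5))  # ~-0.6931 (negative area)
```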
Binary Logarithm
In mathematics, the binary logarithm (\log_2 n) is the power to which the number 2 must be raised to obtain the value n. That is, for any real number x, x=\log_2 n \Longleftrightarrow 2^x=n. For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, and the binary logarithm of 32 is 5. The binary logarithm is the logarithm to the base 2 and is the inverse function of the power of two function. There are several alternatives to the \log_2 notation for the binary logarithm; see the Notation section below. Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory …
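Both applications mentioned above, binary representation length and octave counting, are one-liners in Python; a small illustration (the example frequencies 440 Hz and 880 Hz are our own choice):

```python
import math

# Bits needed to represent a positive integer n in binary:
# floor(log2(n)) + 1, which Python also exposes as int.bit_length().
for n in (1, 2, 4, 32, 1000):
    print(n, math.floor(math.log2(n)) + 1, n.bit_length())

# Euler's music-theory application: octaves between two frequencies.
print(math.log2(880 / 440))  # 1.0 -> the tones differ by one octave
```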
Logarithmic Function
Logarithmic can refer to:
* Logarithm, a transcendental function in mathematics
* Logarithmic scale, the use of the logarithmic function to describe measurements
* Logarithmic spiral
* Logarithmic growth
* Logarithmic distribution, a discrete probability distribution
* Natural logarithm, the logarithm to the base of the mathematical constant e …
Boltzmann Constant
The Boltzmann constant (k_B or k) is the proportionality factor that relates the average relative thermal energy of particles in an ideal gas with the thermodynamic temperature of the gas. It occurs in the definitions of the kelvin (K) and the molar gas constant, in Planck's law of black-body radiation and Boltzmann's entropy formula, and is used in calculating thermal noise (Johnson–Nyquist noise) in resistors. The Boltzmann constant has dimensions of energy divided by temperature, the same as entropy and heat capacity. It is named after the Austrian scientist Ludwig Boltzmann. As part of the 2019 revision of the SI, the Boltzmann constant is one of the seven "defining constants" that have been defined so as to have exact finite decimal values in SI units. They are used in various combinations to define the seven SI base units. The Boltzmann constant is defined to be exactly 1.380649×10⁻²³ joules per kelvin, with the effect of defining the SI unit kelvin …
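As a numeric illustration: the mean translational kinetic energy per particle of a monatomic ideal gas is (3/2)kT (a standard result, not stated in the excerpt); the room-temperature value below is our own example input:

```python
# Exact SI value fixed by the 2019 redefinition.
K_B = 1.380649e-23  # joules per kelvin

T = 298.15  # room temperature, in kelvin
# Mean translational kinetic energy per ideal-gas particle: (3/2) k T.
mean_energy = 1.5 * K_B * T
print(f"{mean_energy:.3e} J")  # ~6.175e-21 J
```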
Computer File
A computer file is a resource for recording data on a computer storage device, primarily identified by its filename. Just as words can be written on paper, so too can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet. Different types of computer files are designed for different purposes. A file may be designed to store a written message, a document, a spreadsheet, an image, a video, a program, or any of a wide variety of other kinds of data. Certain files can store multiple data types at once. By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times. Files are typically organized in a file system …
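As an illustration of the open/read/change/save/close cycle described above, a minimal Python sketch (the file name notes.txt and its contents are hypothetical):

```python
from pathlib import Path

path = Path("notes.txt")

# Create/save: write text to the file (creates it, or overwrites it).
path.write_text("first draft\n", encoding="utf-8")

# Open, read, change, save, close: the with-block closes the file for us.
with path.open("r+", encoding="utf-8") as f:
    contents = f.read()
    f.seek(0)
    f.write(contents.replace("first", "second"))
    f.truncate()

print(path.read_text(encoding="utf-8"))  # second draft
```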
Information Entropy
In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable X, which may be any member x within the set \mathcal{X} and is distributed according to p\colon \mathcal{X}\to[0,1], the entropy is \Eta(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x), where \Sigma denotes the sum over the variable's possible values. The choice of base for \log, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" (nats), and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was …
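A short sketch of the definition, assuming a plain list of probabilities (the helper name entropy is ours), showing how the base of the logarithm selects the unit:

```python
import math

def entropy(probs, base=2):
    """H(X) = -sum p(x) log p(x); base 2 -> bits, e -> nats, 10 -> hartleys.
    Terms with p = 0 contribute nothing and are skipped."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit (fair coin)
print(entropy([0.9, 0.1]))   # ~0.469 bits (biased coin, less uncertainty)
print(entropy([0.25] * 4))   # 2.0 bits (uniform over 4 outcomes)
```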
Claude Shannon
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist, cryptographer and inventor known as the "father of information theory" and the man who laid the foundations of the Information Age. Shannon was the first to describe the use of Boolean algebra (essential to all digital electronic circuits) and helped found artificial intelligence (AI). Roboticist Rodney Brooks declared Shannon the 20th-century engineer who contributed the most to 21st-century technologies, and mathematician Solomon W. Golomb described his intellectual achievement as "one of the greatest of the twentieth century". At the University of Michigan, Shannon earned dual degrees, graduating with a Bachelor of Science in electrical engineering and another in mathematics, both in 1936. As a 21-year-old master's student in electrical engineering at MIT, he wrote the thesis "A Symbolic Analysis of Relay and Switching Circuits", which demonstrated that electric …
Generalized Entropy Index
The generalized entropy index has been proposed as a measure of income inequality in a population. It is derived from information theory as a measure of redundancy in data. In information theory a measure of redundancy can be interpreted as non-randomness or data compression; thus this interpretation also applies to this index. In addition, interpretation of biodiversity as entropy has also been proposed, leading to uses of generalized entropy to quantify biodiversity. Formula: The formula for general entropy for real values of \alpha is GE(\alpha) = \begin{cases} \frac{1}{N\alpha(\alpha-1)}\sum_{i=1}^N\left[\left(\frac{y_i}{\overline{y}}\right)^\alpha - 1\right], & \alpha \ne 0, 1,\\ \frac{1}{N}\sum_{i=1}^N\frac{y_i}{\overline{y}}\ln\frac{y_i}{\overline{y}}, & \alpha=1,\\ -\frac{1}{N}\sum_{i=1}^N\ln\frac{y_i}{\overline{y}}, & \alpha=0, \end{cases} where N is the number of cases (e.g., households or families), y_i is the income for case i, \overline{y} is the mean income, and \alpha is a parameter which regulates the weight given to distances between incomes at different parts of the income distribution. For large \alpha the index is especially sensitive …
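A direct transcription of the piecewise formula into Python (the function name and sample incomes are our own):

```python
import math

def generalized_entropy(incomes, alpha):
    """GE(alpha) per the piecewise formula above; alpha = 0 gives the
    mean log deviation and alpha = 1 gives the Theil index."""
    n = len(incomes)
    mean = sum(incomes) / n
    ratios = [y / mean for y in incomes]
    if alpha == 0:
        return -sum(math.log(r) for r in ratios) / n
    if alpha == 1:
        return sum(r * math.log(r) for r in ratios) / n
    return sum(r ** alpha - 1 for r in ratios) / (n * alpha * (alpha - 1))

incomes = [20_000, 30_000, 50_000, 100_000]
for a in (0, 1, 2):
    print(a, generalized_entropy(incomes, a))
```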
Atkinson Index
The Atkinson index (also known as the Atkinson measure or Atkinson inequality measure) is a measure of income inequality developed by British economist Anthony Barnes Atkinson. The measure is useful in determining which end of the distribution contributed most to the observed inequality. Definition: The Atkinson index is defined as A_\varepsilon(y_1,\ldots,y_N)= \begin{cases} 1-\frac{1}{\mu}\left(\frac{1}{N}\sum_{i=1}^{N}y_{i}^{1-\varepsilon}\right)^{\frac{1}{1-\varepsilon}} & \text{for}\ 0 \leq \varepsilon \neq 1 \\ 1-\frac{1}{\mu}\left(\prod_{i=1}^{N}y_{i}\right)^{\frac{1}{N}} & \text{for}\ \varepsilon=1 \\ 1-\frac{1}{\mu}\min\left(y_1,\ldots,y_N\right) & \text{for}\ \varepsilon=+\infty \end{cases} where y_i is individual income (i = 1, 2, ..., N) and \mu is the mean income. In other words, the Atkinson index is the complement to 1 of the ratio of the Hölder generalized mean of exponent 1−ε to the arithmetic mean of the incomes (where as usual the generalized mean of exponent 0 is interpreted as the geometric mean). Interpretation: The index can be turned into a normative measure by imposing a coefficient …
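A sketch of the definition for finite ε, treating ε = 1 via the geometric mean as the text notes (the function name and sample data are our own):

```python
import math

def atkinson(incomes, epsilon):
    """Atkinson index: 1 minus the ratio of the generalized mean of
    exponent (1 - epsilon) to the arithmetic mean of the incomes."""
    n = len(incomes)
    mu = sum(incomes) / n
    if epsilon == 1:
        # Exponent 0: the generalized mean becomes the geometric mean.
        geo = math.exp(sum(math.log(y) for y in incomes) / n)
        return 1 - geo / mu
    p = 1 - epsilon
    return 1 - (sum(y ** p for y in incomes) / n) ** (1 / p) / mu

incomes = [20_000, 30_000, 50_000, 100_000]
for eps in (0.5, 1, 2):
    print(eps, atkinson(incomes, eps))  # higher eps weights the poor end more
```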