Strength Reduction
In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations. The classic example of strength reduction converts ''strong'' multiplications inside a loop into ''weaker'' additions – something that frequently occurs in array addressing. Examples of strength reduction include replacing a multiplication within a loop with an addition and replacing exponentiation within a loop with a multiplication (both are sketched in the example below).

Code analysis

Most of a program's execution time is typically spent in a small section of code (called a hot spot), and that code is often inside a loop that is executed over and over. A compiler uses methods to identify loops and recognize the characteristics of register values within those loops. For strength reduction, the compiler is interested in:
*Loop invariants: the values which do not change within the body of a loop.
*Induction variables: the values which are being iterated each time through the loop.
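The following is a minimal C sketch, not taken from the article, that writes both transformations out by hand; a compiler would normally apply them automatically, and the constants (17, 1.5, N) are arbitrary illustration choices.

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        int strong[N], weak[N];

        /* Before: a multiplication on every iteration. */
        for (int i = 0; i < N; ++i)
            strong[i] = 17 * i;

        /* After: the multiplication is reduced to a running addition. */
        int j = 0;
        for (int i = 0; i < N; ++i) {
            weak[i] = j;
            j += 17;
        }

        /* Exponentiation reduced to multiplication: p tracks x^i as a
           running product instead of recomputing the power each time. */
        double x = 1.5, p = 1.0;
        for (int i = 0; i < N; ++i) {
            printf("%3d %3d %g\n", strong[i], weak[i], p);
            p *= x;   /* p equals x^i at the top of iteration i */
        }
        return 0;
    }

Here weak[i] always equals strong[i], and p always equals x raised to the power i, so each loop computes the same values using cheaper operations.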

Compiler Construction
In computing, a compiler is a computer program that translates computer code written in one programming language (the ''source'' language) into another language (the ''target'' language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program (Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman, second edition, 2007). There are many different types of compilers which produce output in different useful forms. A ''cross-compiler'' produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A ''bootstrap compiler'' is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.

Related software ...

Logical Shift
In computer science, a logical shift is a bitwise operation that shifts all the bits of its operand. The two base variants are the logical left shift and the logical right shift. These are further parameterized by the number of bit positions a given value is to be shifted, such as ''shift left by 1'' or ''shift right by n''. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its significand (mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled, usually with zeros, and possibly ones (contrast with a circular shift). A logical shift is often used when its operand is being treated as a sequence of bits instead of as a number. Logical shifts can be useful as efficient ways to perform multiplication or division of unsigned integers by powers of two. Shifting left by ''n'' bits on a signed or unsigned binary number has the effect of multiplying it by 2^n ...
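As a minimal C sketch (the value 13 is an arbitrary choice), the shift operators applied to an unsigned operand behave as logical shifts:

    #include <stdio.h>

    int main(void)
    {
        unsigned int x = 13;   /* binary 1101 */

        /* Logical left shift: vacated low bits are zero-filled;
           equivalent to multiplying by 2^n (modulo wraparound). */
        printf("%u\n", x << 2);   /* 52, i.e. 13 * 4 */

        /* Logical right shift: on an unsigned operand, C's >> zero-fills
           the vacated high bits; equivalent to dividing by 2^n,
           rounding toward zero. */
        printf("%u\n", x >> 2);   /* 3, i.e. 13 / 4 */

        return 0;
    }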

Hacker's Delight
''Hacker's Delight'' is a software algorithm book by Henry S. Warren, Jr., first published in 2002. It presents fast bit-level and low-level arithmetic algorithms for common tasks such as counting bits or improving the speed of division by using multiplication.

Background

The author, an IBM researcher working on systems ranging from the IBM 704 to the PowerPC, collected what he called "programming tricks" over the course of his career. These tricks concern efficient low-level manipulation of bit strings and numbers. According to the book's foreword by Guy L. Steele, the target audience includes compiler writers and people writing high-performance code.

Summary

Programming examples are written in C and assembler for a RISC architecture similar, but not identical, to PowerPC. Algorithms are given as formulas for any number of bits, the examples usually for 32 bits. Apart from the introduction, chapters are independent of each other, each focusing on a particular subject ...
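To give a flavor of the kind of bit-level trick the book collects, here is the classic branch-free population count for 32-bit words; this is a standard technique of the sort the book presents, in a common formulation rather than a quotation from it:

    #include <stdint.h>
    #include <stdio.h>

    /* Count the 1-bits in x without a loop, by summing bit-pairs,
       nibbles, and bytes in parallel (a SWAR technique). */
    static uint32_t popcount32(uint32_t x)
    {
        x = x - ((x >> 1) & 0x55555555u);                 /* 2-bit sums */
        x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* 4-bit sums */
        x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 /* 8-bit sums */
        return (x * 0x01010101u) >> 24;     /* add the four byte sums */
    }

    int main(void)
    {
        printf("%u\n", popcount32(0xDEADBEEFu));  /* prints 24 */
        return 0;
    }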

Fast Inverse Square Root
Fast inverse square root, sometimes referred to as Fast InvSqrt() or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates \frac{1}{\sqrt{x}}, the reciprocal (or multiplicative inverse) of the square root of a 32-bit floating-point number x in IEEE 754 single-precision (binary32) floating-point format. The algorithm is best known for its implementation in 1999 in ''Quake III Arena'', a first-person shooter video game heavily based on 3D graphics. With subsequent hardware advancements, especially the x86 SSE instruction rsqrtss, this algorithm is not generally the best choice for modern computers, though it remains an interesting historical example. The algorithm accepts a 32-bit floating-point number as the input and stores a halved value for later use. Then, treating the bits representing the floating-point number as a 32-bit integer, a logical shift right by one bit is performed and the result is subtracted from the constant 0x5F3759DF ...
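The widely circulated Quake III Arena routine implements exactly the steps described above; the restatement below uses memcpy instead of the original pointer casts so the bit reinterpretation is well-defined in standard C, and omits the original's comments:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static float Q_rsqrt(float number)
    {
        const float x2 = number * 0.5f;   /* halved input, kept for later */
        float y = number;
        uint32_t i;

        memcpy(&i, &y, sizeof i);         /* float bits viewed as integer */
        i = 0x5f3759df - (i >> 1);        /* constant minus logical shift */
        memcpy(&y, &i, sizeof y);         /* back to float: first guess */

        y = y * (1.5f - x2 * y * y);      /* one Newton-Raphson step */
        return y;
    }

    int main(void)
    {
        printf("%f\n", Q_rsqrt(4.0f));    /* approximately 0.5 */
        return 0;
    }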

Recursion (computer Science)
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure) do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for. Repeatedly calling a function from within itself ...
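A minimal C sketch of the idea, using the textbook factorial example (not taken from the article itself):

    #include <stdio.h>

    /* A function that calls itself: each call reduces the problem (n)
       to a smaller instance (n - 1) until the base case is reached. */
    static unsigned long factorial(unsigned int n)
    {
        if (n == 0)                  /* base case: 0! = 1 */
            return 1;
        return n * factorial(n - 1); /* recursive case */
    }

    int main(void)
    {
        printf("%lu\n", factorial(10)); /* prints 3628800 */
        return 0;
    }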

Declarative Programming Language
In computer science, declarative programming is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow. Many languages that apply this style attempt to minimize or eliminate side effects by describing ''what'' the program must accomplish in terms of the problem domain, rather than describing ''how'' to accomplish it as a sequence of the programming language primitives (the ''how'' being left up to the language's implementation). This is in contrast with imperative programming, which implements algorithms in explicit steps. Declarative programming often considers programs as theories of a formal logic, and computations as deductions in that logic space. Declarative programming may greatly simplify writing parallel programs. Common declarative languages include those of database query languages (e.g., SQL, XQuery), regular expressions, logic programming (e.g. ...

Procedural Programming Language
Procedural programming is a programming paradigm, classified as imperative programming, that involves implementing the behavior of a computer program as procedures (a.k.a. functions, subroutines) that call each other. The resulting program is a series of steps that forms a hierarchy of calls to its constituent procedures. The first major procedural programming languages appeared circa 1957–1964, including Fortran, ALGOL, COBOL, PL/I and BASIC. Pascal and C were published circa 1970–1972. Computer processors provide hardware support for procedural programming through a stack register and instructions for calling procedures and returning from them. Hardware support for other types of programming is possible, like Lisp machines or Java processors, but no attempt was commercially successful.

Development practices

Certain software development practices are often employed with procedural programming in order to enhance quality and lower development and maintenance costs. Modularity and ...
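A minimal C sketch of the style (names and data are illustrative only): the program is a hierarchy of procedure calls rooted at main().

    #include <stdio.h>

    static double square(double x)
    {
        return x * x;
    }

    static double sum_of_squares(const double *v, int n)
    {
        double total = 0.0;
        for (int i = 0; i < n; ++i)
            total += square(v[i]);   /* one procedure calling another */
        return total;
    }

    int main(void)
    {
        double v[] = { 1.0, 2.0, 3.0 };
        printf("%g\n", sum_of_squares(v, 3));  /* prints 14 */
        return 0;
    }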

Induction Variable
In computer science, an induction variable is a variable that gets increased or decreased by a fixed amount on every iteration of a loop or is a linear function of another induction variable. For example, in the following loop, i and j are induction variables:

    for (i = 0; i < 10; ++i) {
        j = 17 * i;
    }


Application to strength reduction

A common compiler optimization is to recognize the existence of induction variables and replace them with simpler computations; for example, the code above could be rewritten by the compiler as follows, on the assumption that the addition of a constant will be cheaper than a multiplication.

    j = -17;
    for (i = 0; i < 10; ++i) {
        j = j + 17;
    }

This optimization is a special case of strength reduction.

Arithmetic Shift
In computer programming, an arithmetic shift is a shift operator, sometimes termed a signed shift (though it is not restricted to signed operands). The two basic types are the arithmetic left shift and the arithmetic right shift. For binary numbers it is a bitwise operation that shifts all of the bits of its operand; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in. Instead of being filled with all 0s, as in a logical shift, when shifting to the right the leftmost bit (usually the sign bit in signed integer representations) is replicated to fill in all the vacant positions (this is a kind of sign extension). Some authors prefer the terms ''sticky right-shift'' and ''zero-fill right-shift'' for arithmetic and logical shifts respectively. Arithmetic shifts can be useful as efficient ways to perform multiplication or division of signed integers by powers of two. Shifting left by ''n'' bits on a signed or unsigned binary number has the effect of multiplying it by 2^n ...
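A minimal C sketch contrasting the two right shifts on the same bit pattern (the value -52 is an arbitrary choice; note that C leaves >> on negative values implementation-defined, though common compilers implement it as an arithmetic shift):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  s = -52;           /* sign bit set */
        uint32_t u = (uint32_t)s;   /* same bit pattern, unsigned */

        /* Arithmetic right shift: the sign bit is replicated into the
           vacated positions, preserving the sign. */
        printf("%d\n", (int)(s >> 2));         /* typically -13 */

        /* Logical right shift: an unsigned operand is zero-filled. */
        printf("%u\n", (unsigned)(u >> 2));    /* 1073741811 */

        return 0;
    }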

Compiler Optimization
An optimizing compiler is a compiler designed to generate code that is optimized in aspects such as minimizing program execution time, memory usage, storage size, and power consumption. Optimization is generally implemented as a sequence of optimizing transformations, a.k.a. compiler optimizations – algorithms that transform code to produce semantically equivalent code optimized for some aspect. Optimization is limited by a number of factors. Theoretical analysis indicates that some optimization problems are NP-complete, or even undecidable. Also, producing perfectly ''optimal'' code is not possible, since optimizing for one aspect often degrades performance for another. Optimization is a collection of heuristic methods for improving resource usage in typical programs.

Categorization

Local vs. global scope

Scope describes how much of the input code is considered when applying optimizations. Local scope optimizations use information local to a basic block. Since basic blocks contain ...

Identity (mathematics)
In mathematics, an identity is an equality relating one mathematical expression ''A'' to another mathematical expression ''B'', such that ''A'' and ''B'' (which might contain some variables) produce the same value for all values of the variables within a certain domain of discourse. In other words, ''A'' = ''B'' is an identity if ''A'' and ''B'' define the same functions, and an identity is an equality between functions that are differently defined. For example, (a+b)^2 = a^2 + 2ab + b^2 and \cos^2\theta + \sin^2\theta = 1 are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign. Formally, an identity is a universally quantified equality.

Common identities

Algebraic identities

Certain identities, such as a+0=a and a+(-a)=0, form the basis of algebra, while other identities, such as (a+b)^2 = a^2 + 2ab + b^2 and a^2 - b^2 = (a+b)(a-b), ...
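A one-line check that the last of these is an identity: expanding the right-hand side gives

    (a+b)(a-b) = a^2 - ab + ba - b^2 = a^2 - b^2,

which holds for all values of a and b.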

Peephole Optimization
Peephole optimization is an optimization technique performed on a small set of compiler-generated instructions, known as a peephole or window, that involves replacing the instructions with a logically equivalent set that has better performance. For example:
* Instead of pushing a register onto the stack and then immediately popping the value back into the register, remove both instructions
* Instead of multiplying ''x'' by 2, do ''x'' + ''x''
* Instead of multiplying a floating point register by 8, add 3 to the floating point register's exponent
The term ''peephole optimization'' was introduced by William Marshall McKeeman in 1965.

Replacements

Peephole optimization replacements include but are not limited to (a toy sketch of such a pass follows the list):
* Null sequences – Delete useless operations
* Combine operations – Replace several operations with one equivalent
* Algebraic laws – Use algebraic laws to simplify or reorder instructions
* Special case instructions – Use instructions designed for special ...
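A toy sketch in C of the idea; the instruction set, opcode names, and the single rewrite rule are all invented for illustration. The pass scans a window over the instruction list and replaces a multiply by a power of two with a shift:

    #include <stdio.h>

    /* Toy instruction set, for illustration only. */
    typedef enum { MUL_IMM, SHL_IMM, ADD_IMM } Op;

    typedef struct {
        Op  op;
        int reg;   /* destination register */
        int imm;   /* immediate operand    */
    } Insn;

    /* One-instruction peephole: rewrite  MUL reg, 2^k  as  SHL reg, k. */
    static void peephole(Insn *code, int n)
    {
        for (int i = 0; i < n; ++i) {
            if (code[i].op == MUL_IMM && code[i].imm > 0 &&
                (code[i].imm & (code[i].imm - 1)) == 0) { /* power of two? */
                int k = 0;
                while ((1 << k) < code[i].imm)
                    ++k;
                code[i].op  = SHL_IMM;   /* strength-reduced replacement */
                code[i].imm = k;
            }
        }
    }

    int main(void)
    {
        Insn prog[] = { { MUL_IMM, 0, 8 }, { ADD_IMM, 0, 3 } };
        peephole(prog, 2);
        printf("op=%d imm=%d\n", prog[0].op, prog[0].imm); /* SHL_IMM, 3 */
        return 0;
    }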