In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. Memoization has also been used in other contexts (and for purposes other than speed gains), such as in simple mutually recursive descent parsing. Although related to caching, memoization refers to a specific case of this optimization, distinguishing it from forms of caching such as buffering or page replacement. In the context of some logic programming languages, memoization is also known as tabling.


Etymology

The term "memoization" was coined by Donald Michie in 1968 and is derived from the Latin word "memorandum" ("to be remembered"), usually truncated as "memo" in American English, and thus carries the meaning of "turning [the results of] a function into something to be remembered". While "memoization" might be confused with "memorization" (because they are etymological cognates), "memoization" has a specialized meaning in computing.


Overview

A memoized function "remembers" the results corresponding to some set of specific inputs. Subsequent calls with remembered inputs return the remembered result rather than recalculating it, thus eliminating the primary cost of a call with given parameters from all but the first call made to the function with those parameters. The set of remembered associations may be a fixed-size set controlled by a replacement algorithm or a fixed set, depending on the nature of the function and its use. A function can only be memoized if it is referentially transparent; that is, only if calling the function has exactly the same effect as replacing that function call with its return value. (Special case exceptions to this restriction exist, however.) While related to lookup tables, since memoization often uses such tables in its implementation, memoization populates its cache of results transparently on the fly, as needed, rather than in advance.

Memoization is a way to lower a function's ''time'' cost in exchange for ''space'' cost; that is, memoized functions become optimized for ''speed'' in exchange for a higher use of computer memory ''space''. The time/space "cost" of algorithms has a specific name in computing: ''computational complexity''. All functions have a computational complexity in ''time'' (i.e. they take time to execute) and in ''space''. Although a space–time tradeoff occurs (i.e., space used is speed gained), this differs from some other optimizations that involve time–space trade-offs, such as strength reduction, in that memoization is a run-time rather than compile-time optimization. Moreover, strength reduction potentially replaces a costly operation such as multiplication with a less costly operation such as addition, and the resulting savings can be highly machine-dependent (non-portable across machines), whereas memoization is a more machine-independent, cross-platform strategy.

Consider the following
pseudocode function to calculate the factorial of ''n'':

 function factorial (n is a non-negative integer)
     if n is 0 then
         return 1 [by the convention that 0! = 1]
     else
         return factorial(n – 1) times n [recursively invoke factorial with the parameter 1 less than n]
     end if
 end function

For every
integer ''n'' such that ''n'' ≥ 0, the final result of the function factorial is invariant; if invoked as ''x'' = factorial(3), the result is such that ''x'' will ''always'' be assigned the value 6. The non-memoized implementation above, given the nature of the recursive algorithm involved, would require ''n'' + 1 invocations of factorial to arrive at a result, and each of these invocations, in turn, has an associated cost in the time it takes the function to return the value computed. Depending on the machine, this cost might be the sum of:

# The cost to set up the functional call stack frame.
# The cost to compare ''n'' to 0.
# The cost to subtract 1 from ''n''.
# The cost to set up the recursive call stack frame. (As above.)
# The cost to multiply the result of the recursive call to factorial by ''n''.
# The cost to store the return result so that it may be used by the calling context.

In a non-memoized implementation, ''every'' top-level call to factorial includes the cumulative cost of steps 2 through 6 proportional to the initial value of ''n''. A memoized version of the factorial function follows:

 function factorial (n is a non-negative integer)
     if n is 0 then
         return 1 [by the convention that 0! = 1]
     else if n is in lookup-table then
         return lookup-table-value-for-n
     else
         let x = factorial(n – 1) times n [recursively invoke factorial with the parameter 1 less than n]
         store x in lookup-table in the nth slot [remember the result of n! for later]
         return x
     end if
 end function

In this particular example, if factorial is first invoked with 5, and then invoked later with any value less than or equal to five, those return values will also have been memoized, since factorial will have been called recursively with the values 5, 4, 3, 2, 1, and 0, and the return values for ''each'' of those will have been stored. If it is then called with a number greater than 5, such as 7, only 2 recursive calls will be made (7 and 6), and the value for 5! will have been stored from the previous call. In this way, memoization allows a function to become more time-efficient the more often it is called, thus resulting in eventual overall speed-up.
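
The memoized pseudocode above can be sketched in Python; the dictionary standing in for the lookup-table is an implementation choice for this sketch, not part of the original pseudocode:

```python
# A Python sketch of the memoized factorial pseudocode above.
# The dictionary lookup_table plays the role of the pseudocode's lookup-table.
lookup_table = {}

def factorial(n):
    if n == 0:
        return 1                  # by the convention that 0! = 1
    if n in lookup_table:
        return lookup_table[n]    # reuse the remembered result
    x = factorial(n - 1) * n      # recursively invoke factorial
    lookup_table[n] = x           # remember the result of n! for later
    return x
```

After factorial(5), the table holds entries for 1 through 5, so a later call such as factorial(7) performs only the new work for 6 and 7.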


Some other considerations


Functional programming

Memoization is heavily used in compilers for functional programming languages, which often use the call-by-name evaluation strategy. To avoid overhead with calculating argument values, compilers for these languages heavily use auxiliary functions called thunks to compute the argument values, and memoize these functions to avoid repeated calculations.
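
As a rough illustration (the helper names here are invented for this sketch, not taken from any compiler), a thunk can be modeled as a zero-argument function whose result is memoized so the underlying computation runs at most once:

```python
def make_thunk(compute):
    # Wrap a zero-argument computation so it runs at most once.
    # The one-element list acts as the memo cell; this is essentially
    # call-by-need built from call-by-name plus memoization.
    cell = []
    def force():
        if not cell:
            cell.append(compute())   # first force: do the real work
        return cell[0]               # later forces: reuse the result
    return force

evaluations = []
arg = make_thunk(lambda: evaluations.append("ran") or 40 + 2)
arg()   # computes the argument value
arg()   # returns the memoized value without recomputing
```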


Automatic memoization

While memoization may be added to functions ''internally'' and ''explicitly'' by a computer programmer in much the same way the above memoized version of factorial is implemented, referentially transparent functions may also be automatically memoized ''externally''. The techniques employed by Peter Norvig have application not only in Common Lisp (the language in which his paper demonstrated automatic memoization), but also in various other programming languages. Applications of automatic memoization have also been formally explored in the study of term rewriting and artificial intelligence
. In programming languages where functions are first-class objects (such as Lua, Python, or Perl), automatic memoization can be implemented by replacing (at run-time) a function with its calculated value once a value has been calculated for a given set of parameters. The function that does this value-for-function-object replacement can generically wrap any referentially transparent function. Consider the following pseudocode (where it is assumed that functions are first-class values):

 function memoized-call (F is a function object parameter)
     if F has no attached array ''values'' then
         allocate an associative array called ''values'';
         attach ''values'' to F;
     end if;

     if F.values[arguments] is empty then
         F.values[arguments] = F(arguments);
     end if;

     return F.values[arguments];
 end function

In order to call an automatically memoized version of factorial using the above strategy, rather than calling factorial directly, code invokes memoized-call(factorial(''n'')). Each such call first checks to see if a holder array has been allocated to store results, and if not, attaches that array. If no entry exists at the position values[arguments] (where the arguments are used as the key of the associative array), a ''real'' call is made to factorial with the supplied arguments. Finally, the entry in the array at the key position is returned to the caller.

The above strategy requires ''explicit'' wrapping at each call to a function that is to be memoized. In those languages that allow closures, memoization can be effected ''implicitly'' via a functor factory that returns a wrapped memoized function object in a decorator pattern.
In pseudocode, this can be expressed as follows:

 function construct-memoized-functor (F is a function object parameter)
     allocate a function object called ''memoized-version'';

     let memoized-version(arguments) be
         if ''self'' has no attached array ''values'' then [''self'' is a reference to this object]
             allocate an associative array called ''values'';
             attach ''values'' to ''self'';
         end if;

         if self.values[arguments] is empty then
             self.values[arguments] = F(arguments);
         end if;

         return self.values[arguments];
     end let;

     return ''memoized-version'';
 end function

Rather than call factorial, a new function object memfact is created as follows:

 memfact = construct-memoized-functor(factorial)

The above example assumes that the function factorial has already been defined ''before'' the call to construct-memoized-functor is made. From this point forward, memfact(''n'') is called whenever the factorial of ''n'' is desired.
In languages such as Lua, more sophisticated techniques exist which allow a function to be replaced by a new function with the same name, which would permit:

 factorial = construct-memoized-functor(factorial)

Essentially, such techniques involve attaching the ''original function object'' to the created functor and forwarding calls to the original function being memoized via an alias when a call to the actual function is required (to avoid endless recursion), as illustrated below:

 function construct-memoized-functor (F is a function object parameter)
     allocate a function object called ''memoized-version'';

     let ''memoized-version''(arguments) be
         if ''self'' has no attached array ''values'' then [''self'' is a reference to this object]
             allocate an associative array called ''values'';
             attach ''values'' to ''self'';

             allocate a new function object called ''alias'';
             attach ''alias'' to ''self''; [for later ability to invoke F indirectly]
             self.''alias'' = F;
         end if;

         if self.values[arguments] is empty then
             self.values[arguments] = self.''alias''(arguments); [not a direct call to F]
         end if;

         return self.values[arguments];
     end let;

     return ''memoized-version'';
 end function

(Note: Some of the steps shown above may be implicitly managed by the implementation language and are provided for illustration.)
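
A closure-based Python rendering of this functor factory (a sketch, not a fixed API; note that in Python the alias machinery is unnecessary, because rebinding the global name factorial makes the recursive calls inside the original function resolve to the memoized wrapper):

```python
def construct_memoized_functor(f):
    # `values` plays the role of the associative array attached to
    # memoized-version in the pseudocode above.
    values = {}
    def memoized_version(*args):
        if args not in values:
            values[args] = f(*args)   # a "real" call to the wrapped function
        return values[args]
    return memoized_version

def factorial(n):
    return 1 if n == 0 else factorial(n - 1) * n

# Replace the original name, as in the Lua technique described above.
# Recursive calls look up the global name `factorial` at call time,
# so they, too, go through the memoized wrapper.
factorial = construct_memoized_functor(factorial)
```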


Parsers

When a top-down parser tries to parse an ambiguous input with respect to an ambiguous context-free grammar (CFG), it may need an exponential number of steps (with respect to the length of the input) to try all alternatives of the CFG in order to produce all possible parse trees. This would eventually require exponential memory space. Memoization was explored as a parsing strategy in 1991 by Peter Norvig, who demonstrated that an algorithm similar to the use of dynamic programming and state-sets in Earley's algorithm (1970), and tables in the CYK algorithm of Cocke, Younger and Kasami, could be generated by introducing automatic memoization to a simple backtracking recursive descent parser to solve the problem of exponential time complexity. The basic idea in Norvig's approach is that when a parser is applied to the input, the result is stored in a memotable for subsequent reuse if the same parser is ever reapplied to the same input.

Richard Frost also used memoization to reduce the exponential time complexity of parser combinators, which can be viewed as a "purely functional top-down backtracking" parsing technique. He showed that basic memoized parser combinators can be used as building blocks to construct complex parsers as executable specifications of CFGs. Memoization was again explored in the context of parsing in 1995 by Johnson and Dörre. In 2002, it was examined in considerable depth by Ford in the form called packrat parsing.

In 2007, Frost, Hafiz and Callaghan described a top-down parsing algorithm that uses memoization to avoid redundant computations while accommodating any form of ambiguous CFG in polynomial time (Θ(n⁴) for left-recursive grammars and Θ(n³) for non-left-recursive grammars). Their top-down parsing algorithm also requires polynomial space for potentially exponential ambiguous parse trees by 'compact representation' and 'local ambiguities grouping'. Their compact representation is comparable with Tomita's compact representation of bottom-up parsing. Their use of memoization is not only limited to retrieving the previously computed results when a parser is applied to the same input position repeatedly (which is essential for the polynomial time requirement); it is specialized to perform the following additional tasks:

* The memoization process (which could be viewed as a 'wrapper' around any parser execution) accommodates an ever-growing direct left-recursive parse by imposing depth restrictions with respect to input length and current input position.
* The algorithm's memo-table 'lookup' procedure also determines the reusability of a saved result by comparing the saved result's computational context with the parser's current context. This contextual comparison is the key to accommodating indirect (or hidden) left-recursion.
* When performing a successful lookup in a memotable, instead of returning the complete result-set, the process only returns the references of the actual result and eventually speeds up the overall computation.
* While updating the memotable, the memoization process groups the (potentially exponential) ambiguous results and ensures the polynomial space requirement.

Frost, Hafiz and Callaghan also described the implementation of the algorithm in PADL'08 as a set of
higher-order functions (called parser combinators) in Haskell, which enables the construction of directly executable specifications of CFGs as language processors. The importance of their polynomial algorithm's power to accommodate 'any form of ambiguous CFG' with top-down parsing is vital with respect to the syntax and semantics analysis during natural language processing. The X-SAIGA site has more about the algorithm and implementation details.

While Norvig increased the ''power'' of the parser through memoization, the augmented parser was still as time complex as Earley's algorithm, which demonstrates a case of the use of memoization for something other than speed optimization. Johnson and Dörre demonstrate another such non-speed related application of memoization: the use of memoization to delay linguistic constraint resolution to a point in a parse where sufficient information has been accumulated to resolve those constraints. By contrast, in the speed optimization application of memoization, Ford demonstrated that memoization could guarantee that parsing expression grammars could parse in
linear time even those languages that resulted in worst-case backtracking behavior. Consider the following grammar:

 S → (A c) | (B d)
 A → X (a | b)
 B → X b
 X → x [X]

(Notation note: In the above example, the production S → (A c) | (B d) reads: "An ''S'' is either an ''A'' followed by a c or a ''B'' followed by a d." The production X → x [X] reads "An ''X'' is an x followed by an optional ''X''.")

This grammar generates one of the following three variations of string: ''xac'', ''xbc'', or ''xbd'' (where ''x'' here is understood to mean ''one or more'' x's). Next, consider how this grammar, used as a parse specification, might effect a top-down, left-right parse of the string ''xxxxxbd'':

:The rule ''A'' will recognize ''xxxxxb'' (by first descending into ''X'' to recognize one ''x'', and again descending into ''X'' until all the ''x'''s are consumed, and then recognizing the ''b''), and then return to ''S'', and fail to recognize a ''c''. The next clause of ''S'' will then descend into B, which in turn again descends into ''X'' and recognizes the ''x'''s by means of many recursive calls to ''X'', and then a ''b'', and returns to ''S'' and finally recognizes a ''d''.

The key concept here is inherent in the phrase again descends into ''X''. The process of looking forward, failing, backing up, and then retrying the next alternative is known in parsing as backtracking, and it is primarily backtracking that presents opportunities for memoization in parsing. Consider a function RuleAcceptsSomeInput(Rule, Position, Input), where the parameters are as follows:

* Rule is the name of the rule under consideration.
* Position is the offset currently under consideration in the input.
* Input is the input under consideration.

Let the return value of the function RuleAcceptsSomeInput be the length of the input accepted by Rule, or 0 if that rule does not accept any input at that offset in the string.
In a backtracking scenario with such memoization, the parsing process is as follows:

:When the rule ''A'' descends into ''X'' at offset 0, it memoizes the length 5 against that position and the rule ''X''. After having failed at ''d'', ''B'' then, rather than descending again into ''X'', queries the position 0 against rule ''X'' in the memoization engine, and is returned a length of 5, thus saving having to actually descend again into ''X'', and carries on ''as if'' it had descended into ''X'' as many times as before.

In the above example, one or ''many'' descents into ''X'' may occur, allowing for strings such as ''xxxxxxxxxxxxxxxxbd''. In fact, there may be ''any number'' of ''x'''s before the ''b''. While the call to S must recursively descend into X as many times as there are ''x'''s, ''B'' will never have to descend into X at all, since the return value of RuleAcceptsSomeInput(''X'', 0, ''xxxxxxxxxxxxxxxxbd'') will be 16 (in this particular case).

Those parsers that make use of syntactic predicates are also able to memoize the results of predicate parses, as well, thereby reducing such constructions as:

 S → (A)? A
 A → /* some rule */

to one descent into ''A''.

If a parser builds a parse tree during a parse, it must memoize not only the ''length'' of the input that matches at some offset against a given rule, but also must store the sub-tree that is generated by that rule at that offset in the input, since subsequent calls to the rule by the parser will not actually descend and rebuild that tree. For the same reason, memoized parser algorithms that generate calls to external code (sometimes called a semantic action routine) when a rule matches must use some scheme to ensure that such rules are invoked in a predictable order. Since, for any given backtracking or syntactic predicate capable parser, not every grammar will ''need'' backtracking or predicate checks, the overhead of storing each rule's parse results against every offset in the input (and storing the parse tree if the parsing process does that implicitly) may actually ''slow down'' a parser. This effect can be mitigated by explicit selection of those rules the parser will memoize.
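
A toy Python sketch of this kind of parse-result memoization (the rule functions and the (rule, position) cache key are illustrative choices for this sketch, not a fixed API): only X is routed through the memo table here, which is enough to spare B the repeated descent described above.

```python
# Memo table for one parse: maps (rule name, position) -> accepted length,
# with 0 meaning "no match". It is valid only for a single input string,
# so it must be cleared before parsing a different input.
memo = {}

def apply_rule(rule, pos, text):
    key = (rule.__name__, pos)
    if key not in memo:                 # a real descent happens at most
        memo[key] = rule(pos, text)     # once per (rule, position) pair
    return memo[key]

def X(pos, text):
    # X -> x [X] : one or more x's
    if pos < len(text) and text[pos] == 'x':
        return 1 + apply_rule(X, pos + 1, text)
    return 0

def A(pos, text):
    # A -> X (a | b)
    n = apply_rule(X, pos, text)
    return n + 1 if n and text[pos + n:pos + n + 1] in ('a', 'b') else 0

def B(pos, text):
    # B -> X b : reuses X's memoized length instead of descending again
    n = apply_rule(X, pos, text)
    return n + 1 if n and text[pos + n:pos + n + 1] == 'b' else 0

def S(pos, text):
    # S -> (A c) | (B d) : backtracks from the A alternative into B
    n = A(pos, text)
    if n and text[pos + n:pos + n + 1] == 'c':
        return n + 1
    n = B(pos, text)
    if n and text[pos + n:pos + n + 1] == 'd':
        return n + 1
    return 0
```

Parsing "xxxxxbd" accepts all 7 characters; when S backtracks from A into B, the entry for ('X', 0) is already in the memo table, so X is never re-entered.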


See also

* Approximate computing – category of techniques to improve efficiency
* Computational complexity theory – more information on algorithm complexity
* Director string – rapidly locating free variables in expressions
* Flyweight pattern – an object programming design pattern that also uses a kind of memoization
* Hashlife – a memoizing technique to speed up the computation of cellular automata
* Lazy evaluation – shares some concepts with memoization
* Materialized view – analogous caching in database queries
* Partial evaluation – a related technique for automatic program optimization


References


External links

; Examples of memoization in various programming languages
groovy.lang.Closure#memoize()
– Memoize is an Apache Groovy 1.8 language feature.
Memoize
– Memoize is a small library, written by Tim Bradshaw, for performing memoization in
Common Lisp
.
IncPy
– A custom Python interpreter that performs automatic memoization (with no required user annotations). *Dave Herman's
Macros for defining memoized procedures
in Racket.
Memoize.pm
– a
Perl module
that implements memoized functions.
Java memoization
– an example in Java using dynamic proxy classes to create a generic memoization pattern.
memoization.java
– A Java memoization library.
C++Memo
– A C++ memoization framework.
C-Memo
– Generic memoization library for C, implemented using pre-processor function wrapper macros.
Tek271 Memoizer
– Open source Java memoizer using annotations and pluggable cache implementations.
memoizable
– A
Ruby
gem that implements memoized methods.
Python memoization
– A Python example of memoization.
OCaml memoization
– Implemented as a Camlp4 syntax extension.
Memoization in Lua
– Two example implementations of a general memoize function in Lua.
Memoization in Mathematica
– Memoization and limited memoization in
Mathematica
.
Memoization in Javascript
– Extending the Function prototype in
JavaScript
( archived version of http://talideon.com/weblog/2005/07/javascript-memoization.cfm ).
Memoization in Javascript
– Examples of memoization in JavaScript using its own caching mechanism and using the YUI library.

X-SAIGA
– eXecutable SpecificAtIons of GrAmmars. Contains publications related to the top-down parsing algorithm that supports left-recursion and ambiguity in polynomial time and space.
Memoization in Scheme
– A
Scheme
example of memoization on a class webpage.
Memoization in Combinatory Logic
– A web service to reduce
Combinatory Logic
while memoizing every step in a database.
MbCache
– Cache method results in .NET.