In [[computer science]], '''run-time algorithm specialization''' is a methodology for creating efficient algorithms for costly computation tasks of certain kinds. The methodology originates in the field of [[automated theorem proving]] and, more specifically, in the [[Vampire theorem prover]] project.

The idea is inspired by the use of [[partial evaluation]] in optimizing program translation. Many core operations in theorem provers exhibit the following pattern.

Suppose that we need to execute some algorithm <math>\mathit{alg}(A,B)</math> in a situation where a value of <math>A</math> ''is fixed for potentially many different values of'' <math>B</math>. In order to do this efficiently, we can try to find a specialization of <math>\mathit{alg}</math> for every fixed <math>A</math>, i.e., an algorithm <math>\mathit{alg}_A</math> such that executing <math>\mathit{alg}_A(B)</math> is equivalent to executing <math>\mathit{alg}(A,B)</math>.

The specialized algorithm may be more efficient than the generic one, since it can ''exploit some particular properties'' of the fixed value <math>A</math>. Typically, <math>\mathit{alg}_A(B)</math> can avoid some operations that <math>\mathit{alg}(A,B)</math> would have to perform if they are known to be redundant for this particular parameter <math>A</math>. In particular, we can often identify some tests that are always true or false for <math>A</math>, unroll loops and recursion, etc.
== Difference from partial evaluation ==

The key difference between run-time specialization and partial evaluation is that the values of <math>A</math> on which <math>\mathit{alg}</math> is specialized are not known statically, so the ''specialization takes place at run-time''.

There is also an important technical difference. Partial evaluation is applied to algorithms explicitly represented as code in some programming language. At run-time, we do not need any concrete representation of <math>\mathit{alg}</math>: we only have to ''imagine'' <math>\mathit{alg}</math> ''when we program'' the specialization procedure. All we need is a concrete representation of the specialized version <math>\mathit{alg}_A</math>. This also means that we cannot use any universal methods for specializing algorithms, as is usually the case with partial evaluation. Instead, we have to program a specialization procedure for every particular algorithm <math>\mathit{alg}</math>. An important advantage of doing so is that we can use some powerful ''ad hoc'' tricks exploiting peculiarities of <math>\mathit{alg}</math> and the representation of <math>A</math> and <math>B</math>, which are beyond the reach of any universal specialization methods.
==Specialization with compilation==

The specialized algorithm has to be represented in a form that can be interpreted. In many situations, usually when <math>\mathit{alg}_A(B)</math> is to be computed on many values <math>B</math> in a row, we can write <math>\mathit{alg}_A</math> as code for a special [[abstract machine]], and we often say that <math>A</math> is ''compiled''. The code itself can then be further optimized by answer-preserving transformations that rely only on the semantics of the instructions of the abstract machine.

Instructions of the abstract machine can usually be represented as records. One field of such a record stores an integer tag that identifies the instruction type; other fields may be used for storing additional parameters of the instruction, for example a pointer to another instruction representing a label, if the semantics of the instruction requires a jump. All instructions of a code can be stored in an array, list, or tree.
Interpretation is done by fetching instructions in some order, identifying their type, and executing the actions associated with that type. In [[C (programming language)|C]] or C++ we can use a '''switch''' statement to associate actions with the different instruction tags. Modern compilers usually compile a '''switch''' statement with integer labels from a narrow range rather efficiently by storing the address of the statement corresponding to a value <math>i</math> in the <math>i</math>-th cell of a special array. One can exploit this by taking the values for instruction tags from a small interval of integers.
==Data-and-algorithm specialization==
There are situations when many instances of <math>A</math> are intended for long-term storage and the calls of <math>\mathit{alg}(A,B)</math> occur with different <math>B</math> in an unpredictable order. For example, we may have to check <math>\mathit{alg}(A_1,B_1)</math> first, then <math>\mathit{alg}(A_2,B_2)</math>, then <math>\mathit{alg}(A_1,B_3)</math>, and so on. In such circumstances, full-scale specialization with compilation may not be suitable due to excessive memory usage. However, we can sometimes find a compact specialized representation <math>A^{\prime}</math> for every <math>A</math> that can be stored with, or instead of, <math>A</math>. We also define a variant <math>\mathit{alg}^{\prime}</math> that works on this representation, and any call to <math>\mathit{alg}(A,B)</math> is replaced by <math>\mathit{alg}^{\prime}(A^{\prime},B)</math>, intended to do the same job faster.
== See also ==

* [[Psyco]], a specializing run-time compiler for [[Python (programming language)|Python]]
* [[Multi-stage programming]]
==References==

* [http://www.springerlink.com/content/r127l5t26vu1m360/ A. Voronkov, "The Anatomy of Vampire: Implementing Bottom-Up Procedures with Code Trees", ''Journal of Automated Reasoning'', 15(2), 1995] (''original idea'')
==Further reading==

* A. Riazanov and A. Voronkov, "Efficient Checking of Term Ordering Constraints", ''Proc. IJCAR 2004'', Lecture Notes in Artificial Intelligence 3097, 2004 (''compact but self-contained illustration of the method'')
* A. Riazanov and A. Voronkov, [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.8542 "Efficient Instance Retrieval with Standard and Relational Path Indexing"], ''Information and Computation'', 199(1-2), 2005 (''contains another illustration of the method'')
* A. Riazanov, [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.2624 "Implementing an Efficient Theorem Prover"], PhD thesis, The University of Manchester, 2003 (''contains the most comprehensive description of the method and many examples'')
[[Category:Algorithms]]
[[Category:Software optimization]]