# Talk:Strassen algorithm

## The Actual code

An example of this actually being implemented would be a nice addition to the article. I searched the web and didn't find a single example of this algorithm actually being implemented. Sure you can talk about the algorithm all day, but it means nothing if no one can put it into code. —Preceding unsigned comment added by 69.239.153.215 (talk) 13:22, 2 January 2009 (UTC)
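For what it's worth, here is a minimal sketch of the algorithm in Python with NumPy, using the seven products M1..M7 as given in the article. It assumes square matrices whose size is a power of two, and it is an illustration rather than an optimized implementation:

```python
import numpy as np

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) by Strassen's algorithm."""
    n = A.shape[0]
    if n == 1:
        return A * B  # base case: 1x1 "matrices" multiply elementwise
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # The seven recursive products (Strassen's formulas)
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Combine into the four quadrants of the result
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

For non-power-of-two sizes one would pad with zeros first; a production version would also switch to ordinary multiplication below some cutoff size.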

link to reference 1 is broken 75.142.209.115 (talk) 19:46, 29 October 2008 (UTC)
link to reference 2 is broken as well Nov 12, 2008

Thanks for your comments. I fixed the references. -- Jitse Niesen (talk) 13:34, 12 November 2008 (UTC)

A lot of older textbooks say that "additions don't matter, but multiplications do", but that is usually no longer relevant. Multiplying numbers is slower if you are working with multi-word representations of numbers (sometimes called 'big numbers'), but if you are just multiplying floats or doubles then multiplications and additions take about the same amount of time on most processors today (as far as I know). I'm going to change the article to say "if we only consider multiplications", without the statement about multiplications being inherently slower.... :)

Yes, it depends on the sizes of the numbers being multiplied. Perhaps a clarification of this point should be added to the article? I'm not familiar with the size of the matrix components used in most matrix algorithms so I will defer this point to the experts. - Gauge 03:19, 21 Jan 2005 (UTC)
This is a mathematical thing, not a technical article. It states theoretical bounds.--213.141.159.52 14:30, 7 March 2006 (UTC)
The larger importance of multiplications becomes apparent when one starts to use the algorithm recursively, since the additions and multiplications at all but the bottom levels are then matrix additions and multiplications. A matrix multiplication is nearly $O(n^{3})$, whereas a matrix addition is $O(n^{2})$. Hence the number of additions in a Strassen-type matrix multiplication algorithm affects only the constant factor, but the number of multiplications affects the exponent. See section 2.4 of {{#invoke:citation/CS1|citation |CitationClass=book}}. 81.231.33.217 (talk) 13:29, 30 June 2008 (UTC)
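The point above can be checked numerically from the usual Strassen recurrences (7 half-size multiplications and 18 half-size matrix additions per step). A small sketch: the multiplication count alone fixes the exponent $\log_2 7$, while the additions only change the constant in front.

```python
from math import log2

def mults(n):
    """Scalar multiplications: M(n) = 7 M(n/2), M(1) = 1."""
    return 1 if n == 1 else 7 * mults(n // 2)

def adds(n):
    """Scalar additions: A(n) = 7 A(n/2) + 18 (n/2)^2, A(1) = 0."""
    return 0 if n == 1 else 7 * adds(n // 2) + 18 * (n // 2) ** 2

for n in (64, 256, 1024):
    # Both counts scale as n^log2(7); the additions approach a constant
    # multiple (6) of that leading term rather than changing the exponent.
    print(n, mults(n) / n ** log2(7), adds(n) / n ** log2(7))
```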

Yes, the reduction in complexity is a purely mathematical result independent of any computer implementation. I've fixed the article. 128.232.237.113 (talk) 22:41, 5 March 2009 (UTC)

## Notation for field

I am unfamiliar with the use of blackboard bold to denote an arbitrary field, as I understand it to mean one of the canonical sets of numbers (e.g., the complex numbers, the integers, etc.). Shouldn't the article use a boldface or plain K instead of $\mathbb {K}$ ? Pmdboi 18:05, 19 February 2006 (UTC)

Some texts use $\mathbb {K}$ or a boldface K to emphasize the point that K is a field, but I think most texts use a simple K instead. I changed the article accordingly. And I changed K into the more common F. MathMartin 19:09, 19 February 2006 (UTC)

## Error in formula?

Looking at MathWorld, I see a different formula for M1: http://mathworld.wolfram.com/StrassenFormulas.html Doing a simple case by hand, the Wikipedia ones seem wrong. 141.218.136.204 20:07, 22 February 2007 (UTC)

I'm sorry, but I don't see where the formula at MathWorld differs from the formula here. Could you please be more specific? -- Jitse Niesen (talk) 00:49, 23 February 2007 (UTC)

## Cross-over point

I reverted the edit of 31 January 2007, which added this fragment:

A popular misconception is that Strassen's algorithm outperforms the classic multiplication approach only for large matrix sizes. This is not true for modern architectures, due to the Strassen algorithm's superior cache behavior. For example, on a 2 GHz Core Duo processor, Strassen's algorithm is three times faster at multiplying 128×128 matrices. Its time is pretty much the same as that of block multiplication for 256×256 and 512×512 matrices. For 1024×1024 and larger matrices (when the entire problem set no longer fits into the processor's L2 cache), Strassen's algorithm is 3 times faster than block multiplication and 12 times as fast as the classical three-nested-loop implementation.

It might be true, but it does need a reference, especially since these kinds of measurements are typically very dependent on details, and it is contradicted by other references such as Golub and Van Loan's Matrix Computations. -- Jitse Niesen (talk) 03:40, 23 February 2007 (UTC)

Agreed. A crossover point certainly exists, but is hardware dependent. I'll put something fluffier in. Deco 05:18, 23 February 2007 (UTC)

Winograd's variant, which also uses 7 multiplications but fewer additions than Strassen's algorithm, was published in 1971 ({{#invoke:Citation/CS1|citation |CitationClass=journal}}), not 1980. It has the same asymptotic complexity as Strassen's: $O(n^{\log _{2}7})$.

The "Ultra-fast matrix multiplication" reference in the article describes another matrix multiplication algorithm attributed to Winograd, which has asymptotic complexity $O(n^{3})$. It is possible that this was published in 1980.

That time period also saw many other developments which are worth mentioning:

• The first Strassen-like algorithm (in the sense of a computational scheme for multiplication of matrices of a fixed size that uses clever tricks to reduce the number of element multiplications needed) to achieve better asymptotic complexity than Strassen's original algorithm ({{#invoke:Citation/CS1|citation |CitationClass=journal}}). It wasn't too practical though, as it still used 47216 multiplications to multiply two 48×48 matrices, thus only reducing the exponent by about 0.02.

• The Strassen-Schönhage algorithm (which is not the same as the Schönhage-Strassen algorithm for integer multiplication), which reduced the exponent to below 2.5. {{#invoke:Citation/CS1|citation |CitationClass=journal}} (I don't recall the details, and haven't got {{#invoke:citation/CS1|citation |CitationClass=book}} at hand so that I can check; possibly Strassen's contribution was to push the exponent below 2.5.)

81.231.33.217 (talk) 14:08, 30 June 2008 (UTC)

## asymptotically faster

• It is asymptotically faster than the standard matrix multiplication algorithm

What does asymptotically faster mean? Thanks, --Abdull (talk) 10:55, 25 July 2008 (UTC)

"For large enough input", in this case "for large enough matrix sides". 81.231.34.61 (talk) 10:49, 11 September 2008 (UTC)
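Concretely, "asymptotically faster" means that the ratio of operation counts grows without bound as the matrices get larger. A quick sketch comparing the classical $n^3$ count against Strassen's $n^{\log_2 7}$:

```python
from math import log2

for n in (100, 10_000, 1_000_000):
    # The ratio is n^(3 - log2 7) ~ n^0.19, so it keeps growing with n,
    # even though it grows slowly.
    print(n, n ** 3 / n ** log2(7))
```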

## Numerical Analysis

The section entitled "Numerical Analysis" should be "Performance Analysis". Numerical Analysis could include an analysis of any of: numerical stability, error accumulation, bit accuracy (vs. precision), or the correctness of the algorithm (though that may be a stretch). It does mention the first (numerical stability) but there is no actual analysis, just a mention and a general link. I also think that there is some problem with the performance analysis, but I will go into that separately. --RBarryYoung (talk) 13:20, 5 January 2010 (UTC)

## cleanup

Statements about relative speed are vague.

See Modern Computer Algebra (MCA) by Joachim von zur Gathen and Jurgen Gerhard for EXTENSIVE discussion of Strassen algorithm.

Put in citations to MCA and to relevant work it cites. Michael P. Barnett (talk) 16:54, 3 May 2011 (UTC)

## Errors in template

The template reads: "This article may require cleanup to meet Wikipedia's quality standards. The specific problem is: the article contains vague statements and lacks references to work it mentions and to the considerable relevant work that it ignore." This should be "ignores", and the template is poorly written. Doug (talk) 04:33, 16 April 2012 (UTC)

## Code Optimization

I was looking at the code and see that this version is a memory hog. Could it be optimized further by using location references instead of creating a new submatrix for each operation? — Preceding unsigned comment added by 71.227.161.5 (talk) 09:14, 6 May 2012 (UTC)
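One way to realize that idea in NumPy is to lean on slicing, since a basic slice is a view into the original storage rather than a copy; only the explicit sums and differences of quadrants then allocate temporaries. A small sketch of the distinction (illustrative only; in languages without array views one would pass row/column offsets into the original arrays instead):

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)

# Extracting a quadrant by slicing copies no data: it is a view.
A11 = A[:2, :2]
assert np.shares_memory(A, A11)

# Forming a quadrant sum does allocate a fresh temporary array.
S = A[:2, :2] + A[2:, 2:]
assert not np.shares_memory(A, S)
```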

## Blunder

$O([7+o(1)]^{n})=O(N^{\log _{2}7+o(1)})\approx O(N^{2.807})$ 2.807 is a blunder; correct would be 2.8074 or 2.81. Very simple. Russian МетаСкептик12 (talk) 18:24, 1 July 2012 (UTC)