# Fast Syndrome Based Hash

In cryptography, the fast syndrome-based hash functions (FSB) are a family of cryptographic hash functions introduced in 2003 by Daniel Augot, Matthieu Finiasz, and Nicolas Sendrier. Unlike most other cryptographic hash functions in use today, FSB can to a certain extent be proven secure: breaking FSB is provably at least as difficult as solving a certain NP-complete problem known as regular syndrome decoding. Although it is not known whether NP-complete problems are solvable in polynomial time, it is generally assumed that they are not.

Several versions of FSB have been proposed, the latest of which was submitted to the SHA-3 cryptography competition but was rejected in the first round. Though all versions of FSB claim provable security, some preliminary versions were eventually broken. The design of the latest version of FSB has, however, taken these attacks into account, and it remains secure against all currently known attacks.

As usual, provable security comes at a cost. FSB is slower than traditional hash functions and uses quite a lot of memory, which makes it impractical in memory-constrained environments. Furthermore, the compression function used in FSB needs a large output size to guarantee security. This last problem has been solved in recent versions by simply compressing the output with the hash function Whirlpool. However, though the authors argue that adding this last compression does not reduce security, it makes a formal security proof impossible.

## Description of the hash function

For security purposes, as well as to achieve a faster hash speed, we want to use only “regular words of weight $w$” as input for our matrix.
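As an illustration, whether a word is regular can be checked in a few lines of Python; the helper name and the toy parameters $n=12$, $w=3$ below are hypothetical choices for this sketch, not part of any FSB specification.

```python
# A regular word of length n and weight w: the n bit positions are split
# into w consecutive blocks of n/w bits, with exactly one 1 per block.
# Hypothetical helper; n=12, w=3 chosen purely for illustration.

def is_regular(word, n, w):
    """Return True if exactly one bit is set in each of the w blocks of size n//w."""
    block = n // w
    return all(sum(word[i * block:(i + 1) * block]) == 1 for i in range(w))

regular = [0, 1, 0, 0,  1, 0, 0, 0,  0, 0, 0, 1]   # one 1 per 4-bit block
irregular = [1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 1]  # two 1s in the first block
print(is_regular(regular, 12, 3))    # True
print(is_regular(irregular, 12, 3))  # False
```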

### The Compression Function

This version is usually called syndrome-based compression. It is very slow, and in practice it is done in a different and faster way, resulting in fast syndrome-based compression. We split $H$ into sub-matrices $H_{i}$ of size $r\times n/w$ and we fix a bijection from the bit strings of length $w\log(n/w)$ to the set of sequences of $w$ numbers between 1 and $n/w$. This is equivalent to a bijection to the set of regular words of length $n$ and weight $w$, since we can see such a word as a sequence of numbers between 1 and $n/w$. The compression function looks as follows:
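The idea can be sketched in code: each chunk of $\log_2(n/w)$ input bits selects one column in the corresponding sub-matrix $H_i$, and the output is the XOR of the $w$ selected columns. The following is a minimal sketch, not the reference implementation; the function name and the toy parameters ($n=12$, $w=3$, $r=4$) are hypothetical.

```python
import math

def fsb_compress(H_blocks, msg_bits, n, w, r):
    """Sketch of syndrome-based compression.

    H_blocks: list of w sub-matrices, each stored as a list of n//w
              columns, where each column is a list of r bits.
    msg_bits: list of w * log2(n/w) input bits.
    Returns the r-bit syndrome: the XOR of one column per sub-block,
    where the i-th chunk of log2(n/w) bits selects the column in H_blocks[i].
    """
    b = int(math.log2(n // w))                   # bits needed to index a column
    out = [0] * r
    for i in range(w):
        chunk = msg_bits[i * b:(i + 1) * b]
        idx = int(''.join(map(str, chunk)), 2)   # bit chunk -> column index
        col = H_blocks[i][idx]
        out = [a ^ c for a, c in zip(out, col)]  # XOR the chosen column in
    return out

# Toy usage with an identity-like sub-block (purely illustrative):
I = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
H_blocks = [I, I, I]
print(fsb_compress(H_blocks, [0, 1, 1, 0, 1, 1], n=12, w=3, r=4))  # [0, 1, 1, 1]
```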

We can now use the Merkle-Damgård construction to generalize the compression function to accept inputs of arbitrary lengths.
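A minimal sketch of the Merkle-Damgård iteration, assuming a compression function mapping $s$ bits to $r$ bits; the function names, toy parameters, and the naive zero-padding below are hypothetical simplifications (real FSB uses proper length-strengthening padding).

```python
def merkle_damgard(compress, message_bits, s, r):
    """Iterate a compression function phi: {0,1}^s -> {0,1}^r over a long message.

    Each round feeds the previous r-bit chaining value plus s - r fresh
    message bits into the compression function.
    """
    chunk = s - r                       # fresh message bits consumed per round
    # Naive zero-padding to a multiple of the chunk size (illustration only;
    # a real construction appends the message length as well).
    pad = (-len(message_bits)) % chunk
    bits = message_bits + [0] * pad
    state = [0] * r                     # initial chaining value (IV)
    for i in range(0, len(bits), chunk):
        state = compress(state + bits[i:i + chunk])
    return state

# Toy usage with a (cryptographically worthless) XOR-folding compression:
def toy_compress(x):                    # 6 bits -> 2 bits
    return [x[0] ^ x[2] ^ x[4], x[1] ^ x[3] ^ x[5]]

print(merkle_damgard(toy_compress, [1, 0, 1, 1, 0, 1, 0, 0], s=6, r=2))  # [0, 0]
```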


## Security Proof of FSB

The Merkle-Damgård construction is proven to base its security only on the security of the compression function it uses. So we only need to show that the compression function $\phi$ is secure.

A cryptographic hash function needs to be secure in three different aspects:

1. Pre-image resistance: given a hash value $h$, it should be hard to find a message $m$ such that $\text{Hash}(m)=h$
2. Second pre-image resistance: given a message $m_1$, it should be hard to find a message $m_2 \neq m_1$ such that $\text{Hash}(m_1) = \text{Hash}(m_2)$
3. Collision resistance: it should be hard to find two different messages $m_1$ and $m_2$ such that $\text{Hash}(m_1)=\text{Hash}(m_2)$

Note that if an adversary can find a second pre-image, then they can certainly find a collision. This means that if we can prove our system to be collision resistant, it is certainly also second-pre-image resistant.

Usually in cryptography, hard means something like “almost certainly beyond the reach of any adversary who must be prevented from breaking the system”. We will, however, need a more precise meaning of the word hard. We will take hard to mean “the runtime of any algorithm that finds a collision or pre-image depends exponentially on the size of the hash value”. This means that by relatively small additions to the hash size, we can quickly reach high security.

### Pre-image resistance and Regular Syndrome Decoding (RSD)

As said before, the security of FSB depends on a problem called regular syndrome decoding (RSD). Syndrome decoding is originally a problem from coding theory, but its NP-completeness makes it a nice application for cryptography. Regular syndrome decoding is a special case of syndrome decoding and is defined as follows:

This problem has been proven to be NP-Complete by a reduction from 3-dimensional matching. Again, though it is not known whether there exist polynomial time algorithms for solving NP-Complete problems, none are known and finding one would be a huge discovery.

It is easy to see that finding a pre-image of a given hash $S$ is exactly equivalent to this problem, so the problem of finding pre-images in FSB must also be NP-Complete.

We still need to prove collision resistance. For this we need another NP-Complete variation of RSD: 2-Regular Null Syndrome Decoding.

### Collision resistance and 2-Regular Null Syndrome Decoding (2-NRSD)

2-NRSD has also been proven to be NP-Complete by a reduction from 3-dimensional matching.

Suppose that we have found a collision, so we have $\text{Hash}(m_1) = \text{Hash}(m_2)$ with $m_{1}\neq m_{2}$. Then we can find two regular words $w_{1}$ and $w_{2}$ such that $Hw_{1}=Hw_{2}$. We then have $H(w_{1}+w_{2})=Hw_{1}+Hw_{2}=2Hw_{1}=0$, since all arithmetic is modulo 2. Thus $(w_{1}+w_{2})$ is a sum of two different regular words and so must be a 2-regular word whose hash is zero, so we have solved an instance of 2-NRSD. We conclude that finding collisions in FSB is at least as difficult as solving 2-NRSD, and so must be NP-complete.
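The identity above is easy to verify numerically. In the sketch below, the $4\times 8$ matrix $H$ and the two regular words are hypothetical illustrations (here $w=2$, with two blocks of four columns each); they are not taken from any FSB parameter set.

```python
# Toy check of the reduction over GF(2): if H*w1 = H*w2 with w1 != w2 regular,
# then H*(w1 + w2) = 0, and w1 + w2 is a 2-regular word.
# The matrix and words below are hypothetical illustrations.
H = [[1, 0, 1, 1, 0, 1, 0, 0],
     [0, 1, 0, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1, 1, 0],
     [0, 0, 1, 1, 1, 0, 0, 1]]

def syndrome(H, word):
    """Matrix-vector product over GF(2)."""
    return [sum(h * x for h, x in zip(row, word)) % 2 for row in H]

w1 = [1, 0, 0, 0,  1, 0, 0, 0]   # columns 0 and 4: one per block, regular
w2 = [0, 0, 0, 1,  0, 0, 1, 0]   # columns 3 and 6: same syndrome, different word

print(syndrome(H, w1) == syndrome(H, w2))   # True: a collision
w_sum = [a ^ b for a, b in zip(w1, w2)]     # w1 + w2 over GF(2): 2-regular
print(syndrome(H, w_sum))                   # [0, 0, 0, 0]: solves 2-NRSD
```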

The latest versions of FSB use the hash function Whirlpool to further compress the hash output. Though this cannot be proven, the authors argue that this last compression does not reduce security. Note that even if one were able to find collisions in Whirlpool, one would still need to find pre-images for those collisions in the original FSB compression function in order to find a collision in FSB.

### Examples

When solving RSD, we are in the opposite situation from hashing. Using the same values as in the previous example, we are given $H$ separated into $w=3$ sub-blocks and a string $r=1111$. We are asked to find exactly one column in each sub-block such that the chosen columns sum to $r$. The expected answer is thus $s_{1}=1$, $s_{2}=0$, $s_{3}=3$. This is known to be hard to compute for large matrices.

In 2-NRSD we want to find, in each sub-block, not one column but either two or zero, such that they sum to 0000 (and not to $r$). In the example, we might use columns (counting from 0) 2 and 3 from $H_{1}$, no column from $H_{2}$, and columns 0 and 2 from $H_{3}$. More solutions are possible; for example, we might use no columns from $H_{3}$.
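Since the example matrix referred to above is not reproduced here, the following sketch uses a hypothetical toy RSD instance of its own, with $r=4$ rows, $w=3$ sub-blocks, and $n/w=4$ columns per sub-block, and finds all solutions by brute force; for realistic parameters this search space is astronomically large.

```python
from itertools import product

# Hypothetical toy RSD instance (not the article's example matrix):
# each sub-block is stored as a list of four 4-bit columns.
H1 = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 1, 1, 0]]
H2 = [[0, 1, 1, 0], [1, 0, 1, 1], [0, 0, 0, 1], [1, 1, 1, 1]]
H3 = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 0, 1]]
S  = [1, 1, 1, 1]                 # target syndrome

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Brute force: try every choice of one column per sub-block (4^3 = 64 cases).
solutions = []
for i, j, k in product(range(4), repeat=3):
    if xor(xor(H1[i], H2[j]), H3[k]) == S:
        solutions.append((i, j, k))
print(solutions)
```

As in the article's example, this toy instance has several solutions; RSD only asks for one of them, and the difficulty grows exponentially with realistic matrix sizes.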

### Linear cryptanalysis

The provable security of FSB means that finding collisions is NP-complete. But the proof is a reduction to a problem with asymptotically hard worst-case complexity. This offers only limited security assurance, as there can still be an algorithm that easily solves the problem for a subset of the problem space. For example, there exists a linearization method that can be used to produce collisions in a matter of seconds on a desktop PC for early variants of FSB with claimed $2^{128}$ security. It has been shown that the hash function offers minimal pre-image or collision resistance when the message space is chosen in a specific way.

### Practical security results

The following table shows the complexity of the best known attacks against FSB.

| Output size (bits) | Complexity of collision search | Complexity of inversion |
|---|---|---|
| 160 | $2^{100.3}$ | $2^{163.6}$ |
| 224 | $2^{135.3}$ | $2^{229.0}$ |
| 256 | $2^{190.0}$ | $2^{261.0}$ |
| 384 | $2^{215.5}$ | $2^{391.5}$ |
| 512 | $2^{285.6}$ | $2^{527.4}$ |

## Genesis

FSB is a sped-up version of the syndrome-based hash function (SB). In the case of SB, the compression function is very similar to the encoding function of Niederreiter's version of the McEliece cryptosystem. Instead of using the parity check matrix of a permuted Goppa code, SB uses a random matrix $H$. From the security point of view, this can only strengthen the system.

## Variants

In 2007, IFSB was published. In 2010, S-FSB was published, which is 30% faster than the original.

In 2011, D.J. Bernstein and Tanja Lange published RFSB, which is 10x faster than the original FSB-256. RFSB was shown to run very fast on the Spartan 6 FPGA, reaching throughputs of around 5 Gbit/s.