
Chapter 1

Summary and Conclusion

So, in this chapter we introduced and posed the problem. The problem
of solving a sparse linear system over GF(2) is inspired by the problem
of breaking cryptographic systems. For a symmetric-key stream cipher,
the security depends on the period of the key stream: the cipher's key,
and the internal state of the cipher, must not be recoverable from the
key stream. In particular, we are interested in breaking public-key
cryptographic systems. A public-key cryptosystem uses two keys, one
private and the other public. For Diffie-Hellman key exchange, the
security of the system depends on the hardness of the Discrete
Logarithm Problem. In RSA, the computation of the keys involves
multiplying large primes, so the security of the RSA public-key
cryptosystem depends on the difficulty of recovering those prime
factors. If they can be found, then the RSA public-key cryptosystem
can be broken.

Chapter 2
Related Work and Background
2.1 Introduction
In the previous chapter we stated that the security of a public-key
cryptographic system depends on solving a hard mathematical problem.
For RSA, it is finding the prime factors of a large number, about 300
digits (1024 bits) long; for Diffie-Hellman key exchange, it is solving
the Discrete Logarithm Problem (DLP). Now, to solve these problems
we ultimately have to solve a large system of linear equations. Its
matrix is sparse, with entries in GF(2) (the Galois Field of two
elements). This sparse linear system over GF(2) arises out of the
sieving module of the Number Field Sieve (NFS) [6] or the Function
Field Sieve (FFS) [7].

The simplest method that comes to mind is Gaussian-Elimination. But
this is too costly in terms of time and space for a large sparse system
of linear equations, so another method is needed: an approach that can
exploit the sparsity of the system, as this greatly reduces both the
time and the space required.

In this chapter we look at some of the methods that can be employed
to solve such a linear system of equations. We also look at the
alternative method that is best suited to our needs.

2.2 Gaussian-Elimination
Gaussian-Elimination is the standard method for solving a system of
linear equations. It can also be used to find the rank of a matrix and
the inverse of an invertible matrix. The method is composed of two
steps. The first step reduces the matrix of the system to a triangular
(echelon) form. The second step uses back-substitution to solve the
resulting triangular system. The time complexity of this method is
O(n^3), where n is the size of the system of linear equations.
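The two steps can be sketched on a made-up 3 x 3 system over GF(2) (the helper name solve_gf2 is ours, purely illustrative): each matrix row is packed into an integer bitmask, so a row operation is a single XOR.

```python
# Illustrative only: Gaussian-Elimination over GF(2) on bitmask rows.
# Assumes the matrix is nonsingular, so a pivot always exists.
def solve_gf2(rows, b, n):
    rows, b = rows[:], b[:]
    # Step 1: forward elimination to upper-triangular form.
    for col in range(n):
        piv = next(i for i in range(col, n) if (rows[i] >> col) & 1)
        rows[col], rows[piv] = rows[piv], rows[col]
        b[col], b[piv] = b[piv], b[col]
        for i in range(col + 1, n):
            if (rows[i] >> col) & 1:
                rows[i] ^= rows[col]   # adding rows over GF(2) is XOR
                b[i] ^= b[col]
    # Step 2: back-substitution on the triangular system.
    x = [0] * n
    for i in reversed(range(n)):
        s = b[i]
        for j in range(i + 1, n):
            s ^= ((rows[i] >> j) & 1) & x[j]
        x[i] = s
    return x

# Rows encode [[1,1,0],[0,1,1],[0,0,1]]; bit j of a mask is column j.
print(solve_gf2([0b011, 0b110, 0b100], [1, 0, 1], 3))   # -> [0, 1, 1]
```

Every entry the elimination step touches is a bit that may flip from zero to one; on a large sparse matrix this fill-in is exactly what destroys sparsity.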
In the first step of Gaussian-Elimination we triangulate the matrix of
the linear system. As a result, a large number of matrix elements are
modified: in our case, the sparsity of the matrix is destroyed by this
fill-in. The space requirement also grows, since we then have to store
all of the elements of the matrix. Hence, the Gaussian-Elimination
method is not suitable for our needs.

Fig-5 Gaussian-Elimination

There is another version of the Gaussian-Elimination method, known as
Structured Gaussian-Elimination. This was proposed by Odlyzko [14] and
further improved by Odlyzko and LaMacchia [15]. The method has
proved quite effective even in the context of sparse matrices: it was
designed to reduce the size of the system while preserving its sparsity.
But the limit up to which Structured Gaussian-Elimination should be
carried out, before handing the reduced system to a sparse solver, is
not clear and is determined empirically.
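One core reduction in Structured Gaussian-Elimination can be sketched as repeatedly eliminating rows of weight one: such a pivot determines its variable outright and creates no fill-in. The data and the helper name prune below are illustrative, and real implementations do considerably more (for example, handling dense columns separately).

```python
# Illustrative sketch: rows are sets of column indices holding a 1 (GF(2)).
# A weight-1 row fixes its variable, so that column can be removed from
# every row without any fill-in; repeating this shrinks the system.
def prune(rows):
    rows = [set(r) for r in rows]
    changed = True
    while changed:
        changed = False
        for r in rows:
            if len(r) == 1:
                col = next(iter(r))
                for other in rows:
                    other.discard(col)   # substitute the known variable
                changed = True
    return [r for r in rows if r]        # drop emptied rows

print(prune([{0, 1, 2}, {0, 1, 3}, {3}]))   # -> [{0, 1, 2}, {0, 1}]
```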

2.3 Iterative Method

A handful of iterative methods have been developed over the last few
decades to solve sparse systems of linear equations. Two of them are
the Lanczos and Conjugate Gradient methods. They start from a
guessed solution and then compute successive approximations that
converge to a solution of the system.

These iterative methods are inherently sequential, mainly because no
iteration of these algorithms can start before the previous iteration
completes. Therefore, the only option left is to parallelize within each
iteration of the algorithm, which implies parallelization at a relatively
fine grain.

As stated earlier in Chapter 1, our system of linear equations has
entries in GF(2). Systems over GF(2) have clear advantages over those
in GF(p): a value can be stored in a single bit in the former case but
requires multi-precision storage in the latter, and the arithmetic in
GF(p) must be multi-precision arithmetic, which is expensive, whereas
arithmetic in GF(2) reduces to bitwise operations, which are far more
efficient. These differences mean that the GF(p) case must be
implemented separately. Yet the iterative methods as stated are still
inefficient for a GF(2) system: they manipulate single bits, whereas
most computers can handle chunks of 32/64 bits simultaneously.
Further, if the bit vectors are compressed for efficient storage, the
single bits need to be extracted by decompression before use. These
algorithms should therefore be modified to work with blocks of vectors
instead of single vectors.
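The wordwise advantage can be illustrated with a toy example (Python ints standing in for machine words; the sizes and helper names are made up): packing 64 GF(2) entries per word turns 64 field additions into a single XOR.

```python
W = 64  # entries (bits) per machine word

def pack(bits):
    # Pack a list of 0/1 entries into W-bit words: entry i -> bit i % W.
    words = [0] * ((len(bits) + W - 1) // W)
    for i, b in enumerate(bits):
        words[i // W] |= b << (i % W)
    return words

def add_gf2(a, b):
    # Vector addition over GF(2) is elementwise XOR, one word at a time.
    return [x ^ y for x, y in zip(a, b)]

u = [1, 0, 1, 1] * 32   # two 128-entry GF(2) vectors
v = [0, 1, 1, 0] * 32
s = add_gf2(pack(u), pack(v))   # 128 GF(2) additions done in 2 XORs

# Unpacking recovers the per-entry sums (u[i] + v[i]) mod 2.
assert all((s[i // W] >> (i % W)) & 1 == (u[i] ^ v[i]) for i in range(128))
```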

So the focus shifted from efficient algorithms to efficient
implementations. Coppersmith then introduced block versions of
Lanczos [16] and Wiedemann [17]. He modified the algorithms so that
they can be run on 32 vectors at the same time. In the following
section we look at Wiedemann's approach to solving the system of
linear equations.

2.3.1 Block Wiedemann

Wiedemann's algorithm is used to solve a system of linear equations
over a finite field GF(p). It requires linear space and quadratic time,
applying the black box for the coefficient matrix no more than 3n
times [1]. On input of an n x n coefficient matrix A, given by a
so-called black box (a program that can multiply the matrix by a
vector), and a vector b, the algorithm finds, with high probability when
the system is solvable, a random solution vector x with Ax = b.
Wiedemann's algorithm is based on the fact that when a square matrix
is repeatedly applied to a vector, the resulting vector sequence is
linearly recursive.
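This linear recursion can be made concrete on a toy GF(2) system (the matrix and vectors below are made up, and exhaustive search stands in for the Berlekamp-Massey step a real implementation uses): once a polynomial f with f(A)b = 0 and nonzero constant term is found, it rearranges directly into a solution of Ax = b.

```python
from itertools import product

# Toy 3 x 3 system over GF(2); all arithmetic is mod 2.
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
b = [1, 1, 0]
n = len(A)

def matvec(M, v):
    # Matrix-vector product over GF(2).
    return [sum(M[i][j] & v[j] for j in range(n)) % 2 for i in range(n)]

# Krylov sequence b, Ab, A^2 b, ...: linearly recursive (Cayley-Hamilton).
krylov = [b]
for _ in range(n):
    krylov.append(matvec(A, krylov[-1]))

# Find a monic f(x) = x^d + c_{d-1} x^{d-1} + ... + c_0 with f(A) b = 0
# by brute force (feasible only at toy size).
minpoly, deg = None, 0
for d in range(1, n + 1):
    for coeffs in product([0, 1], repeat=d):      # (c_0, ..., c_{d-1})
        acc = krylov[d][:]                        # leading term A^d b
        for i, c in enumerate(coeffs):
            if c:
                acc = [x ^ y for x, y in zip(acc, krylov[i])]
        if not any(acc):
            minpoly, deg = coeffs, d
            break
    if minpoly:
        break

# With c_0 = 1 (A nonsingular), f(A) b = 0 rearranges to
# A (A^{d-1} b + c_{d-1} A^{d-2} b + ... + c_1 b) = b, giving a solution x.
assert minpoly[0] == 1
x = krylov[deg - 1][:]
for i in range(1, deg):
    if minpoly[i]:
        x = [p ^ q for p, q in zip(x, krylov[i - 1])]

print("x =", x, "; A x == b:", matvec(A, x) == b)
```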

The working principle of the Block Wiedemann algorithm [17] is that a
linear generator for the sequence {U^tr B^i V}_{i >= 0} of m x n
matrices, m >= n, can be computed from the first L terms of the
sequence, where

L >= N/m + N/n + 2n/m + 2

Coppersmith proposed the block form of the Wiedemann algorithm in
order to exploit the capacity of computers to perform multiple
operations at the same time. Instead of considering a single vector, he
considered a block of vectors in order to do simultaneous
computations. The algorithm works with two matrices (block vectors)
U in M_{N x m} and V in M_{N x n}. As many as n solutions are found
simultaneously in a single invocation of the algorithm, and the cost
decreases in proportion to the amount of blocking. The algorithm is as
follows:
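As a hedged sketch of the algorithm's first stage only (not its full statement; the sizes and random data below are purely illustrative), the sequence a_i = U^tr B^i V can be generated with the columns of each block vector packed into bitmasks:

```python
import random

# Illustrative stage 1 of Block Wiedemann: compute a_i = U^tr B^i V.
# Block vectors are stored row-wise with columns packed into integer
# bitmasks, so applying B advances all n vectors at once using XORs.
random.seed(1)
N, m, n = 8, 4, 4
L = N // m + N // n + 2                          # terms for the generator

B = [random.getrandbits(N) for _ in range(N)]    # matrix rows as bitmasks
U = [random.getrandbits(m) for _ in range(N)]    # N x m block vector
V = [random.getrandbits(n) for _ in range(N)]    # N x n block vector

def apply_B(Wblk):
    # (B W) row i = XOR of the rows j of W where B[i][j] = 1.
    out = []
    for i in range(N):
        acc = 0
        for j in range(N):
            if (B[i] >> j) & 1:
                acc ^= Wblk[j]
        out.append(acc)
    return out

def ut_times(Wblk):
    # m x n matrix U^tr W over GF(2): entry (k, l) = <U col k, W col l> mod 2.
    a = [[0] * n for _ in range(m)]
    for r in range(N):
        for k in range(m):
            if (U[r] >> k) & 1:
                for l in range(n):
                    a[k][l] ^= (Wblk[r] >> l) & 1
    return a

seq, Wblk = [], V
for _ in range(L):
    seq.append(ut_times(Wblk))
    Wblk = apply_B(Wblk)

print(len(seq), "terms, each an", m, "x", n, "matrix over GF(2)")
```

The second stage (finding a linear generator for this matrix sequence) and the third (evaluating it to obtain solution vectors) are omitted here.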

The overall cost of finding n candidate solutions to Bw = 0 is reduced
in proportion to the blocking factors m and n.

2.4 Summary and Conclusion

So, in this chapter we have considered different methods for solving
the system of linear equations. We first considered the standard
approach, the Gaussian-Elimination method. It is not suitable for our
needs, as it increases the time and space requirements of the problem
and destroys the sparsity of the matrix while solving it. There is also
an improved version, the Structured Gaussian-Elimination method, but
it is not clear up to what limit the structured elimination should be
carried out before handing the system to a sparse solver. We then
considered the iterative methods, such as the Lanczos method, which
maintains the sparsity of the matrix but is inefficient for GF(2)
entries; a block version of such an algorithm was needed. The Block
Wiedemann method was the match for us. We could have used the
block version of the Lanczos method, but that method lacks the
coarse-grain parallelism that the Block Wiedemann method offers.
Finally, we considered the Block Wiedemann method itself, in which we
chose to work on a block of vectors (following Coppersmith) in order
to exploit a computer's ability to do multiple calculations
simultaneously.

Chapter 3

3.2.1 Sparse Matrix Generation

Modern sieve-based integer factoring algorithms generate linear
systems containing over 500,000 equations and variables over the finite
field of two elements. Usually no more than 0.1% of the entries in the
coefficient matrix are nonzero.

For our project we are carrying out the Block Wiedemann algorithm,
and this matrix is its input. Since we are not implementing a
sieve-based integer factoring algorithm, we generate the matrix
ourselves instead. We store the entries of the matrix in Compressed
Sparse Row format, and the entries are in GF(2). The first element of
each row records how many ones are in that row; after that, the index
of each nonzero element is listed. The order of indices is
non-decreasing.
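The row layout just described can be sketched with a made-up dense matrix (the helper compress below is ours, not project code): each stored row begins with its count of ones, followed by the column indices in increasing order.

```python
# Illustrative: store each GF(2) row as [count of ones, sorted indices...].
dense = [
    [0, 1, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0, 0],
]

def compress(matrix):
    rows = []
    for row in matrix:
        idx = [j for j, bit in enumerate(row) if bit]   # increasing order
        rows.append([len(idx)] + idx)
    return rows

print(compress(dense))   # -> [[2, 1, 4], [2, 0, 5], [1, 3]]
```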

Following is part of the matrix in compressed sparse row format: