This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



RE: sparse matrices


Mikael Adlers wrote:

> I do not agree that a matrix of order 10,000
> "is on shaky numerical ground".
> A sparse matrix of this size is easily solvable by a sparse solver
> (even a dense solver can handle a matrix of this size).
> There is no shaky numerical behaviour for matrices of this size,
> and if the matrix is sparse, it is even possible to sharpen
> the usual error estimates, which are quite pessimistic in most cases.
> (There is a growth factor in the error estimate that is 2^(n-1).)

Should we assume that n is the number of rows (or columns) in the matrix?
What do you mean by growth factor?
Are you considering intrinsic and method errors as well as round-off?

> However, numerical analysts agree that this growth is very unlikely,
> and the growth factor in most practical cases is around 10.

Apparently,
you are contrasting the error bound with the expected error here.

> I have worked with a sparse factorization algorithm,
> the QR factorization (computing the least squares solution to a system),
> and my test problems were about 30,000 x 10,000 with 130,000 nonzero
> elements; the factorization took a couple of seconds.
> I considered my test problems small ones.

Were you using single or double precision arithmetic?
What was the error bound?  Expected error?

> In practice the density of a sparse matrix decreases
> as the dimension increases (as with meshes,
> where the number of neighbours does not increase as the mesh grows).

It appears that you are generalizing from a very special case.

> The really large-scale FEM problems can generate sparse matrices
> of order 1e6 - 1e7, and the solution of their systems is possible
> on a supercomputer or a cluster of ordinary PCs using iterative methods
> with sufficient accuracy. I know of an example with a 3D grid
> of the space shuttle consisting of 16,000,000 grid points,
> each point generating at least one equation.

This is a common practice which I find very disturbing.
Good PDE solvers don't copy the elements from
a finite element/difference grid into a sparse matrix structure
just to apply a sparse system solver.  The elements are updated in situ.

> It is an oversimplification to say that the goal of sparse computation
> is to minimize the movement of zeros to and from memory.
> The goal is rather to structure the computation to avoid operations
> on zero elements and to minimize the creation of new nonzero elements
> (which happens in all direct methods, i.e. methods based on factorization).

Once upon a time, floating-point operations were expensive,
but modern floating-point pipelines are fast, cheap and plentiful.
Today, multiplying by zero takes no more time than checking for a zero,
and a branch on zero would most likely just stall
the floating-point pipeline.
The problem is the time required to fetch and store all of those zeros.

> The reason why matrix arithmetic libraries like gsl or LAPACK
> (I have no knowledge about VSIPL)
> do not support any sparse matrix data type is the following.
> To be able to use the sparsity of the problem,
> it is important that all intermediate results also are sparse.
> For example, when solving a linear system with LU decomposition,
> both L and U must be sparse.  Otherwise, all is for nothing.
> To do this, a graph representing the structure is created.
> Then some graph algorithms are added to analyse the graph
> and try to minimize any destruction of sparsity.

In general, neither L nor U will be sparse.

> However, in iterative libraries like IML++,
> only matrix - vector products are needed and therefore,
> it is possible to use it with sparse data structures.

Iterative system solvers are the only practical method
for sparse matrices in general.

> The different storage formats that exist maximize the performance
> of the specific operation that you would like to perform on the matrix.
> When solving a tridiagonal system,
> you would like to have the matrix stored in compressed row-wise format
> and, when computing the QR factorization,
> it is more useful to have it stored column-wise.

Neither your computer nor the C programming language
has any notion of rows or columns.
The notion of rows and columns exists only in the programmer's mind.
A QR factorization on an array of numbers
that you perceive to be stored column-wise is simply
an LQ factorization on the same array of numbers
that you perceive to be stored row-wise.

