This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



Re: How to create a 3D matrix?


On Sun, 8 Jun 2003, Dr. David Kirkby wrote:

> I *think* the book Numerical Recipes in C by Press et al. (the full
> book is available online) has a method for 3D arrays.  Their method
> for 2D matrices never seems that efficient to me, and I'm tempted to
> use pointer arithmetic instead, although it is messier.  The method
> used in that book means the array is not one contiguous area of
> memory, so I suspect it does not make good use of the CPU's cache.

Numerical Recipes has a nasty license requiring one to pay, pay, and
pay to use their code in any sort of real project, especially one in a
cluster environment, and (as you note) its code ranges from decent to
really poor.  There is a famous online rant against the code quality of
NR that should show up in Google.  One reason I was ecstatic when the
GSL project came out was that I could finally get away from using NR
for anything, and I have by now just about fully excised NR from all my
projects.

The idea of using mallocs wrapped inside array-creation routines to
allocate space for rectangular matrices is sound enough, and I'm sure it
was common practice long before NR was written.  It is very simple to
write a routine that allocates space in a contiguous nrows x ncols
block and repacks its addresses into a **array (or a ***array and up,
for higher-order tensors), and not TOO difficult to write routines that
allocate space efficiently for e.g. spherical harmonic (l,m) indexed
arrays.  A sketch of the 3D case follows; if anyone is interested in
one of the fancier variants I can post it.
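
Something like this (an untested sketch; alloc3d/free3d are just names
I made up for illustration, not GSL routines):

#include <stdlib.h>

/* Allocate an n1 x n2 x n3 block of doubles in ONE contiguous malloc,
   then build the pointer tables so it can be addressed as a[i][j][k].
   a[0][0] is the start of the contiguous data block. */
double ***alloc3d(size_t n1, size_t n2, size_t n3)
{
   double *data  = malloc(n1 * n2 * n3 * sizeof(double));
   double **rows = malloc(n1 * n2 * sizeof(double *));
   double ***a   = malloc(n1 * sizeof(double **));
   size_t i, j;
   if (!data || !rows || !a) {
      free(data); free(rows); free(a);
      return NULL;
   }
   for (i = 0; i < n1; i++) {
      a[i] = rows + i * n2;
      for (j = 0; j < n2; j++)
         a[i][j] = data + (i * n2 + j) * n3;
   }
   return a;
}

void free3d(double ***a)
{
   if (a) { free(a[0][0]); free(a[0]); free(a); }
}

After that, a[i][j][k] works as expected, and because a[0][0] points at
one flat block you can hand the whole thing to anything that wants a
plain vector.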

Numerical efficiency is likely to be better for a contiguous array
allocation, but a lot still depends on stride and access pattern.  If
one is using lots of linear algebra, one should probably try the
ATLAS-optimized BLAS libraries, which automagically block out matrices
and switch algorithms to get maximal performance from the particular
cache and memory architecture one is running on.
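
For instance (a sketch, assuming the data block is contiguous and
row-major as allocated above; GSL will use the ATLAS cblas if you link
against it in place of -lgslcblas):

#include <gsl/gsl_blas.h>
#include <gsl/gsl_matrix.h>

/* C = A * B, where a, b, c point at contiguous row-major n x n blocks
   (e.g. the data block behind a **matrix built as above).  The views
   wrap the raw storage without copying it. */
void matmul(const double *a, const double *b, double *c, size_t n)
{
   gsl_matrix_const_view A = gsl_matrix_const_view_array(a, n, n);
   gsl_matrix_const_view B = gsl_matrix_const_view_array(b, n, n);
   gsl_matrix_view       C = gsl_matrix_view_array(c, n, n);

   /* C = 1.0 * A B + 0.0 * C */
   gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0,
                  &A.matrix, &B.matrix, 0.0, &C.matrix);
}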

A final advantage of controlling your own allocation of arrays in this
way is that e.g. ODE solvers nearly invariably require a vector
(*pointer) of coupled variables and derivatives, while many physical
problems involve variables and derivatives with a generalized tensor
coupling, with more than one index.  Managing all the reindexing of
tensor components into vector form is tedious, error prone, and often
results in code that is very difficult to read or check.  However, if
one repacks a variable array (and its associated derivative array)
ALLOCATED as vectors, one can pass them to an ODE solver by their
vector pointer and still use the natural tensor notation in the
derivative-evaluating function.  I've used this trick for years to
solve systems of y_i, dydx_i while actually writing stuff like
dC[l][m][lp][mp] = some function involving C[lp][mp][ldp][mdp] and
S[lp][mp][ldp][mdp], summed over ldp and mdp, with another similar
derivative for dS.  This uses all of the angular-momentum-indexed array
packing tricks above, and then some...
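
In case that's too telegraphic, here is the skeleton of the trick (a
sketch only: LMAX and the right-hand side are placeholders, and real
code would build the overlay tables once rather than on every call):

#include <stdlib.h>

#define LMAX 8   /* placeholder dimension */

/* Build a table of row pointers overlaying a flat length LMAX*LMAX
   vector, so flat[l*LMAX + m] can be written v[l][m]. */
static double **overlay(double *flat)
{
   double **v = malloc(LMAX * sizeof(double *));
   size_t l;
   for (l = 0; l < LMAX; l++)
      v[l] = flat + l * LMAX;
   return v;
}

/* Derivative callback in the form GSL's odeiv routines expect: the
   solver sees flat y[] and dydt[], while the physics is written in
   (l,m) notation on overlays of the SAME storage. */
int derivs(double t, const double y[], double dydt[], void *params)
{
   double **C  = overlay((double *) y);   /* cast: we only read y */
   double **dC = overlay(dydt);
   size_t l, m;

   for (l = 0; l < LMAX; l++)
      for (m = 0; m < LMAX; m++)
         dC[l][m] = -C[l][m];             /* placeholder RHS */

   free(C); free(dC);
   (void) t; (void) params;
   return 0;                              /* GSL_SUCCESS */
}

The solver never knows the difference; it just integrates a flat
vector.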

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu



