This is the mail archive of the guile@cygnus.com mailing list for the Guile project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Well, that's interesting.


On Thu, 10 Jun 1999, Havoc Pennington wrote:
> On Thu, 10 Jun 1999, Lynn Winebarger wrote:
> >     This sort of thing would best be in the dynamic linker, not the
> > compiler.  It would be in a position to dynamically adjust it as the need
> > arose, rather than attempting to do statically what is an inherently
> > dynamic optimization.
> > 
> It's not in the compiler; what you do is rearrange the library on disk so

    I was referring to your mention of commercial compilers attempting
this kind of optimization.

> that functions used together are close to each other. It takes quite a bit
> of time to determine the optimum arrangement; that's what the simulated
> annealing algorithm does. Of course the optimum arrangement is only
> optimum for some particular pattern of use, so when optimizing you have to
> try to simulate "typical" use.
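   For concreteness, the annealing idea might look like this toy sketch
in Python.  (The cost model, names, and parameters here are illustrative
assumptions of mine, not what Nat's actual tool does: cost is just the
number of page crossings between consecutive calls in a recorded trace.)

```python
import math
import random

PAGE_SIZE = 4096

def page_of(layout, sizes):
    """Map each function to the page it starts on, given an ordering."""
    pages, offset = {}, 0
    for fn in layout:
        pages[fn] = offset // PAGE_SIZE
        offset += sizes[fn]
    return pages

def cost(layout, sizes, trace):
    """Count page crossings between consecutive calls in the trace."""
    pages = page_of(layout, sizes)
    return sum(1 for a, b in zip(trace, trace[1:]) if pages[a] != pages[b])

def anneal(funcs, sizes, trace, steps=20000, temp=5.0, cooling=0.9995):
    """Simulated annealing over function orderings: propose random swaps,
    always accept improvements, accept regressions with a probability
    that shrinks as the temperature cools."""
    layout = list(funcs)
    best, best_cost = list(layout), cost(layout, sizes, trace)
    cur = best_cost
    for _ in range(steps):
        i, j = random.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
        new = cost(layout, sizes, trace)
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best_cost:
                best, best_cost = list(layout), cur
        else:
            layout[i], layout[j] = layout[j], layout[i]  # undo the swap
        temp *= cooling
    return best, best_cost
```

With, say, four 2 KB functions where the trace alternates between two of
them, the search quickly learns to put those two on the same page.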

   While a stochastic search does take some (possibly significant) time,
it wouldn't be necessary to do this at every load.  You could have the
library take a random sampling of a program's usage and store that in a
database; then have the linker occasionally figure out the optimal
pattern for the program (or perhaps some set of programs), store that,
and load the library using that pattern at run-time, independently of
how it's stored on disk.  If you really needed a dynamically adjustable
system (say you have a long-running program whose usage pattern changes
over time), you could have the loader deal with it periodically, perhaps
with an explicit call, perhaps via an implicit call in the code doing the
random sampling.
    If you really wanted to do it dynamically on each load, you could
develop a quick search that finds an approximate solution, probably using
precomputed partial solutions for likely cases.
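   One plausible shape for such a quick approximate search (my own
sketch, with made-up names): tally which functions the sampled trace
calls back-to-back, then greedily chain the hottest neighbors so they
end up adjacent, and hence usually on the same page.

```python
from collections import Counter

def greedy_layout(funcs, trace):
    """Approximate layout: start from the most-called function, then
    repeatedly append the unplaced function most often called adjacent
    to the one just placed."""
    adj = Counter()
    for a, b in zip(trace, trace[1:]):
        if a != b:
            adj[frozenset((a, b))] += 1
    placed, layout = set(), []
    cur = Counter(trace).most_common(1)[0][0]  # seed with the hottest function
    while True:
        layout.append(cur)
        placed.add(cur)
        if len(placed) == len(funcs):
            return layout
        rest = [f for f in funcs if f not in placed]
        cur = max(rest, key=lambda f: adj[frozenset((cur, f))])
```

This runs in roughly quadratic time in the number of functions rather
than requiring thousands of annealing steps, at the price of a layout
that is merely good rather than optimal.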

> The reason this increases speed is that the kernel loads the library in
> pages; if you can reduce the number of pages loaded by getting groups of
> functions on the same page, then the kernel won't be doing as much work.
> At least, that's my foggy memory of Nat's talk.
> 
   That's the argument all right.  
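   A toy illustration of that page argument (the sizes and names below
are invented for the example): with eight 1 KB functions on 4 KB pages,
a loop that only calls two of them keeps two pages resident when the
pair is scattered, but only one page when they are laid out adjacently.

```python
PAGE_SIZE = 4096

def pages_touched(layout, sizes, hot):
    """Pages the kernel must keep resident to run the hot functions,
    given a layout (ordering) and per-function sizes in bytes."""
    touched, offset = set(), 0
    for fn in layout:
        start, end = offset, offset + sizes[fn] - 1
        offset += sizes[fn]
        if fn in hot:
            touched.update(range(start // PAGE_SIZE, end // PAGE_SIZE + 1))
    return len(touched)

sizes = {f: 1024 for f in "abcdefgh"}  # eight 1 KB functions, 4 KB pages
hot = {"a", "e"}                       # the two functions a loop actually calls

scattered = list("abcdefgh")  # "a" lands on page 0, "e" on page 1
grouped = list("aebcdfgh")    # "a" and "e" adjacent, sharing page 0
```

Here `pages_touched(scattered, sizes, hot)` is 2 while
`pages_touched(grouped, sizes, hot)` is 1 -- half the paging work for
the same calls, which is the whole point of the rearrangement.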

Lynn, who wagers his post would be put in the "crap" pile by the
  guy who monitors the linux-kernel mailing list.

