This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.
Re: stabs vs. dwarf-2 for C programs
Daniel Berlin <dan@cgsoftware.com> writes:
> Oh, and I know about the BFD_IN_MEMORY stuff; it just doesn't seem to
> perform well compared to just mmapping the same files, probably because
> mmap is much better optimized than reading the whole file into
> memory at once.
BFD_IN_MEMORY was a wretched hack I threw in there to support OSF/1
3.2 archive files, in which the native ar program would compress
object files stored in archives (as I recall, I disassembled the
native ar program to work out the compression scheme). Using
BFD_IN_MEMORY was simpler than using a temporary file and much simpler
than decompressing on demand.
Since then DJ has beefed up BFD_IN_MEMORY to work for some other
cases. But in any case it was never intended to be a poor man's
mmap.
> For instance, linking gdb, which is one archive plus 8 objects, requires
> 5000 out-of-memory seeks (i.e., fseeks), 7532 out-of-memory reads of
> random sizes at random offsets, and 28000 out-of-memory writes.
>
> Of course, linking gdb is barely disk-I/O bound; the numbers get much
> worse with large programs.
> The worst part about the reads is that we do tons of small reads
> (almost all of them are 40 bytes or less), with 80% followed by seeks
> to positions probably just outside the buffering range, and then more
> reads (i.e., we do 8 40-byte reads, then a seek).
I'm surprised that the reads are so small. I would have thought that
most of the reads when linking ELF are for complete ELF sections, and
I wouldn't expect most ELF sections to be 40 bytes or less.
I wonder if this is in part caused by the overall trend toward more,
smaller sections? The linker is written to handle one section at a
time, which would cause worse behaviour when there are many small
sections.
Ian