This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



Re: gdb support for Linux vsyscall DSO


   Date: Sat, 10 May 2003 00:07:07 -0700
   From: Roland McGrath <roland@redhat.com>

   It's notable that I didn't say i386 in the subject.  I am helping David
   Mosberger implement the same concept on ia64 in Linux 2.5 as well.  When
   glibc starts using those entry points on Linux/ia64 (it doesn't yet), then
   there will be the same issues for getting at the unwind info.  The vsyscall
   DSO plan will be the same, so getting at the ELF sections is addressed the
   same way (more on that below).  In the ia64 case, the unwind info is ia64's
   flavor rather than DWARF2 flavor, but the use of ELF phdrs and sections to
   locate it is precisely analogous.

Thanks for your detailed explanation.

No work has yet been done to convert GDB's ia64 target to make it use
the new frame unwinder framework.  Andrew's idea was that it should be
possible to just hook the frame unwind methods into an existing
target, but that hasn't been tested yet.

   > It certainly is my intention to make it possible, although it's not
   > clear how things should be glued together.

   As you might imagine, I had thought through most of the details of what gdb
   needs to do when I made the kernel changes (whose main purpose for me was
   to make the gdb support possible).  There are two cases that need to be
   addressed.  You've brought up the case of live processes.  There is also
   the case of core dumps.

   > I've seen your kernel patches and it seems as if there are two
   > possibilities:

   You omitted possibility #0, which I had intended to preemptively describe
   and explain why I rejected it.  That is, to have some file that gdb can
   read.  It would be simple to have a /proc file that gives you the DSO image
   as a virtual file, and have the dynamic linker's shlib list refer to this
   file name.  Then gdb might not need any change at all.  However, this sucks
   rocks for remote debugging.  It also doesn't sit right with me because the
   vsyscall DSO is actually there and your PC might be in it, even if you are
   not using the dynamic linker or glibc at all and have no "info shared" list.

Yup, that's why I didn't mention it :-).

   > 1. Reading the complete DSO from target memory, and somehow turning it
   >    into a bfd.  That would make it possible to use the existing symbol
   >    reading machinery to read symbols and .eh_frame info.

   This is what I had anticipated doing, at least the first cut.  Given some
   true assumptions about the vsyscall DSO, it is trivial to compute the total
   file size from the ELF headers and recover the byte image of the file
   exactly.  It can't be too hard to fake up a bfd in memory as if the
   contents had been read from a file.

Fair enough.

   [...]

   > 2. Write a symbol reader that uses the run-time dynamic linker (struct
   >    r_debug, struct link_map) to locate the dynamic section of the
   >    shared object, and uses it to read the relevant loaded sections of
   >    the DSO from the target and interpret those bits.
   > 
   > If I'm not mistaken, the latter would also allow us to construct a
   >    minimal symbol table for DSO's even if we don't have access to an
   > on-disk file.  

   Indeed so!  Or even if you just have a completely stripped ELF DSO on disk.
   (Or even if you just get so tired of turning off auto-solib-add and using
   add-symbol-file, and cursing its inability to add .text's vma to the
   loadbase itself so you don't have to run objdump -h every damn time,
   umpty-ump times a day debugging subhurds, that you just break down and
   implement solib-absolute-prefix and become a compulsive maniac to keep
   development and test filesystems synchronized!  Oh, I guess that won't
   exactly be happening to anyone else in the future. ;-)

   This is roughly the same as what I had thought would be the better
   long-term plan than the above section-matching.  However, I would separate
   it into two pieces.  I would still advocate using a special case mechanism
   to find the vsyscall DSO's ELF header and locate its guts from there.  That
   works even if the run-time dynamic linker's data structures are mangled or
   missing.  But it is cleaner for the reasons above if that works from the
   phdrs out (thence quickly to .dynamic) only, and doesn't rely on finding
   section headers in the DSO.

The run-time dynamic linker's data structures usually are intact, but
a fall-back mechanism wouldn't hurt, I guess.

   [...]

   These latter ideas (everything from the quoted #2 on) are of secondary
   concern.  I went into complete detail about them now only because you
   brought it up.  Symbols from the vsyscall DSO in a core dump are nice, but
   not essential.  Getting details from normal DSOs without using disk files
   is a new feature that is very nice but not necessary, nor related except in
   implementation details, to supporting vsyscall stuff.  The steps above are
   not part of my immediate goals.

OK, as long as we try to keep the code as generic as possible and
isolate platform-specific hacks in platform-specific files.

Note that symbols from the vsyscall DSO would be very helpful to the
user.  There is also an issue with signal trampolines: GDB needs to be
able to recognize them in order to support stepping over functions
when a signal arrives.  I proposed to Richard Henderson that we add a
special DWARF CFI augmentation to mark __kernel_sigreturn and
__kernel_rt_sigreturn as signal trampolines.  That would allow me to
recognize them without knowing their names.

   The essential need is to get .eh_frame information from the vsyscall DSO in
   both live processes and core dumps.  The section-faking code already
   suffices for core dumps.  For the time being, the image provided by the
   kernel does have section headers, so it suffices just to synthesize a bfd
   containing the whole file image read out of a live inferior's memory.  My
   immediate goal is to get things working just using this.  Other things we
   can see about later.  For this immediate goal, I would take these steps:

   1. Make dwarf-frame.c work with .eh_frame info.  Mark is working on this,
      so I will wait for him or help with getting it done.

I just checked something in.  It could use a few more comments, and
certainly needs to be tested, but the basic support for .eh_frame
sections is there.

   2. Make dwarf-frame.c locate .eh_frame via .eh_frame_hdr, matching a section
      named "eh_frame_hdrNN" as core dumps now have.  
      This is pretty trivial after step 1.

   3. Modify corelow.c to do something like symbol_add_file with the core bfd.
      If that barfs on some backends, it could be done just for ELF cores.  I
      don't think it needs to be put into a Linux-specific module; if any
      other ELF core files contain useful phdrs they will work the same way.
      It also needs to remove the objfile/symfile again when the core file is
      detached.  On this I could use advice from those more expert on gdb.  I
      think it may suffice to call symbol_file_add on the core file name in
      place of opening the bfd directly, store the objfile pointer somewhere,
      and call free_objfile in core_detach.  But that is only a guess.  I can
   write this bit and observe internally that it works independently of the
      steps above that make it useful, so immediate implementation advice is
      solicited.

Ordinary shared libraries are cleared out in core_close() by
CLEAR_SOLIB().  This calls solib.c:clear_solib(), so I think you
should remove things in that codepath.

   4. Write a function that creates an in-memory bfd by reading an ELF header
      and the information it points to from inferior memory.  I'll take a
      whack at this soon.

   5. Write Linux backend code to locate the vsyscall DSO's ELF header, use
   that function to synthesize a bfd, and do something like symbol_add_file
      with it.  I'm not sure where this belongs.  It will be Linux-specific
      but not necessarily machine-dependent.  Where it belongs exactly
      probably depends on how exactly the locating is done.  The need to
      attach and detach the synthesized objfile is similar to the core file
      case.

   Heretofore I have avoided mentioning how we locate the vsyscall DSO's ELF
   header in the inferior's address space.  It's an unresolved question that
   needs discussion, but it's a minor and boring detail among all the issues
   involved here.  I saved it for those with the proven stamina to get through
   the first 200 lines of my ranting.

I made it through ;-).

   The kernel tells the running process itself where to find the vsyscall DSO
   image with an ElfNN_auxv_t element of type AT_SYSINFO_EHDR in the aux
   vector it writes at the base of the stack at process startup.  For gdb to
   determine this value, there are several approaches possible.

   * Add a ptrace request just for it, i.e. PTRACE_GETSYSINFO_EHDR or
     something.  That is trivial to add on the kernel side, and easy for
     native Linux backend code to use.  It just seems sort of unsightly.
     A way to get that address from some /proc/PID file would be equivalent.

   * Locate the address of the aux vector and read it out of the inferior's
     stack.  The aux vector is placed on the stack just past the environment
     pointers.  AFAIK, gdb doesn't already have a way to know this stack
     address.  It's simple to record it in the kernel without significant new
     overhead, and have a way via either ptrace or /proc to get this address.
     I raise this suggestion because it may be most acceptable to the Linux
     kernel folks to add something with so little kernel work involved.  The
     problem with this is that the program might have clobbered its stack.
     glibc doesn't ordinarily modify it, but a buggy program might clobber it
     by random accident, and any program is within its rights to reuse that
     part of the stack.  It won't have done so at program startup, but if you
     attach to a live process that has clobbered this part of its stack then
     you won't find the vsyscall info so as to unwind from those PCs properly.
     I am curious what gdb hackers' opinions on this danger are.

   * Add a way to get the original contents of the aux vector, like
     /proc/PID/auxv on Solaris.  That could be /proc/PID/auxv, or new ptrace
     requests that act like the old PIOCNAUXV and PIOCAUXV.  On Solaris,
     /proc/PID/auxv's contents are not affected by PID clobbering the auxv on
     its stack.  In Linux, about half of the auxv entries are constants the
     kernel can easily fill in anew on the fly, but the other half are
     specific bits about the executable that are not easy to recover later
     from other saved information.  Though AT_SYSINFO_EHDR is all we need, a
     general interface like this really should give the complete set that was
     given to the process.  By far the simplest way to implement that is
     simply to save the array in the kernel.  As the kernel code is now, I can
     do that without any additional copying overhead, but it does add at least
     34 words to the process data structure, which Linux kernel people might
     well reject.  Of these three options, this one is my preference on
     aesthetic grounds but I don't know whether it will happen on the kernel
     side.

   I have not been able to imagine any way to get this magic address (the
   vsyscall DSO loadbase) directly from the system that does not require a
   special backend call and therefore cause some kind of new headache for
   remote debugging.  I don't know what people's thinking is on trying to
   translate this kind of thing across the remote protocol.

   There is also the option to punt trying to find it directly from the
   system, and rely on the dynamic linker's data structures to locate it.  As
   mentioned above, I don't like relying on the inferior's data structures not
   being frotzed, nor relying on there being a normal dynamic linker to be
   able to know about the vsyscall DSO.  Furthermore, the normal dynamic linker
   report of l_addr is not helpful because that "loadbase" is the bias
   relative to the addresses in the DSO's phdrs, which is 0 for the vsyscall
   DSO since it is effectively prelinked.  The only address directly available
   is l_ld, the address of its .dynamic section.  There is no proper way from
   that to locate the phdrs or ELF file header, so it would have to be some
   kludge rounding down or searching back from there or something.  The only
   thing favoring this approach is that it requires no new target interfaces
   and no new remote debugging complications.

Relying on the dynamic linker's data structures certainly seems
attractive to me as an additional method since it works independently
of the remote protocol.  To avoid kludging around one could:

1. Expose l_phdr in `struct link_map'.

2. Add a field to `struct r_debug'.

3. Add a dynamic tag.  Since the vsyscall DSO is prelinked, I suppose
   this tag could be initialized correctly in the in-kernel image, and
   wouldn't need to be modified by the run-time dynamic linker.  One
   could even consider abusing DT_DEBUG for this purpose.

   I think this question is pretty open, though not all that exciting.  My
   inclination is to implement /proc/PID/auxv (with storage overhead) and see
   what the Linux kernel hackers' reaction to that is.  They may suggest that
   it work by reading out of the process stack (which is what
   /proc/PID/cmdline and /proc/PID/environ do).  I would like to know opinions
   from the gdb camp on how bad a thing to do that might be.  Second choice is

Well, a buffer overflow of a buffer allocated on the stack is a fairly
common problem, and it is not unimaginable that it overwrites auxv.  I
think this is much more likely to happen than unintentional clobbering
of the dynamic linker's data structures, since IIRC those are not
allocated from the standard heap.

   to make /proc/PID/maps list the vsyscall DSO mapping in a recognizable way;
   that is likely to go over well enough with the kernel hackers.  I would be

That would probably be better than implementing /proc/PID/auxv by
reading out of the process stack.

   all for an entirely different solution that is both robust (not relying on
   the inferior's own fungible memory) and remote-friendly, if anyone thinks
   of one.

Mark

