This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: [rfc] Options for "info mappings" etc. (Re: [PATCH] Implement new `info core mappings' command)
- From: "Ulrich Weigand" <uweigand at de dot ibm dot com>
- To: pedro at codesourcery dot com (Pedro Alves)
- Cc: gdb-patches at sourceware dot org, jan dot kratochvil at redhat dot com, sergiodj at redhat dot com
- Date: Tue, 6 Dec 2011 17:46:04 +0100 (CET)
- Subject: Re: [rfc] Options for "info mappings" etc. (Re: [PATCH] Implement new `info core mappings' command)
Pedro Alves wrote:
> On Monday 05 December 2011 14:52:04, Ulrich Weigand wrote:
> > "info proc" doesn't work that way; it is completely tied to native
> > operation. Adding an "info core" does make some information available
> > for core files, but it doesn't really solve the underlying problem:
> > you still need to remember to use a different command, and even so
> > it doesn't work for remote/gdbserver targets at all.
> IIRC, the rationale given for the objections was that "info proc" was
> originally intended as just a frontend for /proc (hence it accepting
> PIDs not being debugged), and, that there are other core-specific info
> bits that we could attach as "info core" subcommands.
> Playing devil's advocate, under this perspective, another way to look
> at it is to consider that "info proc PID" should still read /proc
> info from the running system, even when debugging a core, or just an exec.
Well, that's how "info proc" works today -- it looks at the native system
/proc no matter what inferior you're debugging, even when using a completely
different target (e.g. remote).
It's just that this behaviour seems a bit unexpected to me from a user's
perspective. I'm not sure whether the proper fix for this is to re-purpose
the "info proc" command (along the lines of "show me the information the
OS provides in /proc about the process underlying the current inferior,
no matter how I'm currently accessing it"), or whether we ought to just
leave "info proc" as is and implement some new command.
> > That's why my suggestion was to instead move "info proc" to be
> > target-independent. That is to say, it would still show Linux-specific
> > information about a process, but it would no longer depend on whether
> > you look at that Linux process natively, remotely, or post-mortem.
> One could argue that generic info like that should be under
> the "inferior" moniker, not "process".
I personally wouldn't really care either way. But it's probably not a
good idea to provide yet more similar-but-slightly different commands
to confuse the user; IMO we shouldn't have both "info proc mappings"
*and* a new "info inferior mappings" ...
It seems to me that the original user request was along these lines:
users are familiar with the "info proc mappings" command and use it
frequently; but the command doesn't work remotely or with core files --
can't we fix that?
> I definitely agree that "info proc FOO" should be forwarded. Debugging
> against remote or native should ideally provide the same experience,
> you're just connected to a different host (localhost or remote).
> > In my mind, the proposed TARGET_OBJECT_PROC would fall into the second
> > category, that is, it provides access to pre-existing, operating-system
> > defined contents, while simply abstracting the means of delivery. In
> > particular, I would not expect the "provider side" (Linux native target
> > or gdbserver code) to ever implement any sort of "conversion" of the
> > contents. If there ever should be changes to the contents of /proc
> > files, the task of adapting to those changes should lie solely on
> > the gdbarch code that consumes the TARGET_OBJECT_PROC objects.
> How are we making "info proc map" work with core files
> with this? I'd imagine the core target falling back to the gdbarch
> method, but are we then making the core target synthesize TARGET_OBJECT_PROC
> objects for the gdbarch method to consume? That's where the bit
> about "I don't expect the "provider side (...) to ever implement any
> sort of "conversion" of the contents" seems to fall short.
My thoughts for this were for core_xfer_partial to handle the
TARGET_OBJECT_PROC case by calling into a new gdbarch routine
gdbarch_core_xfer_proc or so (along the lines of
gdbarch_core_xfer_shared_libraries). The linux-tdep implementation
of this would then synthesize /proc/../map contents corresponding
to the core file.
(In the alternative, we could just have a generic gdbarch_core_xfer_partial
routine and move some of the existing platform-specific stuff there.)
It's true that in this case, we would synthesize /proc output, but in
a sense that's just because the kernel didn't provide it -- in theory,
the kernel could put /proc file contents into core file notes, just like
it does e.g. with spufs contents ...
Also, the code synthesizing /proc output would live in one place, right
next to the code parsing /proc output, both in linux-tdep.c. So it
shouldn't be much of a maintenance hazard going forward to keep the
two in agreement ...
> > Of course, as you say, this means that TARGET_OBJECT_PROC really only
> > can ever be consumed by OS-specific, usually gdbarch code. (But that's
> > still better than having *native-target-only* code IMO.)
> If GDB already needs to know what it is reading, then this could also be
> implemented by having the gdbarch hook open/read remote:/proc/PID/maps ?
> No new target object or packets necessary? Because I'm not seeing what
> TARGET_OBJECT_PROC brings over that (though I'm still confused on how
> "info proc map" on cores is meant to be implemented with this).
For remote, we could do something along those lines (we cannot directly
use "remote:" because this is only implemented for BFD access -- but
we could use the underlying remote_hostio_pread etc. routines).
However, this would mean that gdbarch code would have to know whether or
not it runs on a remote target, or on a Linux native target (we wouldn't
want to access /proc on some system where this isn't available or maybe
does something different). Also, it wouldn't work for core files.
That's why I'm suggesting separating *accessing* the data (into target
code via TARGET_OBJECT_PROC) from *parsing* the data (into gdbarch code).
[ I guess we could implement TARGET_OBJECT_PROC without a new packet type
by implementing the TARGET_OBJECT_PROC xfer in remote.c via remote_hostio_pread.
This assumes that the remote side always has a Linux-style /proc,
however. Not sure whether we should make that assumption ... ]
> > I wouldn't mind renaming the object to TARGET_OBJECT_LINUX_PROC to make
> > the intention about the objects contents clearer. (I thought that maybe
> > other procfs targets could also use TARGET_OBJECT_PROC, but since of
> > course the contents would be different, it might be better to use a
> > new object type if and when we ever do that ...)
> I think the name is fine. There's something bugging
> me that may affect the decision though. Up until very recently, the
> FOO in "info proc FOO" was more freeform than it is now:
> But IIUC, while the TARGET_OBJECT_PROC object still takes the
> FOO to return as annex, the set of possible FOOs (map, exec, etc.)
> will now be hardcoded, instead of leaving those to the
> backend/target as well.
My thoughts were for common code to provide the superset of the
sets of subcommands supported by all potential gdbarch backends,
and then have the particular backend reject those it doesn't support.
I guess we could try to make the list of subcommands itself
dynamic and depend on the current gdbarch. That might require
some enhancements to the command handling infrastructure, though.
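The "common superset, backend rejects" idea could look roughly like the following sketch; the subcommand list, struct proc_backend, and the function names are all made up for illustration:

```c
#include <assert.h>
#include <string.h>

/* Common code knows every possible "info proc" subcommand ...  */
static const char *all_subcommands[] =
  { "mappings", "stat", "status", "cwd", "cmdline", "exe" };

/* ... and each gdbarch backend declares the subset it implements.  */
struct proc_backend
{
  const char *const *supported;  /* NULL-terminated list.  */
};

static int
backend_handles (const struct proc_backend *be, const char *sub)
{
  for (const char *const *p = be->supported; *p != NULL; p++)
    if (strcmp (*p, sub) == 0)
      return 1;
  return 0;
}

/* Common command handler: validate against the superset first, then
   let the backend reject what it does not implement.  Returns 0 on
   success, -1 for known-but-unsupported, -2 for unknown.  */
static int
info_proc_command (const struct proc_backend *be, const char *sub)
{
  int known = 0;
  for (size_t i = 0; i < sizeof all_subcommands / sizeof *all_subcommands; i++)
    if (strcmp (all_subcommands[i], sub) == 0)
      known = 1;
  if (!known)
    return -2;
  if (!backend_handles (be, sub))
    return -1;
  return 0;  /* Would dispatch to the backend here.  */
}
```

Making the list itself dynamic per-gdbarch would replace the static all_subcommands table with a query on the current gdbarch, which is where the command-infrastructure enhancements would come in.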
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE