This is the mail archive of the archer@sourceware.org mailing list for the Archer project.


Re: Proof-of-concept on fd-connected linux-nat.c server


Hi Chris,

On Sat, 09 May 2009 21:50:04 +0200, Chris Moller wrote:
> I talked to Eric about gdb/froggy/utrace last week and after a while it
> became apparent that my impression of what I should be doing--a
> wholesale replacement of ptrace/waitpid in synchronous gdb--wasn't what
> he had in mind.

the Stork patch works with -ex "set non-stop yes" -ex "set target-async yes",
including "cont&" etc.; that it also works in synchronous mode is just
a coincidence.


> Apparently, what I was supposed to have been doing is
> working on using froggy/utrace in asynchronous free-running threads. 

I believe we are talking about inferior free-running threads here.  This is
what I was implementing in the Stork patch.

Multithreaded GDB itself is currently IMO out of the question, as GDB does not
use any internal locking of its data structures.  Moreover, at least the slow
startup was, during my tests, disk-performance limited, not CPU limited.


> That makes vastly more sense than the wholesale replacement
> thing--basically reimplementing ptrace/waitpid over a file descriptor
> would add almost no functionality,

 * No longer any SIGCHLD hassle in GDB; Tom Tromey was pointing out possible
   conflicts with Python code which would like to utilize SIGCHLD on its own.
   (The stop events just arrive on a file descriptor instead; see the sketch
   after this list.)

 * A (small) prerequisite for multihost functionality - there can be multiple
   file descriptors, one to each host doing ptrace/waitpid remotely, for
   debugging multihost applications (which may be making inter-host RPC
   calls).

 * A separate execution context for the waitpid handling can for example do
   PTRACE_GETREGS and some memory reading in advance - that is, in parallel
   with the main GDB process on current and future multicore CPUs.  A later
   PTRACE_GETREGS by GDB code can then be resolved instantly from the
   registers sent asynchronously right after the asynchronous waitpid
   notification, so that there are no client<->server (possibly multihost)
   round-trip-time delays at the moment an inferior stops.  See the sketch
   after this list.
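
To illustrate the first and the last point, here is a minimal sketch of the
server-side loop I have in mind.  This is not the actual Stork code - the
stork_event layout and all the names are invented for this mail, and
PTRACE_GETREGS/struct user_regs_struct are the x86 flavor:

#define _GNU_SOURCE
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

/* One stop notification pushed to the client; the layout is made up just
   for this sketch.  */
struct stork_event
{
  pid_t pid;
  int wait_status;               /* Raw waitpid status.  */
  struct user_regs_struct regs;  /* Registers prefetched by the server.  */
};

/* Reap inferior events and push them - registers included - down CLIENT_FD,
   so the client needs no PTRACE_GETREGS round trip at the moment an inferior
   stops.  No SIGCHLD handler is involved anywhere; the client just waits on
   its end of the socket.  */
static void
serve_events (int client_fd)
{
  for (;;)
    {
      struct stork_event ev;

      ev.pid = waitpid (-1, &ev.wait_status, __WALL);
      if (ev.pid == -1)
        break;
      if (WIFSTOPPED (ev.wait_status))
        /* Prefetch in parallel with the main GDB process.  */
        ptrace (PTRACE_GETREGS, ev.pid, NULL, &ev.regs);
      if (write (client_fd, &ev, sizeof ev) != (ssize_t) sizeof ev)
        break;
    }
}

The client caches ev.regs together with the stop notification, so a later
register fetch by GDB code becomes a local memory read; and with one such
client_fd per host the multihost case from the second point comes for free.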

Still, the question is whether Stork is not just a reinvention of gdbserver.
gdbserver has so far not been on a par with linux-nat.c, but the situation is
changing now that it is gaining watchpoints/multiprocessing/etc. support.


> would likely hurt performance at least a little,

In the singlehost singleprocess model the current FSF GDB performance is
optimal.  But for multicpu+multihost+multiprocess configurations the optimal
execution model is different (and it unfortunately adds some overhead to the
bare singlehost singleprocess operation).


> The asynchronous free-running thread thing, on the other hand, takes
> advantage of features of utrace/froggy not readily available with
> ptrace/waitpid and is much more in keeping with what I designed froggy for
> in the first place

I agree ptrace() should be replaced; this part was not touched in Stork.


> I'm going to add direct file descriptor access to the main branch and go
> with that.

I may be wrong, but so far I believed there was a problem with this part (as
implemented in the Stork patch in linux_nat_async()).
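
If it helps, the client side I was experimenting with boils down to a blocking
poll() on that descriptor - again just a sketch with invented names, reusing
struct stork_event from the earlier sketch; in the real patch this would be
registered with GDB's event loop from linux_nat_async():

#include <poll.h>
#include <sys/types.h>
#include <unistd.h>

/* Block until the server has a stop notification, then read it whole.
   Returns 0 on EOF or error.  */
static int
wait_for_stop (int client_fd, struct stork_event *ev)
{
  struct pollfd pfd = { .fd = client_fd, .events = POLLIN };

  if (poll (&pfd, 1, -1) <= 0)
    return 0;
  return read (client_fd, ev, sizeof *ev) == (ssize_t) sizeof *ev;
}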


Anyway, I just wanted to resolve my own unanswered questions about all the
code by Pedro Alves; feel free to ignore the patch if it was of no use to
you.


Regards,
Jan

