This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



Re: [Fwd: Re: gdb/725: Crash using debug target and regcaches (in 5.3 branch?)]


On Thu, Nov 21, 2002 at 05:42:24PM -0500, Andrew Cagney wrote:

FYI,

Too many memory reads/writes was one reason for a ptrace'd threaded shlib program running slow; I suspect this is the other.

Maybe, maybe not... definitely needs to go, though!  Thanks for such a
thorough investigation; it gave me a good idea.
[snip]

Currently:
runtest linux-dp.exp print-threads.exp  17.21s user 48.22s system 82% cpu 1:19.56 total
With change:
runtest linux-dp.exp print-threads.exp  16.67s user 45.35s system 82% cpu 1:15.27 total
Given that the numbers are being overwhelmed by all those memory-read ptrace calls, a ~5% improvement (1:19.56 down to 1:15.27 total, about 5.4%) is significant.

Try something simpler, like running gdb under strace (tweak testsuite/lib/gdb.exp to run 'strace $GDB' instead of $GDB) and then count how many ptrace calls of each type occur.

Briefly, the GNU/Linux thread code is giving regcache.c conflicting stories over which inferior ptid should be in the register cache.  As a consequence, every single register fetch leads to a regcache flush and re-fetch.  Ouch!

Briefly, core GDB tries to fetch a register.  This eventually leads to the call:

    regcache_raw_read (REGNUM)
      registers_ptid != inferior_ptid
        (gdb) print registers_ptid
        $6 = {pid = 31263, lwp = 0, tid = 0}
        (gdb) print inferior_ptid
        $7 = {pid = 31263, lwp = 31263, tid = 0}
      -> flush regcache
      -> registers_ptid = inferior_ptid
      -- at this point regnum is invalid in the cache

Since registers_ptid doesn't match inferior_ptid, the cache is flushed, registers_ptid is updated, and the register is fetched.  The fetch flows on down into the depths of the target and the call:

    target_fetch_registers (regnum)
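
For readers following the walk-through in the source, the check being described lives in regcache.c.  Roughly, it is something like the following (a paraphrase from memory, not the exact code of that tree, though registers_ptid, inferior_ptid, ptid_equal, registers_changed and target_fetch_registers are all real identifiers of that era):

    /* Paraphrased sketch of the check described above, not verbatim GDB
       code.  When the cache belongs to some other ptid, everything is
       thrown away and the cache is re-claimed for inferior_ptid.  */
    if (!ptid_equal (registers_ptid, inferior_ptid))
      {
        registers_changed ();            /* Flush every cached register.  */
        registers_ptid = inferior_ptid;  /* Claim the cache for this ptid.  */
      }
    /* The flush just invalidated REGNUM, so it has to be re-fetched
       from the target.  */
    target_fetch_registers (regnum);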
Seen the problem yet?

Yup.  Saw something else very interesting, too.


The long-term fix is to have per-thread register caches; that is progressing.
I don't know about a short-term fix, though.

I was working on a short-term fix and discovered it was almost entirely
in place already.  Look at a couple of random fetch_inferior_registers
implementations: every one that a GNU/Linux platform uses will already
fetch the LWP's registers if the LWP is non-zero.  So why not give that
to 'em?  Leave inferior_ptid as it is, and make
fetch_inferior_registers honor the LWP id.
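
For reference, the pattern being described looks roughly like this in the GNU/Linux native files of that era (a paraphrase of the existing code, not the attached patch; the function name here is invented, and the TIDGET/PIDGET macro spellings are from memory):

    /* Sketch of the existing GNU/Linux pattern: if inferior_ptid carries
       an LWP id, ptrace that LWP directly; otherwise fall back to the
       plain process id.  GDB-internal names (inferior_ptid, TIDGET,
       PIDGET, perror_with_name) come from "defs.h"/"inferior.h".  */
    #include "defs.h"
    #include "inferior.h"
    #include <sys/ptrace.h>
    #include <sys/user.h>

    static void
    fetch_lwp_registers_sketch (void)
    {
      struct user_regs_struct regs;
      int tid;

      tid = TIDGET (inferior_ptid);     /* The LWP id, if there is one.  */
      if (tid == 0)
        tid = PIDGET (inferior_ptid);   /* Not a threaded program.  */

      if (ptrace (PTRACE_GETREGS, tid, 0, &regs) < 0)
        perror_with_name ("Couldn't get registers");

      /* ... supply REGS to the register cache here ...  */
    }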
It feels right.  I'm hoping that, eventually, the code will supply the registers directly to a (one of many) `struct thread_info' object.
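
(For concreteness, one way that longer-term shape could look; this is purely a hypothetical sketch with invented names, not code in the tree and not the attached patch:)

    /* Hypothetical: a register cache owned by a single thread, one hung
       off each `struct thread_info', so switching threads never flushes
       another thread's registers and regcache.c no longer needs a
       global registers_ptid.  */
    struct thread_regcache
    {
      ptid_t ptid;              /* The thread these registers belong to.  */
      signed char *valid_p;     /* Per-register "already fetched" flags.  */
      char *raw_registers;      /* REGISTER_BYTES of raw register data.  */
    };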

So, thoughts on the attached patch?
Thread maintainer question (not so sure about the #ifdef linux, though :-).

Andrew


