This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: [RFA] Use data cache for stack accesses


On Tue, Aug 25, 2009 at 11:44 AM, Pedro Alves <pedro@codesourcery.com> wrote:
> On Tuesday 25 August 2009 01:48:30, Doug Evans wrote:
>> > On Thu, Aug 20, 2009 at 11:00 PM, Doug Evans <dje@google.com> wrote:
>> > On Wed, Jul 8, 2009 at 1:46 PM, Pedro Alves <pedro@codesourcery.com> wrote:
>> >> On Wednesday 08 July 2009 21:08:00, Jacob Potter wrote:
>
>> >> What if we do this within dcache itself, similarly
>> >> to get_thread_regcache?  That would probably be in [dcache_xfer_partial].
>> >
>> > It seems that given that we can temporarily change inferiors without
>> > giving subsystems notice of the change, and given vfork, then we need
>> > to have intelligence in dcache to handle this (and then it's not clear
>> > if we should keep one dcache per inferior).
>> >
>> > How about having memory_xfer_partial notify dcache of every
>> > write/read, and then dcache could keep just one copy of the cache and
>> > flush it appropriately?
>
>> Something like this?
>
> Eh, that's exactly what I meant by 'similarly to get_thread_regcache'.
> (Ulrich has since rewritten it somewhat to keep more than one
> regcache live at once).
>
> A few small nits: Please fix up a few missing
> double-space-after-period instances, and here,
>
>> +  if (inf
>> +      && readbuf == NULL
>
> both inf and readbuf are pointers, so please either make both
> subpredicates compare with NULL or neither.  Hmm, actually, I'm not
> sure why you still need to check the inferior for nullness
> in this version?

It wasn't clear to me whether TRTTD was to check inf for NULL either,
but the code can only be used if there is an inferior, so I left it in.
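
For concreteness, the shape I have in mind for that spot in
memory_xfer_partial is roughly the below.  It's only a sketch: apart
from the inf/readbuf test quoted above, the names (stack_cache,
stack_cache_enabled_p, TARGET_OBJECT_STACK_MEMORY) are illustrative
and may not match what the next revision ends up with.

  /* One shared stack cache, flushed as needed, rather than one cache
     per inferior (name illustrative).  */
  static DCACHE *stack_cache;

  /* In memory_xfer_partial: reads of stack memory get served through
     the cache; the hunk quoted above is the write side, which mirrors
     written bytes into the cache so those later reads stay coherent.
     Both pointer tests compare against NULL explicitly, per the
     review comment.  */
  if (inf != NULL
      && readbuf == NULL
      && stack_cache_enabled_p
      && object == TARGET_OBJECT_STACK_MEMORY
      && stack_cache != NULL)
    {
      /* Write through to the target, then update the cache with the
         bytes that were actually written (see further down for the
         partial-write detail).  */
    }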

> I think that the cache should be flushed with
> "set stack-cache off" -> "set stack-cache on", you never
> know what happened between these two commands, so you end up
> with a stale cache.

Righto.
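
Concretely, I'll have the set hook drop the whole cache whenever the
setting is touched, something like this (sketch; the hook and variable
names are whatever the next revision ends up using):

  /* Sketch: called when the user does "set stack-cache on|off".
     Anything could have happened to target memory while caching was
     off (or between the two commands), so throw the whole cache away
     rather than risk serving stale data.  */
  static void
  set_stack_cache_enabled_p (char *args, int from_tty,
                             struct cmd_list_element *c)
  {
    if (stack_cache != NULL)
      dcache_invalidate (stack_cache);
  }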

> If write_memory|stack tries to write to [foo,bar), and that
> operation fails for some reason somewhere between foo and bar, I
> think that the cache between somewhere and bar shouldn't be
> updated with the new values.  Is it?

Indeed.  I recall looking at this but I'll go back and check.
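
What I want to end up with is the cache absorbing only the bytes the
target actually acknowledged writing, along these lines (sketch only;
the to_xfer_partial call stands in for however the write really
reaches the target, and dcache_update is a placeholder name for "fold
these bytes into the cache"):

  /* Sketch: write through to the target first; "res" is the number
     of bytes the target reported transferring.  Only those bytes are
     folded back into the cache, so a write to [foo,bar) that fails
     partway never leaves cached values for the unwritten tail.  */
  res = ops->to_xfer_partial (ops, TARGET_OBJECT_MEMORY, NULL,
                              NULL /* readbuf */, writebuf, memaddr, len);
  if (res > 0 && stack_cache != NULL)
    dcache_update (stack_cache, memaddr, writebuf, res);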

> I worry about new stale cache issues in non-stop mode.
> [...]
> It appears that (at least in non-stop or if any thread is running)
> the cache should only be live for the duration of a "high level
> operation" --- that is, for a "backtrace", or a "print", etc.
> Did you consider this?

It wasn't clear how to handle non-stop mode (and the like), so I left
that for the next iteration.
If having the cache live only for the duration of a high-level
operation works for you, it works for me.
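
If per-operation lifetime is the way to go, the simplest shape I can
picture is a flush at the end of every high-level operation, along the
lines below; where that call actually lives (command loop, an
observer, ...) is exactly the part I'd need to work out in the next
iteration:

  /* Sketch: drop all cached stack data once a high-level operation
     (a "backtrace", a "print", ...) finishes, so that in non-stop
     mode a still-running thread can't leave us serving stale memory
     on the next command.  The call site is the open question.  */
  static void
  flush_stack_cache_after_operation (void)
  {
    if (stack_cache != NULL)
      dcache_invalidate (stack_cache);
  }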

> Did you post numbers showing the improvements from
> having the cache on?  E.g., when doing foo, with cache off,
> I get NNN memory reads, while with cache on, we get only
> nnn reads.  I'd be curious to have some backing behind
> "This improves remote performance significantly".

For a typical gdb/gdbserver connection here, a backtrace of 256 levels
went from 48 seconds to 4 seconds (each an average over 6 tries).

