This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: [RFA] Use data cache for stack accesses


On Tuesday 25 August 2009 01:48:30, Doug Evans wrote:
> On Thu, Aug 20, 2009 at 11:00 PM, Doug Evans<dje@google.com> wrote:
> > On Wed, Jul 8, 2009 at 1:46 PM, Pedro Alves<pedro@codesourcery.com> wrote:
> >> On Wednesday 08 July 2009 21:08:00, Jacob Potter wrote:

> >> What if we do this within dcache itself, similarly
> >> to get_thread_regcache?  That would probably be in [dcache_xfer_partial].
> >
> > It seems that given that we can temporarily change inferiors without
> > giving subsystems notice of the change, and given vfork, then we need
> > to have intelligence in dcache to handle this (and then it's not clear
> > if we should keep one dcache per inferior).
> >
> > How about having memory_xfer_partial notify dcache of every
> > write/read, and then dcache could keep just one copy of the cache and
> > flush it appropriately?

> Something like this?

Eh, that's exactly what I meant by 'similarly to get_thread_regcache'.
(Ulrich has since rewritten it somewhat to keep more than one
regcache live at once.)

A few small nits: Please fix up a few missing
double-space-after-period instances, and here,

> +  if (inf
> +      && readbuf == NULL

both inf and readbuf are pointers, so please either make both
subpredicates compare with NULL or neither.  Hmm, actually, I'm not
sure why you still need to check the inferior for nullness
in this version?
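
To be concrete, I mean one of these two forms (just a sketch, reusing
the variable names from your hunk, with the bodies elided):

  /* Either compare both pointers against NULL explicitly ...  */
  if (inf != NULL && readbuf == NULL)
    {
      /* ...  */
    }

  /* ... or use the implicit boolean test for both.  */
  if (inf && !readbuf)
    {
      /* ...  */
    }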

I think that the cache should be flushed on
"set stack-cache off" -> "set stack-cache on"; you never
know what happened between those two commands, so you could end
up with a stale cache.
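
E.g., the "set stack-cache" setter could simply invalidate the cache
whenever the setting changes.  Rough sketch only (stack_dcache and
dcache_invalidate are placeholder names; map them to whatever your
patch actually uses):

static void
set_stack_cache_enabled (char *args, int from_tty,
			 struct cmd_list_element *c)
{
  /* Invalidating unconditionally is cheap, and it guarantees we
     never serve stale data after an off -> on transition.  */
  if (stack_dcache != NULL)
    dcache_invalidate (stack_dcache);
}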

If write_memory|stack tries to write to [foo,bar), and that
operation fails for some reason somewhere between foo and bar, I
think that the cache between the failure point and bar shouldn't
be updated with the new values.  Is it?
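
That is, in the write path only the bytes the target actually
accepted should make it into the cache.  Roughly (a sketch, not your
code; dcache_update and target_dcache are names I'm inventing for
illustration):

  LONGEST res = ops->to_xfer_partial (ops, TARGET_OBJECT_MEMORY, NULL,
				      NULL, writebuf, memaddr, len);
  if (res > 0)
    /* RES may be smaller than LEN if the write partially failed;
       don't push the unwritten tail [memaddr + res, memaddr + len)
       into the cache.  */
    dcache_update (target_dcache, memaddr, writebuf, res);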

I worry about new stale cache issues in non-stop mode.  E.g.,
take this test case:

#include <pthread.h>
#include <unistd.h>	/* for usleep */

volatile int *foop;

void *
thread_function0 (void *arg)
{
  while (1)
    {
      if (foop)
	(*foop)++;
      usleep (1);
    }
}

void *
thread_function1 (void *arg)
{
  volatile int foo = 0;

  foop = &foo;

  while (1)
    {
      usleep (1);
    }
}

int
main ()
{
  pthread_t threads[2];
  void *result;

  pthread_create (&threads[0], NULL,
		  thread_function0, NULL);
  pthread_create (&threads[1], NULL,
		  thread_function1, NULL);
  pthread_join (threads[0], &result);
  pthread_join (threads[1], &result);
}

If you set a breakpoint in thread_function1, e.g. at
the usleep line, while letting thread_function0 run free,
you'll see that if you issue multiple "(gdb) print foo"
commands with the cache on, you get the same stale
result:

 (gdb) set stack-cache on
 (gdb) p foo
 $19 = 7482901
 (gdb) p foo
 $20 = 7482901
 (gdb) p foo
 $21 = 7482901

 (gdb) set stack-cache off
 (gdb) p foo
 $22 = 155394461
 (gdb) p foo
 $23 = 155541672
 (gdb) p foo
 $24 = 155642546

(Note that the cache isn't flushed by the "set stack-cache" command:)

 (gdb) set stack-cache on
 (gdb) p foo
 $25 = 7482901

It appears that (at least in non-stop, or if any thread is running)
the cache should only be live for the duration of a "high level
operation", that is, for a "backtrace", a "print", etc.
Did you consider this?
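
For instance, each such operation could arrange for the cache to be
invalidated when it finishes, along these lines (a sketch only;
stack_dcache and dcache_invalidate are placeholder names, and whether
a cleanup is the right hook here is debatable):

static void
invalidate_stack_dcache_cleanup (void *arg)
{
  dcache_invalidate ((DCACHE *) arg);
}

  /* At the start of a high level operation such as "print" or
     "backtrace":  */
  struct cleanup *old_chain
    = make_cleanup (invalidate_stack_dcache_cleanup, stack_dcache);

  /* ... do the operation ...  */

  do_cleanups (old_chain);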


Did you post numbers showing off the improvement from
having the cache on?  E.g., when doing foo, with the cache off
I get NNN memory reads, while with the cache on we get only
nnn reads.  I'd be curious to see some backing behind
"This improves remote performance significantly".

-- 
Pedro Alves

