This is the mail archive of the archer@sourceware.org mailing list for the Archer project.



Re: Q: ugdb && watchpoints


On Tue, 19 Oct 2010 20:11:59 +0200
Oleg Nesterov <oleg@redhat.com> wrote:

> I was even thinking about serializing. That is, ugdb schedules only
> one thread to step at a time. This way at least we can always know
> who changed the memory. But this is non-trivial, very bad from a
> performance POV, and doesn't work with syscalls.

I think that stepping one thread at a time is the approach that must
be taken if you want to accurately report the thread that triggered
the watchpoint.  (I don't understand the issue with syscalls though...)
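
To make "serialized" concrete, here is roughly the loop I have in mind.
This is only a sketch in user-space ptrace terms, not ugdb's utrace
machinery, and watched_addr, step_and_check() and run_serialized() are
names I made up for illustration:

/* Sketch only: ugdb sits on utrace inside the kernel, so this is just
 * the idea expressed with user-space ptrace, not ugdb's real API. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

#ifndef __WALL
#define __WALL 0x40000000       /* also wait for clone children */
#endif

static unsigned long watched_addr;  /* address the watchpoint covers */
static long old_value;              /* last value seen at watched_addr */

/* Step exactly one thread, then re-read the watched word.  Since no
 * other thread ran in between, any change must have come from 'tid'. */
static int step_and_check(pid_t tid)
{
    int status;
    long new_value;

    if (ptrace(PTRACE_SINGLESTEP, tid, NULL, NULL) < 0)
        return -1;
    if (waitpid(tid, &status, __WALL) < 0)
        return -1;

    new_value = ptrace(PTRACE_PEEKDATA, tid, (void *)watched_addr, NULL);
    if (new_value != old_value) {
        old_value = new_value;
        printf("watchpoint hit by thread %d\n", (int)tid);
        return 1;               /* this is the thread to report to gdb */
    }
    return 0;
}

/* Round-robin over the stopped threads; only one of them ever runs
 * between two checks of the watched memory. */
static void run_serialized(pid_t *tids, int nthreads)
{
    int i;

    for (;;)
        for (i = 0; i < nthreads; i++)
            if (step_and_check(tids[i]) != 0)
                return;         /* stop and report (or bail on error) */
}

The overhead is also visible in the sketch: every instruction of every
thread turns into a stop/peek/resume round trip.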

> Any advice is very much appreciated. Most probably, there is no
> clever solution. Once a traced sub-thread detects that a watched
> location was changed, it should mark this wp as "reported" for other
> threads and report it to gdb. IOW, we report a random thread and a
> random wp.

Is there a big performance win in implementing software watchpoints in
ugdb?  If not, I wouldn't worry about it.  My experience with software
watchpoints in native gdb is that they're *very* slow and, as such,
are often not worth using at all.

> Another question. I guess ugdb should implement hardware watchpoints
> as well? Otherwise, there is no improvement over gdbserver in the
> likely case (at least I think that a-lot-of-wps is not that common).
> But we only have Z2 for both. So I assume that ugdb should try to
> use the hardware watchpoints, but silently fall back to emulating?

IMO, hardware watchpoints are definitely worth implementing.  From
a user perspective, I would prefer that the stub not implement software
watchpoint support when the real hardware watchpoints are used up.
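
To make that concrete, here is the shape of Z2 handling I have in
mind: take a debug register if one is free, otherwise just return an
error.  set_hw_watchpoint() and reply() below are made-up helpers, not
anything that exists in ugdb:

/* Sketch of handling 'Z2,addr,kind' (insert a write watchpoint).  When
 * the debug registers are exhausted we refuse instead of silently
 * falling back to single-step emulation. */
#include <stdlib.h>

#define MAX_HW_WATCHPOINTS 4    /* x86 has four debug address registers */

/* Hypothetical helpers, standing in for whatever ugdb really uses. */
extern int  set_hw_watchpoint(unsigned long addr, unsigned long len);
extern void reply(const char *packet);

static int hw_wp_in_use;

static void handle_Z2(const char *args)     /* args is "addr,kind" */
{
    char *end;
    unsigned long addr = strtoul(args, &end, 16);
    unsigned long len  = strtoul(end + 1, NULL, 16);

    if (hw_wp_in_use >= MAX_HW_WATCHPOINTS) {
        reply("E01");           /* out of debug registers: say so, don't
                                 * quietly start single-stepping */
        return;
    }
    if (set_hw_watchpoint(addr, len) < 0) {
        reply("E02");           /* e.g. bad length or alignment */
        return;
    }
    hw_wp_in_use++;
    reply("OK");
}

(As far as I recall, the reply convention for the Z packets is "OK" on
success, "E NN" on error, and an empty reply to say the packet isn't
supported at all.)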

Hope this helps...

Kevin

