This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.



Re: Remote watchpoint support.


> From: jtc@redback.com (J.T. Conklin)
> Date: 31 Oct 2000 15:14:34 -0800
> 
> Did I just hear an echo?  :-)

Yes.  Isn't it true that echoes are so reassuring? ;-)

> In this case, I don't think it would be that difficult to fix.  For a 
> first cut, I'd change the macro to:
> 
>         target_stopped_data_address(&addr)
> 
> It would return 0 if GDB didn't stop because of a watchpoint, and 1
> (and set addr) if it did.

That's what I had in mind.  I thought you saw some specific problems
with making such a change.
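
To keep us talking about the same thing, here's a minimal plain-C
sketch of the shape I understand you to be proposing.  Only the
target_stopped_data_address name comes from GDB; the fake target
state and the main here are made up for illustration:

    #include <stdio.h>

    typedef unsigned long CORE_ADDR;

    /* Fake "debug register" state standing in for a real target.  */
    static int watchpoint_hit = 1;
    static CORE_ADDR hit_address = 0x1000;

    /* Proposed shape: return 0 if GDB didn't stop because of a
       watchpoint, and 1 (and set *ADDR) if it did.  */
    static int
    target_stopped_data_address (CORE_ADDR *addr)
    {
      if (!watchpoint_hit)
        return 0;
      *addr = hit_address;
      return 1;
    }

    int
    main (void)
    {
      CORE_ADDR addr;

      if (target_stopped_data_address (&addr))
        printf ("stopped by watchpoint at 0x%lx\n", addr);
      else
        printf ("not stopped by a watchpoint\n");
      return 0;
    }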

In addition, I think it might be a good idea for breakpoint.c to use
STOPPED_BY_WATCHPOINT in the loop where it tests whether any of the
watchpoints triggered.  Using target_stopped_data_address for that, as
GDB does now, is inefficient: since the target end doesn't know
whether GDB needs the address or not, it is forced to do lots of
redundant work interrogating the debug interface.  The target end
needs to find out, for each of the watchpoints, whether or not it
triggered, because the target_stopped_data_address API doesn't
identify a specific watchpoint.  With several active watchpoints,
this tends toward O(n^2) behavior, since GDB itself loops over all
the watchpoints.
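
Here's a hypothetical sketch (not GDB source; the watchpoint list and
the two target stubs are invented) of the pattern I have in mind: one
cheap yes/no query, one address query, and then a single O(n)
comparison loop on the GDB side:

    #include <stdio.h>

    typedef unsigned long CORE_ADDR;

    struct watchpoint
    {
      CORE_ADDR addr;
      int len;
      const char *name;
    };

    /* Stand-ins for the target-side queries discussed above.  */
    static int stopped_by_watchpoint (void) { return 1; }
    static CORE_ADDR stopped_data_address (void) { return 0x2004; }

    int
    main (void)
    {
      struct watchpoint watchpoints[] = {
        { 0x1000, 4, "a" },
        { 0x2000, 8, "b" },
        { 0x3000, 4, "c" },
      };
      int n = sizeof watchpoints / sizeof watchpoints[0];
      int i;

      /* Ask the target once whether a watchpoint fired at all...  */
      if (stopped_by_watchpoint ())
        {
          /* ...once more for the address...  */
          CORE_ADDR hit = stopped_data_address ();

          /* ...and compare against each watchpoint's range locally,
             instead of re-interrogating the target per watchpoint.  */
          for (i = 0; i < n; i++)
            if (hit >= watchpoints[i].addr
                && hit < watchpoints[i].addr + watchpoints[i].len)
              printf ("watchpoint %s triggered at 0x%lx\n",
                      watchpoints[i].name, hit);
        }
      return 0;
    }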

(I'm guessing that this API peculiarity dates back to the first
platforms on which watchpoints became available, where a watchpoint
covered a very large area.)

Is there any reason why GDB couldn't use STOPPED_BY_WATCHPOINT inside
bpstat_stop_status?
