This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: remote software breakpoint technical detail


On Thu, May 04, 2006 at 01:43:07AM +0800, Tzu-Chien Chiu wrote:
> 5) the processor fetches the breakpoint instruction into the execution
> pipeline, and points pc to the next instruction
> 6) the breakpoint instruction is decoded, recognized, and the processor
> stalls
> 7) gdb restores instruction foo
> 8) the user issues a single instruction step ('si'), and he expects
> instruction foo to be executed next, but...
> 
> The question is:
> 
> What value of pc should be expected after step 5 completes?

The correct answer is "it depends".  In GDB, this is controlled by
"set_gdbarch_decr_pc_after_break", and each architecture uses whichever
setting is natural for its hardware.  For yours, it sounds like you want
GDB to decrement the PC after a breakpoint hits; GDB will then write the
adjusted value to the PC register itself.

> if $pc==foo+4, foo won't be executed but the following instruction
> will, which is incorrect.
>
> if $pc==foo, the breakpoint instruction _has been_ fetched into the
> execution pipeline at step 5, so what makes the cpu *re-fetch* the
> instruction restored by gdb at step 7? Must GDB or the hardware be
> designed to do so?

This is an issue whether or not you decrement PC.  In the presence of
debugger modification of code, something must flush the pipeline.  I
am not familiar with how other targets do this; I suspect that whatever
is mediating between GDB and the target CPU is responsible (e.g. a
standalone JTAG box or an independent debug server running on the host).

-- 
Daniel Jacobowitz
CodeSourcery

