Re: deferred stores as a solution to store register api?


>>>>> "Andrew" == Andrew Cagney <ac131313@cygnus.com> writes:
>> I believe that the actual write would be deferred.

Andrew> Is this a good thing or a dangerous thing? (I'm playing devil's
Andrew> advocate.)

Can't it be both at once?

Andrew> At present, when GDB is communicating with a remote target, it
Andrew> operates ``write through''.  Any memory/register update goes
Andrew> straight out, while reads can come from a cache (well, for
Andrew> registers at least).  That way, if the plug is pulled or (more
Andrew> likely :-) GDB dumps core, the target state is up-to-date with
Andrew> respect to changes and the programmer can just resume their
Andrew> debug session.

In theory.  I suspect there are situations where a GDB crash / target
disconnect is not easily recovered from.  For example, what happens if
GDB crashes in the midst of an inferior function call?
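
To make the tradeoff concrete, here's a rough sketch of the two store
policies.  This is not actual GDB code; the names (struct regcache,
target_store_register, etc.) are invented for illustration:

#include <stdio.h>

#define NUM_REGS 32

struct regcache
{
  unsigned long regs[NUM_REGS];   /* cached register values */
  int dirty[NUM_REGS];            /* set when cache is ahead of target */
};

/* Stand-in for the real wire write, e.g. a remote protocol packet.  */
static void
target_store_register (int regnum, unsigned long val)
{
  printf ("-> target: reg %d = %#lx\n", regnum, val);
}

/* Write through: the target sees the store immediately, so if GDB
   dies the target already reflects every completed update.  */
void
store_write_through (struct regcache *rc, int regnum, unsigned long val)
{
  rc->regs[regnum] = val;
  target_store_register (regnum, val);
}

/* Deferred: only the cache is updated; the target lags behind until
   the dirty registers are flushed.  */
void
store_deferred (struct regcache *rc, int regnum, unsigned long val)
{
  rc->regs[regnum] = val;
  rc->dirty[regnum] = 1;
}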

Andrew> I agree with your comment that the target api needs to be
Andrew> extended so that high-gdb can tell low-gdb (target) that it is
Andrew> performing a batched update and, as you suggest, something
Andrew> like _deferred_store() will make that more efficient.  I just
Andrew> also think we may need to add some extra code so that the
Andrew> existing behaviour (each command is atomic ???) is also
Andrew> retained.

The window during which GDB holds state that the target does not
would be narrowed if we called a target_do_deferred_store() after
each command completes execution.  Would this be enough?
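
Continuing the sketch above (again, the names are invented), the
flush would just walk the dirty registers:

/* Hypothetical flush: push any stores the cache is still holding out
   to the target, shrinking the window where GDB is ahead of it.  */
void
target_do_deferred_store (struct regcache *rc)
{
  int i;

  for (i = 0; i < NUM_REGS; i++)
    if (rc->dirty[i])
      {
        target_store_register (i, rc->regs[i]);
        rc->dirty[i] = 0;
      }
}

The top-level command loop would then call it once each command
finishes, so from the target's point of view every command remains
atomic even though the individual stores were batched.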

        --jtc

-- 
J.T. Conklin
RedBack Networks
