This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Always cache memory and registers



The only proviso is that the current cache and target vector would need to be modified so that the cache only ever requests the data needed, leaving it to the target to supply more if available (much like registers do today). The current dcache doesn't do this; it instead pads out small reads :-(
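To make the division of labour concrete, here is a minimal sketch in C of that request/supply split: the cache asks the target for exactly the bytes it needs, and the target is free to hand back a larger enclosing region (here, a whole 64-byte line), which the cache then keeps. All names (`cache_read`, `target_read`, `struct region`) are illustrative, not actual GDB internals.

```c
#include <assert.h>
#include <string.h>

#define CACHE_BUF 4096

/* One cached region of target memory.  */
struct region {
    unsigned long addr;            /* start address of supplied data */
    unsigned long len;             /* number of bytes supplied */
    unsigned char buf[CACHE_BUF];
};

static struct region cache = { 0, 0, { 0 } };

/* Pretend target: asked for LEN bytes at ADDR, it chooses to supply
   the whole 64-byte line(s) containing the request.  The padding
   decision lives here, in the target, not in the core.  */
static void target_read (unsigned long addr, unsigned long len,
                         struct region *out)
{
    unsigned long start = addr & ~63UL;
    unsigned long end = (addr + len + 63) & ~63UL;

    out->addr = start;
    out->len = end - start;
    for (unsigned long i = 0; i < out->len; i++)
        out->buf[i] = (unsigned char) ((start + i) & 0xff);  /* fake data */
}

/* Cache read: on a miss, request only what the caller needs, then
   absorb whatever larger region the target chose to return.  */
static void cache_read (unsigned long addr, unsigned long len,
                        unsigned char *dst)
{
    if (addr < cache.addr || addr + len > cache.addr + cache.len)
        target_read (addr, len, &cache);   /* miss: ask for LEN only */
    memcpy (dst, cache.buf + (addr - cache.addr), len);
}
```

A one-byte read at 0x1005 thus results in the target supplying the 64-byte line at 0x1000, and a follow-up read of nearby bytes is served from the cache with no further target traffic.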


It needs tweaking for other reasons too.  It should probably have a
much higher threshold before it starts throwing out data, for one
thing.

Padding out small reads isn't such a bad idea.  It generally seems to
be the latency that's a real problem, esp. for remote targets.  I think
both NetBSD and GNU/Linux do fast bulk reads natively now?  I'd almost
want to increase the padding.

No, it's the other way around.


Having GDB pad out small reads can be a disaster - read one too many bytes and ``foomp''. This is one of the reasons why the dcache was never enabled.

However, it is totally reasonable for the target (not GDB) to supply megabytes of memory-mapped data when GDB only asked for a single byte! The key point is that it is the target that makes any padding / transfer decisions, and not core GDB. If the remote target fetches too much data and `foomp', then, hey, not our fault, we didn't tell it to read that address :-^

One thing that could be added to this is the idea of a sync point.
When supplying data, the target could mark it as volatile. Such volatile data would then be drawn from the cache but only up until the next sync point. After that a fetch would trigger a new read. Returning to the command line, for instance, could be a sync point. Individual x/i commands on a volatile region would be separated by sync points, and hence would trigger separate reads.
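A minimal sketch of that sync-point idea in C, under stated assumptions: the target tags some supplied data as volatile (here, hypothetically, anything at or above 0xF000, as if it were a device-register region), a sync point invalidates only the volatile entries, and a counter shows which fetches actually reach the target. The names (`cache_fetch`, `sync_point`, the direct-mapped `lines` array) are invented for illustration, not GDB internals.

```c
#include <assert.h>
#include <stdbool.h>

#define NLINES 8

/* One single-byte cache entry (direct-mapped, for brevity).  */
struct line {
    bool valid;
    bool is_volatile;        /* target marked this data volatile */
    unsigned long addr;
    unsigned char byte;
};

static struct line lines[NLINES];
static int reads_issued;     /* counts reads that reach the target */

/* Pretend target: addresses >= 0xF000 are a volatile device region;
   everything below is stable RAM.  */
static unsigned char target_fetch (unsigned long addr, bool *is_volatile)
{
    reads_issued++;
    *is_volatile = addr >= 0xF000;
    return (unsigned char) (addr & 0xff);  /* fake data */
}

/* Fetch through the cache; volatile data is cached too, but only
   until the next sync point.  */
static unsigned char cache_fetch (unsigned long addr)
{
    struct line *l = &lines[addr % NLINES];
    if (!l->valid || l->addr != addr) {
        l->byte = target_fetch (addr, &l->is_volatile);
        l->addr = addr;
        l->valid = true;
    }
    return l->byte;
}

/* Sync point (e.g. returning to the command line): flush only the
   volatile entries; stable data survives.  */
static void sync_point (void)
{
    for (int i = 0; i < NLINES; i++)
        if (lines[i].is_volatile)
            lines[i].valid = false;
}
```

With this, two x/i-style fetches of a volatile address separated by a sync point each issue their own target read, while repeated fetches of stable memory keep hitting the cache.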


Thoughts?  I think this provides at least one technical reason for enabling the cache.


Interesting idea there.  I'm not quite sure how much work vs. return it
would be.

There at least needs to be a contingency plan (in case someone finds a technical problem :-).  I also think it's relatively easy to implement: reach a sync point, flush volatile data from the cache.


Andrew


