This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: [PATCH] Improve performance of large restore commands
- From: Anton Blanchard <anton at samba dot org>
- To: Pedro Alves <palves at redhat dot com>
- Cc: Luis Machado <lgustavo at codesourcery dot com>, gdb-patches at sourceware dot org
- Date: Mon, 29 Jul 2013 15:44:57 +1000
- Subject: Re: [PATCH] Improve performance of large restore commands
- References: <20130725220858 dot 58184193 at kryten> <51F16780 dot 70408 at redhat dot com>
Hi Pedro,
Thanks for the review.
> > I noticed a large (100MB) restore took hours to complete. The
> > problem is target_xfer_partial repeatedly mallocs and memcpys the
> > entire 100MB buffer only to find a small portion of it is actually
> > written.
>
> I think you meant memory_xfer_partial, in the breakpoint shadow
> handling, right? I'd prefer pushing the capping close to the
> offending malloc/memcpy.
I wrote the summary after looking at the perf profile data. It is
indeed in memory_xfer_partial; to confirm, I reran with a gdb
built with -fno-inline:
    memcpy
      |
      --- memcpy
          target_xfer_partial
          target_write_with_progress
          target_write_memory
          restore_command
I've moved the check closer and also added a better comment as Luis
suggested. Updated patch below.
> We could conceivably change that shadowing algorithm to not malloc
> at all. E.g., say, with a memory block like
>
> |------B------|
> start         end
>
> with B being the address where a breakpoint is supposed to be planted,
> write block [start,B), then a write for the breakpoint instruction
> at B, then another block write for (B, end).
>
> Or, we could throttle the requested window width up/down depending
> on the buffer size returned at each partial transfer.
>
> I'm not actually suggesting doing this, only explaining why I'd
> rather put the cap close to the problem it solves.
> target_write_partial is used for other target objects too, not just
> memory.
Yes, that definitely makes sense.
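The split-write idea Pedro describes above could look roughly like the
sketch below. It is only an illustration, not GDB code: the names
write_around_breakpoint and write_raw_fn are hypothetical, with the raw
write callback standing in for the target's actual write primitive. The
point is that the caller's buffer is written in pieces around the
breakpoint address B, so no scratch copy of the whole block is needed.

```c
/* Hypothetical sketch of writing a block [start, start+len) while a
   breakpoint instruction is planted at bp_addr, without copying the
   caller's buffer.  write_raw stands in for the target's raw write.  */

typedef void (*write_raw_fn) (unsigned long addr,
                              const unsigned char *buf,
                              unsigned long len);

static void
write_around_breakpoint (unsigned long start, const unsigned char *buf,
                         unsigned long len, unsigned long bp_addr,
                         const unsigned char *bp_insn,
                         unsigned long bp_insn_len,
                         write_raw_fn write_raw)
{
  unsigned long end = start + len;

  if (bp_addr >= end || bp_addr + bp_insn_len <= start)
    {
      /* Breakpoint does not overlap the block; one plain write.  */
      write_raw (start, buf, len);
      return;
    }

  /* [start, B): the caller's bytes before the breakpoint.  */
  if (bp_addr > start)
    write_raw (start, buf, bp_addr - start);

  /* The breakpoint instruction itself, at B.  */
  write_raw (bp_addr, bp_insn, bp_insn_len);

  /* (B, end): the caller's bytes after the breakpoint.  */
  if (bp_addr + bp_insn_len < end)
    write_raw (bp_addr + bp_insn_len,
               buf + (bp_addr + bp_insn_len - start),
               end - (bp_addr + bp_insn_len));
}
```

As Pedro notes, this only helps the memory object; a generic cap in
target_write_partial would also affect non-memory transfers.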
> > We already cap reads to 4K
>
> Where exactly? In the target backend, perhaps? I'm not finding a
> cap at the target.c level.
I've removed this comment. I was looking at target_fileio_read_alloc_1
but it actually starts at 4K and doubles in size as it grows.
Anton
--
[PATCH] Improve performance of large restore commands
I noticed a large (100MB) restore took hours to complete. The problem
is memory_xfer_partial repeatedly mallocs and memcpys the entire
100MB buffer for breakpoint shadow handling only to find a small portion
of it is actually written.
The testcase that originally took hours now takes 50 seconds.
--
2013-07-29  Anton Blanchard  <anton@samba.org>

	* target.c (memory_xfer_partial): Cap writes to 4K.
Index: b/gdb/target.c
===================================================================
--- a/gdb/target.c
+++ b/gdb/target.c
@@ -1669,6 +1669,13 @@ memory_xfer_partial (struct target_ops *
   void *buf;
   struct cleanup *old_chain;
 
+  /* A large write request is likely to be partially satisfied
+     by memory_xfer_partial_1.  We will continually malloc
+     and free a copy of the entire write request for breakpoint
+     shadow handling even though we only end up writing a small
+     subset of it.  Cap writes to 4K to mitigate this.  */
+  len = min (4096, len);
+
   buf = xmalloc (len);
   old_chain = make_cleanup (xfree, buf);
   memcpy (buf, writebuf, len);