This is the mail archive of the
gdb-patches@sourceware.org
mailing list for the GDB project.
Re: [PATCH v2] Optimize memory_xfer_partial for remote
- From: Don Breazeal <donb at codesourcery dot com>
- To: Pedro Alves <palves at redhat dot com>, "gdb-patches at sourceware dot org" <gdb-patches at sourceware dot org>, "qiyaoltc at gmail dot com" <qiyaoltc at gmail dot com>
- Date: Thu, 30 Jun 2016 10:45:38 -0700
- Subject: Re: [PATCH v2] Optimize memory_xfer_partial for remote
- References: <1467058970-62136-1-git-send-email-donb at codesourcery dot com> <6e4787d9-f6d5-641e-ffd5-1dd255806b3b at redhat dot com>
On 6/30/2016 10:06 AM, Pedro Alves wrote:
> On 06/27/2016 09:22 PM, Don Breazeal wrote:
>
>>>> +/* The default implementation for the to_get_memory_xfer_limit method.
>>>> + The hard-coded limit here was determined to be a reasonable default
>>>> + that eliminated exponential slowdown on very large transfers without
>>>> + unduly compromising performance on smaller transfers. */
>>>
>>> Where's this coming from? Is this new experimentation you did,
>>> or are you talking about Anton's patch?
>>
>> Both. I did some experimentation to verify that things were significantly
>> slower without any memory transfer limit, which they were, although I never
>> reproduced the extreme scenario Anton had reported. Presumably the
>> performance differences were due to hardware and environment differences.
>> Regarding the comment, I thought some explanation of the hard-coded number
>> was appropriate. Is there a better way to do this, e.g.
>> refer to the commit hash, or does it just seem superfluous?
>
> OK, you didn't mention this experimentation, which left me wondering.
> Particularly, the mention of "exponential" is what most made me pause,
> as it's a qualifier not mentioned elsewhere.
I should have used something like "significant" or "extreme" instead of
"exponential".
>
> I guess my main problem with the comment is that by reading it in
> isolation, one has no clue of what would cause the slowdown (normally
> transferring more at a time is faster!), and thus how to reevaluate
> the default in the future. How about extending to something like:
>
> /* The default implementation for the to_get_memory_xfer_limit method.
> The hard-coded limit here was determined to be a reasonable default
> that eliminated exponential slowdown on very large transfers without
> unduly compromising performance on smaller transfers.
> This slowdown is mostly caused by memory writing routines doing
> unnecessary work upfront when large requests usually end up
> only partially satisfied. See memory_xfer_partial's handling of
> breakpoint shadows. */
>
> Actually, I was going to approve this with that change, but another
> thought crossed my mind, sorry...
>
> I assume you did this experimentation with remote targets? But this default
> will never be used with those, so that experimentation is meaningless for
> native targets? Actually, the whole capping is probably pointless with
> native targets, since there's really no marshalling and thus no limit.
> That'd suggest making the target method return "-1" or some such
> to indicate there's no limit. WDYT?
That makes sense to me. If it returns ULONGEST_MAX then the rest of the
patch can stay as-is. Something like this?
+/* The default implementation for the to_get_memory_xfer_limit method.
+ The default limit is essentially "no limit". */
+
+static ULONGEST
+default_get_memory_xfer_limit (struct target_ops *self)
+{
+ return ULONGEST_MAX;
+}
+
Thanks
--Don