This is the mail archive of the libc-ports@sources.redhat.com mailing list for the libc-ports project.



Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.


On Tue, Sep 03, 2013 at 02:31:14PM -0500, Ryan S. Arnold wrote:
> On Tue, Sep 3, 2013 at 11:18 AM, Carlos O'Donell <carlos@redhat.com> wrote:
> > We have one, it's the glibc microbenchmark, and we want to expand it,
> > otherwise when ACME comes with their patch for ARM and breaks performance
> > for targets that Linaro cares about I have no way to reject the patch
> > objectively :-)
> 
> Can you be objective in analyzing performance when two different
> people have differing opinions on what performance preconditions
> should be coded against?
>
I fear more situations where, say, Google and Facebook each send an
implementation that saves them millions of dollars in energy bills
compared to the other's implementation.

> > You need to statistically analyze the numbers, assign weights to ranges,
> > and come up with some kind of number that evaluates the results based
> > on *some* formula. That is the only way we are going to keep moving
> > performance forward (against some kind of criteria).
> 
> This sounds like establishing preconditions (what types of data will
> be optimized for).
> 
> Unless technology evolves that you can statistically analyze data in
> real time and adjust the implementation based on what you find (an
> implementation with a different set of preconditions) to account for
> this you're going to end up with a lot of in-fighting over
> performance.
>
The technology is there, at least for x64; the remaining problem is political.

You do not need real-time analysis most of the time. Once the user
accepts that performance may be worse by a constant factor, profiling
becomes easier: we can rely on data from previous runs of the program
instead.

The first step would be to replace the current ifunc selection: first
run benchmarks on the processor, then build a hash table recording the
fastest implementation of each function, and do the selection with a
lookup into that table.

My profiler could (modulo externalities) measure, for each program,
which implementation is fastest on a given processor.

Then, AFAIR, appending data to an ELF file is legal, so we could
append a hash table with program-specific numbers to the binary.
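
As a sketch of what locating that appended data could look like: the
"PROF" marker, the trailing-length layout, and the use of stdio below
are all invented for illustration (a real resolver inside ld.so could
not call stdio or malloc that early), but it shows that a missing or
foreign trailer just means falling back to the defaults:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Assumed trailing record appended to the file:
   [table bytes][uint32 table size]["PROF"], written on the same
   machine so endianness matches.  Returns a malloc'd copy of the
   table, or NULL if none is found.  */
static void *
read_appended_table (const char *path, size_t *len_out)
{
  FILE *f = fopen (path, "rb");
  if (!f)
    return NULL;

  uint32_t size;
  char magic[4];
  if (fseek (f, -8L, SEEK_END) != 0
      || fread (&size, sizeof size, 1, f) != 1
      || fread (magic, sizeof magic, 1, f) != 1
      || memcmp (magic, "PROF", 4) != 0)
    { fclose (f); return NULL; }

  void *buf = malloc (size);
  if (buf != NULL
      && fseek (f, -(long) (8 + size), SEEK_END) == 0
      && fread (buf, size, 1, f) == 1)
    { fclose (f); *len_out = size; return buf; }

  free (buf);
  fclose (f);
  return NULL;
}

int
main (void)
{
  size_t len = 0;
  void *table = read_appended_table ("/proc/self/exe", &len);
  if (table)
    printf ("found %zu-byte appended table\n", len);
  else
    printf ("no appended table, using defaults\n");
  free (table);
  return 0;
}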

An ifunc resolver would first look at the end of the binary to see
whether an ifunc table is present. This can be implemented so that the
worst a false positive can cause is selecting a slow implementation.
If there is an entry for the current processor it will be used;
otherwise the resolver looks at the table at the end of libc.so and
repeats the step, and if nothing is found there it falls back to the
default implementation.
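
Put together, the selection is just two table probes and a default.
A toy, self-contained version, with both tables stubbed out as static
arrays and with invented CPU ids and implementation indices:

#include <stddef.h>
#include <stdio.h>

struct entry { unsigned cpu_id; unsigned fastest; };

/* Stubs for the two tables: one appended to the program, one at the
   end of libc.so.  Index 0 is the default implementation.  */
static const struct entry prog_table[] = { { 0xc0f, 2 } };
static const struct entry libc_table[] = { { 0xc09, 1 }, { 0xc0f, 1 } };

static const struct entry *
lookup (const struct entry *t, size_t n, unsigned cpu_id)
{
  for (size_t i = 0; i < n; i++)
    if (t[i].cpu_id == cpu_id)
      return &t[i];
  return NULL;
}

/* Program-specific numbers win, then libc-wide numbers, then the
   default; a missing or stale entry only ever costs a slower pick.  */
static unsigned
select_impl (unsigned cpu_id)
{
  const struct entry *e;
  if ((e = lookup (prog_table, sizeof prog_table / sizeof *prog_table,
                   cpu_id)) != NULL)
    return e->fastest;
  if ((e = lookup (libc_table, sizeof libc_table / sizeof *libc_table,
                   cpu_id)) != NULL)
    return e->fastest;
  return 0;
}

int
main (void)
{
  printf ("cpu 0xc0f -> implementation %u\n", select_impl (0xc0f));
  printf ("cpu 0xbad -> implementation %u\n", select_impl (0xbad));
  return 0;
}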

This is doable; the problem is motivating somebody to do the
profiling. My approach tries to make it possible for programmers
(write the tests, run them on different machines, and assemble the
results), for distributions (find enough users who agree to turn
profiling on occasionally and send the results in), or for end users
who could turn profiling on to learn their own usage patterns.

