
Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.


On Tue, Sep 03, 2013 at 03:15:25PM -0400, Carlos O'Donell wrote:
> I agree. The eventual goal of the project is to have some kind of
> whole system benchmarking that allows users to feed in their profiles
> and allow us as developers to see what users are doing with our library.
> 
> Just like CPU designers feed in a whole distribution of applications
> and look at the probability of instruction selection and tweak
> instruction-to-microcode mappings.
> 
> I am willing to accept a certain error in the process as long as I know
> we are headed in the right direction. If we all disagree about the
> direction we are going in then we should talk about it.
> 
> I see:
> 
> microbenchmarks -> whole system benchmarks -> profile driven optimizations

I've mentioned this before: microbenchmarks are not a stepping stone
to whole system benchmarks, and they don't replace them.  We need to
work on both in parallel because they have different goals.

A microbenchmark would have parameters such as alignment, size, and
cache pressure to determine how an implementation scales.  These are
generic numbers (i.e. not tied to specific high-level workloads) that
a developer can use when designing their programs.
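
To make that concrete, here is a minimal sketch of such a harness in
C.  The timer choice, buffer sizes, misalignment offsets, and
iteration count are illustrative assumptions, not tuned values, and
glibc's actual benchtests do all of this more carefully:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define ITERS 100000

/* Time ITERS memcpy calls for one size and source/destination
   misalignment; returns nanoseconds per call.  */
static double
bench_memcpy (size_t size, size_t src_off, size_t dst_off)
{
  char *src = malloc (size + 64);
  char *dst = malloc (size + 64);
  struct timespec start, end;

  memset (src, 1, size + 64);
  clock_gettime (CLOCK_MONOTONIC, &start);
  for (int i = 0; i < ITERS; i++)
    memcpy (dst + dst_off, src + src_off, size);
  clock_gettime (CLOCK_MONOTONIC, &end);
  free (src);
  free (dst);
  return ((end.tv_sec - start.tv_sec) * 1e9
          + (end.tv_nsec - start.tv_nsec)) / ITERS;
}

int
main (void)
{
  /* Sweep size and misalignment; a real harness would also vary
     cache pressure and pin the CPU frequency.  */
  for (size_t size = 16; size <= 65536; size *= 4)
    for (size_t off = 0; off <= 8; off += 8)
      printf ("size=%zu offset=%zu: %.2f ns/call\n",
              size, off, bench_memcpy (size, off, off));
  return 0;
}

Sweeping just those two parameters already shows where an
implementation's size thresholds and alignment fixups start to cost.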

Whole system benchmarks, however, work at a different level.  They
would give an average-case number that describes how a specific
recipe impacts the performance of a set of programs.  An
administrator would use these to tweak the system for the workload.

> I would be happy to accept a patch that does:
> * Shows the benchmark numbers.
> * Explains relevant factors not caught by the benchmark that affect
>   performance, what they are, and why the patch should go in.
> 
> My goal is to increase the quality of the written rationales for
> performance related submissions.

Agreed.  In fact, this should go in as a large comment in the
implementation itself.  Someone mentioned in the past (was it
Torvald?) that every assembly implementation we write should be
commented as verbosely as possible, so that there is no ambiguity
about the rationale for choosing specific instruction sequences over
others.
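
Something along these lines, where the core, the numbers, and the
measurements are all invented for the sake of illustration:

/* Copy in 32-byte chunks rather than 16-byte chunks: on the
   reference core, the wider unroll measured consistently faster for
   copies above 512 bytes in the benchtests attached to the
   submission, and the extra code size stayed within the icache
   budget discussed on the list.  Do not change the unroll factor
   without re-running those benchmarks.  */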

> >> If we have N tests and they produce N numbers, for a given target,
> >> for a given device, for a given workload, there is a set of importance
> >> weights on N that should give you some kind of relevance.
> >>
> > You are jumping to the case where we already have these weights.
> > The problematic part is getting them.
> 
> I agree.
> 
> It's hard to know the weights without having an intuitive understanding
> of the applications you're running on your system and what's relevant
> for their performance.

For a function like memcpy, I'd measure and weight along these lines:

1. Assume aligned input.  Nothing should take (any noticeable)
   performance away from aligned copies/moves.
2. Scale with size.
3. Provide acceptable performance for unaligned sizes without
   penalizing the aligned case.
4. Measure the effect of dcache pressure on function performance.
5. Measure the effect of icache pressure on function performance.

Depending on the actual cost of cache misses on different processors,
the icache/dcache measurements (4 and 5) would carry higher or lower
weight, but for 1-3 I'd keep that order of priorities, with little
concern for unaligned cases.
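
For 4, one way to model dcache pressure (sketched below; the cache
and line sizes are assumptions, not values queried from the hardware)
is to thrash a buffer larger than the last-level cache between timed
calls, so that each call starts cold:

#include <stddef.h>

#define ASSUMED_LLC_SIZE (4 * 1024 * 1024)  /* assumed 4 MiB LLC */
#define ASSUMED_LINE_SIZE 64                /* assumed 64-byte lines */

static volatile char thrash_buf[2 * ASSUMED_LLC_SIZE];

/* Touch one byte per cache line across twice the assumed cache
   size, evicting whatever the function under test had cached.  */
static void
evict_dcache (void)
{
  for (size_t i = 0; i < sizeof thrash_buf; i += ASSUMED_LINE_SIZE)
    thrash_buf[i]++;
}

Comparing hot-cache numbers from the size/alignment sweep against
cold-cache runs with evict_dcache () before each timed call isolates
the miss cost; icache pressure (5) needs a similar trick with a large
executed code footprint between calls.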

Siddhesh

