This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Don't use SSE4_2 instructions on Intel Silvermont Micro Architecture.


On Fri, Jun 21, 2013 at 08:05:18AM +0530, Siddhesh Poyarekar wrote:
> On 21 June 2013 06:54, Ondřej Bílka <neleai@seznam.cz> wrote:
> > Already tried and will not make same mistake again.
> 
> Seriously?  All you've tried to do is *remove* existing benchmarks
> instead of posting patches to make them better. 

What is better when somebody asks you for directions and you do not
know the way? To admit that you do not know, or to give them a random
direction so that you appear to know?

There are several cases where you 
>  When the idea of
> removing them was rejected you decided that it was easier to either
> talk about how crappy the current tests are or point everyone who is
> working on string functions to use the benchmarks you maintain outside
> of glibc, without making any real effort to port them or the ideas
> into the glibc benchmark framework.
> 
Sorry, but integration depends on the dryrun framework, which has
stalled. The patches to randomize ordering and to stop once a given
confidence interval is reached have stalled as well.
> I won't call any of that trying, especially when I stepped back for a
> significant amount of time to allow you to enhance the string
> benchmarks before I moved them into the benchmark framework.
>
It still contains bugs that are showstoppers. I am repeating myself
here, but if you want I can file each of these in bugzilla.

First, it does not randomize sizes in any way. This lets the branch
predictor learn the pattern, and since branch prediction can account
for around 20% of run time, the results you get can be off by that
much.

The same applies to alignment: it needs to be randomized, otherwise
you lose part of the performance profile. Setting alignment through a
config variable is pointless, as it only distinguishes aligned from
unaligned.

Then we move to aggregation of results. It tests a single
implementation at a time, which is wrong: the runtime of a process
depends on many variables, and measuring implementations in isolation
introduces bias.

For example, if you ran make bench while browsing a page in firefox,
or a background process kicked in, the function being measured at that
moment would look worse than it really is.

The proper way is to test both implementations in the same run and
randomize which one gets selected on each iteration, so that any
external noise is spread evenly across them.
 

