This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Faster strchr implementation.


What is the link to the strlen profiler? I think we didn't check it on SLM.
I would like to do that now.

Thanks,

--
Liubov

On Mon, Aug 12, 2013 at 6:57 PM, Ondřej Bílka <neleai@seznam.cz> wrote:
> On Mon, Aug 12, 2013 at 04:38:45PM +0400, Liubov Dmitrieva wrote:
>>
> The simplest way is to take the total time of each implementation and
> divide it by the time spent by the fastest implementation.
>
> This is what we are interested in when we profile practical
> workloads, as in results_gcc.
>
> For random tests it does not make much sense.
> It would compute an average with weights that are more or less
> arbitrary: in the random tests, sizes 1-160 are ten times more likely
> than 160-1600, which are ten times more likely than 1600-16000.
>
> Then I did the same correction as in the estimated-time-spent graph.
> For a given size the variants get called a different number of times,
> so I normalized for that.
>
> Now that I look at report.c, I see I did not update it to accommodate
> more than 4 variants.
>
> for (j = 0; j < 100; j++) {
>   long cnt_n = (*cnt)[0][1][j] + (*cnt)[1][1][j] + (*cnt)[2][1][j] + (*cnt)[3][1][j];
>   total_time[choice] += (*time)[choice][1][j] / ((*cnt)[choice][1][j] + 0.1) * cnt_n;
> }
>
>

