
gprof reported run times off by a factor of 1.56 on some machines. Has the timer tick rate changed?


Dear Cygwin'ers

gprof reports run times that are too short by a factor of about 1.56 on some
machines under cygwin, as if the program ran 1.56 times faster than it did.
On another cygwin installation it reports the correct run time.

I did these experiments with a simple test program that loops millions of
times (included below).

Can anyone shed light on why, and whether there is a work-around?

All of the machines have cygwin installations less than six months old.
I have attached cygcheck -s -v -r reports for the three machines.
 
gprof uses a timer to do program-counter sampling to determine run times, and
on some linux platforms there have been problems with gprof reporting
inaccurate run times because the system's actual sampling rate differed from
the rate gprof assumed it to be. This leads me to ask: does the timer tick
rate differ between cygwin installations? There is discussion of this
similar-sounding issue for gprof on linux at:
http://sourceware.org/ml/binutils/2005-07/msg00420.html
http://bugs.gentoo.org/show_bug.cgi?id=90090
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=151763
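
If the same kind of mismatch is at work here, the error should be a pure scale
factor: gprof converts its sample count into seconds using the rate it believes
the profiling timer fires at, so if the samples actually arrive at a different
rate the reported time is wrong by the ratio of the two rates. Below is a
minimal sketch of that arithmetic. The 100 Hz and 64 Hz figures are only
guesses I am using for illustration (64 Hz is the default Windows timer rate of
15.625 ms per tick), not rates I have verified on any of these machines, but
their ratio 100/64 = 1.5625 is suspiciously close to the 1.56 I am seeing.

/* Sketch: how a mismatch between the sampling rate gprof assumes and the
   rate the system actually delivers would scale the reported run time.
   The two rates below are illustrative guesses, not measured values. */

#include <stdio.h>

int main(void)
{
  double hz_assumed = 100.0;   /* rate gprof uses to convert samples to seconds */
  double hz_actual  =  64.0;   /* rate samples might actually be taken at       */
  double usr_secs   =  64.984; /* measured usr time on the Pentium M            */

  double samples  = usr_secs * hz_actual;  /* samples collected during the run */
  double reported = samples / hz_assumed;  /* what gprof would then print      */

  printf("gprof would report %.2f s; usr/gprof ratio = %.4f\n",
         reported, usr_secs / reported);   /* ratio = 100/64 = 1.5625 */
  return 0;
}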
 
I compiled my test program with gcc -p -pg -Wall longloop10.c and used the
same binary on the three cygwin machines.
The gcc version is: gcc (GCC) 3.4.4 (cygming special) (gdc 0.12, using dmd 0.125)

I also checked on an ARM linux machine (with a different binary of course).

On each machine I ran the program as "time longloop10" to get the real, usr,
and sys times for the run, and compared these with the run time reported by
gprof.


                    |--------- times in seconds ---------|
Host                   real       usr     sys    gprof    ratio usr/gprof   OS
armboard            764.225   764.215   0.01    764.2        1.00002        linux
500MHz Pentium III  137.749   137.487   0.020   137.37       1.000852       Windows 2000/cygwin
1.2GHz Pentium M     65.062    64.984   0.015    41.56       1.563619       Windows XP/cygwin
2.4GHz Pentium 4     30.296    30.233   0.046    19.24       1.571362       Windows 2000/cygwin

So on the Pentium III under cygwin, gprof reports a time that accurately
matches the user time reported by "time". The same is true on the ARM linux system.

For the Pentium M and the Pentium 4, gprof reports a time that is too short by
a factor of about 1.56.

In each case the machine had no other significant load, and the test program
had ~99% of the CPU for the run (note the close match of the real and usr times).

I also wrote a short program (included below) that prints the value returned
by sysconf(_SC_CLK_TCK); it reports 1000 on all of the cygwin machines.

My simple test program:

/* loop for a long time */

#include <stdio.h>

int innerloop(int counter)
{
  int i;
  int j;

  for (i = 0; i < 1000000000; i++)
    {
      j = 2*i;
    }
  return (j);
}

int main(int argc, char **argv) {

  int outer;

  printf("About to start a long loop\n");

  for (outer = 0; outer < 10; outer++)
    {
      printf("%d start. ", outer);
      fflush(stdout);
      innerloop(outer);
      printf("done.\n");
    }
  printf("the end\n");

  return(0);
}

Program to print the value of sysconf(_SC_CLK_TCK):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {

  long ticks;

  ticks = sysconf(_SC_CLK_TCK);
  printf("sysconf(_SC_CLK_TCK) returns %ld\n", ticks);

  return(0);
}
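
The program above only reports the advertised tick value, of course. As a
cross-check, here is a sketch (I have not run it on these particular machines)
that watches how the user-time counter from times() actually advances during a
busy loop. It measures the CPU-time accounting quantum, which may not be the
same clock the profiler samples with, but it would at least show whether the
1/1000 s granularity is real or whether the effective quantum is coarser.

/* Sketch: observe the effective granularity of CPU-time accounting by
   watching how the tms_utime counter from times() advances during a
   busy loop.  If the smallest step is much larger than one tick, the
   real accounting quantum is coarser than sysconf(_SC_CLK_TCK) suggests. */

#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>

int main(void)
{
  struct tms t;
  clock_t prev;
  long ticks = sysconf(_SC_CLK_TCK);
  long smallest = 0;
  int steps = 0;

  times(&t);
  prev = t.tms_utime;

  while (steps < 20)                /* collect 20 increments of the counter */
    {
      volatile int i, j = 0;
      for (i = 0; i < 100000; i++)
        j += i;                     /* burn a little CPU between readings */
      times(&t);
      if (t.tms_utime != prev)
        {
          long step = (long)(t.tms_utime - prev);
          if (smallest == 0 || step < smallest)
            smallest = step;
          prev = t.tms_utime;
          steps++;
        }
    }

  printf("sysconf(_SC_CLK_TCK) = %ld, smallest step = %ld tick(s) (%.1f ms)\n",
         ticks, smallest, smallest * 1000.0 / ticks);
  return 0;
}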

Any thoughts anyone?

Cheers,

Guy

Attachment: cygcheck-pentiumIII.out
Description: Binary data

Attachment: cygcheck-pentium-m.out
Description: Binary data

Attachment: cygcheck-pentium-IV.out
Description: Binary data

