This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH][BZ #12515] Improve precision of clock function


> So while microsecond accuracy is not mandatory, it doesn't mean that
> having microsecond accuracy is wrong.  It has already been pointed out
> that there are users out there who wonder why clock had such terrible
> precision.  What's the use case for someone to consider a more
> precise clock() return value to be breakage?  If there is a good
> reason to consider this breakage, then I could version the symbol so
> that older apps retain the low precision.

A reason doesn't have to be good for it to be a reason.  The whole context
of my trepidation is the presumption of applications doing questionable
things.

On further reflection, I realize that exactly the same concerns I meant to
raise are relevant to the kernel changing its internal tick frequency from
the traditional 100 Hz to something higher, which happened several years ago.

So I suppose this will be OK.  Please post a fresh patch for final detail
review.


Thanks,
Roland

