This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH][BZ #12515] Improve precision of clock function
- From: Roland McGrath <roland@hack.frob.com>
- To: Siddhesh Poyarekar <siddhesh@redhat.com>
- Cc: libc-alpha@sourceware.org
- Date: Mon, 10 Jun 2013 15:59:52 -0700 (PDT)
- Subject: Re: [PATCH][BZ #12515] Improve precision of clock function
- References: <20130521145611.GM8927@spoyarek.pnq.redhat.com> <20130523192050.EBE582C09E@topped-with-meat.com> <20130524045412.GA8927@spoyarek.pnq.redhat.com>
> So while microsecond accuracy is not mandatory, it doesn't mean that
> having microsecond accuracy is wrong. It has already been pointed out
> that there are users out there who wonder why clock had such terrible
> precision. What's the use case for someone to consider a more
> precise clock() return value to be a breakage? If there is a good
> reason to consider this a breakage, then I could version the symbol so
> that older apps retain the low precision.
A reason doesn't have to be good for it to be a reason. The whole context
of my trepidation is the presumption of applications doing questionable
things.
On further reflection, I realize that the exact same concerns I meant to
raise are relevant to the kernel changing its internal tick frequency from
the traditional 100 to something higher, which happened several years ago.
So I suppose this will be OK. Please post a fresh patch for final detail
review.
Thanks,
Roland