This is the mail archive of the
glibc-bugs@sourceware.org
mailing list for the glibc project.
[Bug math/14064] New: libm-test.inc ulps calculation incorrect for subnormals
- From: "jsm28 at gcc dot gnu.org" <sourceware-bugzilla at sourceware dot org>
- To: glibc-bugs at sources dot redhat dot com
- Date: Sun, 06 May 2012 01:08:36 +0000
- Subject: [Bug math/14064] New: libm-test.inc ulps calculation incorrect for subnormals
- Auto-submitted: auto-generated
http://sourceware.org/bugzilla/show_bug.cgi?id=14064
Bug #: 14064
Summary: libm-test.inc ulps calculation incorrect for subnormals
Product: glibc
Version: 2.15
Status: NEW
Severity: normal
Priority: P2
Component: math
AssignedTo: unassigned@sourceware.org
ReportedBy: jsm28@gcc.gnu.org
Classification: Unclassified
libm-test.inc:check_float_internal calculates ulp errors thus:

      case FP_NORMAL:
        ulp = diff / FUNC(ldexp) (1.0, FUNC(ilogb) (expected) - MANT_DIG);
        break;
      case FP_SUBNORMAL:
        ulp = (FUNC(ldexp) (diff, MANT_DIG)
               / FUNC(ldexp) (1.0, FUNC(ilogb) (expected)));
(MANT_DIG is one less than the appropriate one of FLT_MANT_DIG, DBL_MANT_DIG,
LDBL_MANT_DIG.) The calculation for subnormals has the effect of assuming they
have as many significant mantissa bits as normal values; it may have been
written on the incorrect understanding that ilogb for a subnormal returns the
smallest normal exponent rather than the mathematical exponent of the input.