More instantiation problems under hpux

Loren James Rittle rittle@latour.rsch.comm.mot.com
Tue Jan 15 15:24:00 GMT 2002


In article <200201152100.g0FL0dEb018426@hiauly1.hia.nrc.ca> you write:
> Here are the log messages for the FAILs from the testsuite run under
> hppa2.0w-hp-hpux11.11.  I tried to fix 21_strings/capacity.cc but
> there seem to be problems with the implementation if I try to
> instantiate basic_string<A<B>>.

OK, some of these are "known" (hopefully temporary) global problems.
Here is a current information dump:

> FAIL: 21_strings/capacity.cc (test for excess errors)
> Excess errors:
> /usr/ccs/bin/ld: Unsatisfied symbols:
>   std::basic_string<A<B>, std::char_traits<A<B> >, std::allocator<A<B>
> > >::_Rep::_S_max_size(data)
> collect2: ld returned 1 exit status

Agreed that this is a port-specific problem (or I should say one that
"isn't affecting all ports").  Since I haven't studied it closely,
beyond agreeing that an explicit instantiation appears to be required,
I don't know the best fix off-hand.

> FAIL: 26_numerics/binary_closure.cc (test for excess errors) [...]
> FAIL: 26_numerics/valarray.cc (test for excess errors) [...]

Please ignore these two failures; they sprang up due to a change in
the compiler and are affecting all ports (as of a few days ago).

> FAIL: 26_numerics/c99_classification_macros_c.cc (test for excess errors)[...]

This could be interesting to debug further.  I can't help, since my
system only passes that test because /usr/include/math.h *does not*
define the new C99 macros (from the exact error messages, I infer that
your system does).  Unless I am confused, failing that test is
expected for ports that define the C99 macros until the shadow header
work is considered "done".

> [...] FAIL: 27_io/istream_extractor_arith.cc execution test [...]

I see the exact same failure on i386-unknown-freebsd[45]* (same
assertion line number).  The issue is that a newly added testcase
assumed the built limits header file described the FP hardware limits
perfectly.  My analysis is that on many ports the limits file doesn't
match the real FP hardware limits very well...

It was decided that the built C++ limits header should match the C
limits header not the actual FP hardware (the fact that the C limits
header provided by an OS doesn't match the hardware might be
considered a bug).  Under this observation, it must be understood that
they are the published "worst-case" limits (i.e. they should be read
as "you might not be able to store an FP value with more precision or
range", not "you will never see an FP value with more precision and/or
range"); they are not the absolute limits that will be seen in all
cases.  The best example
I can offer is FreeBSD running on i386.  By default, it is possible to
get a float or double with far more precision and range than in the
published header file unless it has been written to memory (in which
case, by default, it is truncated to the published limits).  Of
course, when this happens depends on compiler switches and exact code
paths, etc.

Can you confirm that this situation exists on hpux as well (and that
it is not failing the same assertion for an alternate reason)?  Here
is one way to do it:

Build istream_extractor_arith.exe using the rule in the log file.  Run
it under gdb.  When it hits the assertion, go up to the stack frame
with 'T t' in scope (I don't know if it varies per arch, but it should
be one or two levels).  Note type T and value t.  Compare the value to
the published limits file.  If you have the same issue I see, you
should see what looks like a valid FP value (noting the value of i,
you can work out exactly what value t should have) that is
nevertheless outside the published C/C++ limits.  I suppose that
operator>>() could refuse to return any values outside the published
limits for the related type.
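As a sketch, the session might look like this (the binary name is from
the testsuite; the frame depth and exact commands to repeat are
illustrative):

```
$ gdb istream_extractor_arith.exe
(gdb) run
# ... assertion failure stops the program ...
(gdb) up            # repeat once or twice until 'T t' is in scope
(gdb) ptype t       # note the type T
(gdb) print t       # note the value; compare against the limits file
(gdb) print i       # lets you work out what value t should have
```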

If you see the same failure mode (thus confirming my theory beyond
i386 and one OS), then I could fix this test by making it more
dynamic.  I think the main thing to test here is that the test case
doesn't crash and that operator>> returns reasonable values whenever
!is.fail().  Thus, instead of inferring from
std::numeric_limits<T>::digits10 or
std::numeric_limits<T>::max_exponent10 a guess at an input string size
that fails, we could try to convert a small string to an FP.  If it
converts, we could test that the FP number matches the number in the
string we constructed within epsilon and then iterate with a larger
input string.  The test only passes once we add the final digit that
overflows (causing is.fail() to be true).

Regards,
Loren



More information about the Libstdc++ mailing list