This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Re: Test suite docs


Hello Eli,

I am really sorry to learn that running the testsuite turned out
to be so trying. Don't hesitate to ask us for tips and tricks
before wasting more of your time.

I cannot say that I am a dejagnu expert. In fact, I cannot
say that I like this technology and the less time I spend fiddling
with it, the better I feel.

So, for lack of knowledge, I will not volunteer to write the doco up.
However, here are some of the things I do. Bear in mind that they might
not be the most efficient way of doing things, but they should
help you in the meantime.

> While I'm no newcomer to Free Software, and I expect to spend some
> time figuring out things on my own when it comes to using a new piece
> of software, the test suite makes it exceptionally hard, IMHO.  Some
> of the reasons are out of our control: the tests use several software
> packages (Dejagnu which uses Expect which uses TCL), so answers are
> potentially scattered across several unrelated packages, and the fact
> that none of them has GNU standard Info manuals (or at least I
> couldn't find them on fencepost.gnu.org) doesn't help.  But that's
> just one more reason to have a good user-level documentation in GDB to
> help overcome these difficulties.

I use the latest official dejagnu release, along with one of the latest
TCL and expect releases. They were the latest at the time when I did
the install, which was a while ago (perhaps a couple of years).

It used to be that we were supposed to use the dejagnu that was
in the src tree, but I don't think this is the case anymore. I think
this part of src has been deprecated, or perhaps removed.

>   . Where do I find the canonical results for my platform?
> 
>     People talk about XFAILs and ``unexpected failures'', but there
>     seems to be no place to consult the expected results for all the
>     tests and see if what you get is okay or not.  The test suite
>     prints a summary of the tests, but how do I find out what are
>     those ``unexpected successes'' and ``expected failures''?  What
>     are those XPASS, XFAIL, UNTESTED, and other indications displayed
>     while the suite runs?

This is actually a very tricky question, IMO. The results of the
testsuite are so dependent on the quality of the compiler that
I've given up on marking tests as xfails or kfails, etc.

What I have been doing ever since I started working on GDB is to
run the testsuite before I make the change, and then again after
(as you suggest below). If I don't see any regressions, then I consider
that the change passes the testsuite (that's why I always mention
"no regression" when I check a change against the testsuite).

>   . How do I compare two runs?  If diff'ing testsuite/gdb.sum is the
>     right way, it seems to not be documented anywhere, and gdb.sum
>     doesn't seem to be preserved across runs, so one must manually
>     copy it to avoid overwriting it.  Am I missing something?

I personally use a home-made program that is essentially a smart
diff tool for .sum files. It presents the information in 3 columns,
showing only the differences: column 1, the result in the reference
.sum file; column 2, the result in the second .sum file; and column 3,
the label of the test.

Here is an example of the output:
% sumtool -c gdb.sum.ref gdb.sum

* gdb.ada:
+------------+------------+----------------------------------------------------+
|       FAIL | PASS       | null_record.exp: ptype on null record              |
+------------+------------+----------------------------------------------------+

* gdb.threads:
+------------+------------+----------------------------------------------------+
|       PASS | FAIL       | schedlock.exp: thread 0 ran                        |
|       PASS | FAIL       | schedlock.exp: thread 1 ran                        |
|       PASS | FAIL       | schedlock.exp: thread 2 ran                        |
|       PASS | FAIL       | schedlock.exp: thread 3 ran                        |
|       PASS |            | schedlock.exp: other thread 3 didn't run           |
|            | PASS       | schedlock.exp: other thread 4 didn't run           |
|       PASS |            | schedlock.exp: other thread 3 didn't run (step ... |
|            |            | ... ping)                                          |
|            | PASS       | schedlock.exp: other thread 4 didn't run (step ... |
|            |            | ... ping)                                          |
+------------+------------+----------------------------------------------------+

Text doesn't allow me to show everything to you, but the "-c" option
means "color", so PASS is printed in green, while FAIL is printed
in red. It's very easy to scan the information and pick up what's wrong.

If you like, I can send you the source. It's an Ada program, however,
so you'll need an Ada compiler.
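If you don't have an Ada compiler handy, the core idea is easy to
re-implement. Here is a hypothetical Python sketch of the same approach
(not the actual Ada tool — just the parse-and-compare logic; a real .sum
file contains lines like "PASS: gdb.ada/null_record.exp: ptype on null
record"):

```python
# Minimal sketch of a .sum diff: map each test label to its result,
# then report only the tests whose result changed between two runs.
import re

# The standard dejagnu result keywords that can start a .sum line.
RESULT_RE = re.compile(
    r'^(PASS|FAIL|XPASS|XFAIL|KPASS|KFAIL|UNTESTED|UNSUPPORTED|UNRESOLVED): (.*)$')

def parse_sum(text):
    """Return a dict mapping test label -> result keyword."""
    results = {}
    for line in text.splitlines():
        m = RESULT_RE.match(line)
        if m:
            results[m.group(2)] = m.group(1)
    return results

def diff_sums(ref_text, new_text):
    """Return (label, ref_result, new_result) tuples for every test
    whose result differs between the two runs.  A missing result is
    reported as the empty string, like the blank cells above."""
    ref, new = parse_sum(ref_text), parse_sum(new_text)
    changed = []
    for label in sorted(set(ref) | set(new)):
        r, n = ref.get(label, ''), new.get(label, '')
        if r != n:
            changed.append((label, r, n))
    return changed
```

Formatting the tuples into a three-column table (with color, if you
like) is then straightforward.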

>   . How does one disable a specific test?  Suppose some test takes an
>     exceptionally long time -- how do I run the suite without it?

The way I do it: either rename the .exp file, or even delete it.

>   . Where do I look for definitions and docs of specific subroutines
>     that the *.exp files use?
> 
>     Suppose I've found out about the set_xfail subroutine, and want to
>     look into it and see whether it can be used to disable a test:
>     where do I look for its definition or its documentation?  There's
>     the test's *.exp file, there's the testsuite/lib/ subdirectory
>     (which, btw, is only mentioned in passing in gdbint.texinfo), and
>     then there are Dejagnu and Expect and TCL.  Could we please have a
>     list of files or directories to look in, and a list of
>     documentation files to browse?

For this item, I don't know how to help you. You know about the testsuite/lib
dir, which is as much as I know. I often have a look at the TCL reference
manuals when I need something, but I've managed to get by pretty
much by copy/pasting previous code.

-- 
Joel

