This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: relying on testsuite results


On Mon, Apr 6, 2009 at 12:18 PM, Thiago Jung Bauermann
<bauerman@br.ibm.com> wrote:

>> Meanwhile the test continues to fail :-(
>
> I'm sincerely curious about why this bothers you to the point of
> remembering this discussion from one month ago and fixing it.

My previous experience maintaining a product with an extensive test suite on
multiple platforms taught me that there are really only two states:
"no failures" and "some failures". Once you enter the "some failures" state,
it tends to deteriorate to the point where you get new bug reports,
check your testsuite, and discover that it was already catching the
failure :-(

> I'd love to rely on testsuite results like this, but unfortunately there
> are too many "non-deterministic testcases" (as I call them) and they add
> a great deal of noise.

Yes, they do.
I think we should have a "fixit day", when everybody cleans them up.

> So the most use I get from the testsuite is to run regression tests on
> each patch I submit, and tediously eyeball the diff looking to see if
> any of the PASS<->FAIL flips actually mean something.

Yes, I do the same.

But I don't maintain/test the "before" tree. Instead I compare with
gdb.sum-20090309, and so this failure continues to "stick out".
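The baseline-comparison workflow described here can be sketched as a small shell snippet. This is a minimal illustration, not any script from the GDB tree: the two `.sum` files and their contents are made-up examples, and the idea is just to filter both summaries down to result lines and diff them so that PASS<->FAIL flips stand out as paired -/+ lines.

```shell
# Hypothetical baseline and current test summaries (contents invented
# for illustration; real gdb.sum files use the same PASS:/FAIL: lines).
cat > gdb.sum-baseline <<'EOF'
PASS: gdb.base/break.exp: break main
FAIL: gdb.base/watch.exp: watch x
PASS: gdb.base/step.exp: step over
EOF
cat > gdb.sum-current <<'EOF'
PASS: gdb.base/break.exp: break main
PASS: gdb.base/watch.exp: watch x
FAIL: gdb.base/step.exp: step over
EOF

# Keep only the result lines, so diff noise from timestamps etc. is gone.
grep -E '^(PASS|FAIL|XPASS|XFAIL|KFAIL|UNTESTED|UNSUPPORTED):' \
    gdb.sum-baseline > baseline.res
grep -E '^(PASS|FAIL|XPASS|XFAIL|KFAIL|UNTESTED|UNSUPPORTED):' \
    gdb.sum-current > current.res

# Flips show up as matching </> pairs; diff exits nonzero on differences,
# so mask that for scripting.
diff baseline.res current.res || true
```

With a fixed baseline like gdb.sum-20090309 on the left-hand side, a test that went from FAIL back to PASS (or the reverse) keeps "sticking out" in every run's diff until it is actually addressed.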

> Do you have a way out of this except going through each of the
> unreliable tests and staring at them long enough to see why they flip
> (and that can be tricky)?

No, I don't :-(

-- 
Paul Pluzhnikov

