This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.



RFC: KFAIL DejaGnu patch


Well, I am attaching the DejaGnu changes for KFAILs.

Rob: could you please take a look, especially at the documentation
changes, and see if they are fit for the master repository?

Michael Chastain: Would you be willing to help me test this?
I've tried it and it seems to work.  Your scripts should also
be happy with it.

Best regards,
Fernando


P.S.: I was trying to find some test to try setup_kfail on.  I went
looking for the setup_xfail *-*-* calls, but most don't have any
comment explaining why they are marked that way.  Then I tried looking
for the FAILs to see if I could find a bug entry for each of them --
but it is not a trivial task.  I guess we will need everybody's help
and memories to gradually get things properly documented and linked to
the bug entries.
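For anyone who wants to experiment before the testsuite is annotated,
here is a minimal sketch of how a test could be marked; the test
message and the bug id "gdb/123" are hypothetical:

```tcl
# Hypothetical test-script fragment (the message and bug id are
# made up for illustration).

# Mark the next result as a known failure on all targets, citing a
# bug-tracker entry.  The bug id must contain no '-', so that it
# cannot be mistaken for a target triplet.
setup_kfail "gdb/123" "*-*-*"

# A subsequent fail is then reported as
#   KFAIL: print foo	(PRMS: gdb/123)
# and counted under "# of known failures" in the summary.
fail "print foo"
```

This fragment needs the DejaGnu framework loaded (setup_kfail and
fail are framework procedures), so it is not runnable standalone.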




-- 
Fernando Nasser
Red Hat Canada Ltd.                     E-Mail:  fnasser@redhat.com
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9
Index: dejagnu/runtest.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/runtest.exp,v
retrieving revision 1.7
diff -c -p -r1.7 runtest.exp
*** runtest.exp	2001/01/21 22:47:21	1.7
--- runtest.exp	2002/04/06 00:19:24
*************** set psum_file   "latest"	;# file name of
*** 49,56 ****
  
  set exit_status	0		;# exit code returned by this program
  
! set xfail_flag  0
! set xfail_prms	0
  set sum_file	""		;# name of the file that contains the summary log
  set base_dir	""		;# the current working directory
  set logname     ""		;# the users login name
--- 49,58 ----
  
  set exit_status	0		;# exit code returned by this program
  
! set xfail_flag  0		;# indicates that a failure is expected
! set xfail_prms	0		;# GNATS prms id number for this expected failure
! set kfail_flag  0		;# indicates that it is a known failure
! set kfail_prms	0		;# bug id for the description of the known failure 
  set sum_file	""		;# name of the file that contains the summary log
  set base_dir	""		;# the current working directory
  set logname     ""		;# the users login name
Index: dejagnu/doc/dejagnu.texi
===================================================================
RCS file: /cvs/src/src/dejagnu/doc/dejagnu.texi,v
retrieving revision 1.2
diff -c -p -r1.2 dejagnu.texi
*** dejagnu.texi	2001/07/17 16:37:27	1.2
--- dejagnu.texi	2002/04/06 00:19:28
*************** case:
*** 453,467 ****
  @item PASS
  A test has succeeded.  That is, it demonstrated that the assertion is true.
  
- @cindex XFAIL, avoiding for POSIX
- @item XFAIL
- @sc{posix} 1003.3 does not incorporate the notion of expected failures,
- so @code{PASS}, instead of @code{XPASS}, must also be returned for test
- cases which were expected to fail and did not.  This means that
- @code{PASS} is in some sense more ambiguous than if @code{XPASS} is also
- used.  For information on @code{XPASS} and @code{XFAIL}, see
- @ref{Invoking runtest,,Using @code{runtest}}.
- 
  @item FAIL
  @cindex failure, POSIX definition
  A test @emph{has} produced the bug it was intended to capture.  That is,
--- 453,458 ----
*************** it has demonstrated that the assertion i
*** 469,477 ****
  message is based on the test case only.  Other messages are used to
  indicate a failure of the framework.
  
- As with @code{PASS}, @sc{posix} tests must return @code{FAIL} rather
- than @code{XFAIL} even if a failure was expected.
- 
  @item UNRESOLVED
  @cindex ambiguity, required for POSIX
  A test produced indeterminate results.  Usually, this means the test
--- 460,465 ----
*************** real test case yet.
*** 516,523 ****
  @end ftable
  
  @noindent
! The only remaining output message left is intended to test features that
! are specified by the applicable @sc{posix} standard as conditional:
  
  @ftable @code
  @item UNSUPPORTED
--- 504,512 ----
  @end ftable
  
  @noindent
! The only remaining @sc{posix} output message left is intended to test
! features that are specified by the applicable @sc{posix} standard as
! conditional:
  
  @ftable @code
  @item UNSUPPORTED
*************** running the test case.  For example, a t
*** 529,543 ****
  @code{gethostname} would never work on a target board running only a
  boot monitor.
  @end ftable
!   
  DejaGnu uses the same output procedures to produce these messages for
  all test suites, and these procedures are already known to conform to
  @sc{posix} 1003.3.  For a DejaGnu test suite to conform to @sc{posix}
! 1003.3, you must avoid the @code{setup_xfail} procedure as described in
! the @code{PASS} section above, and you must be careful to return
  @code{UNRESOLVED} where appropriate, as described in the
  @code{UNRESOLVED} section above.
  
  @node Future Directions
  @section Future directions
  @cindex future directions
--- 518,594 ----
  @code{gethostname} would never work on a target board running only a
  boot monitor.
  @end ftable
! 
  DejaGnu uses the same output procedures to produce these messages for
  all test suites, and these procedures are already known to conform to
  @sc{posix} 1003.3.  For a DejaGnu test suite to conform to @sc{posix}
! 1003.3, you must avoid the @code{setup_xfail} and @code{setup_kfail}
! procedures (see below), and you must be careful to return
  @code{UNRESOLVED} where appropriate, as described in the
  @code{UNRESOLVED} section above.
+   
+ Besides the @sc{posix} messages, DejaGnu provides variations of the
+ @code{PASS} and @code{FAIL} messages that can be helpful to tool
+ maintainers.  Note, however, that this feature is not @sc{posix} 1003.3
+ compliant, so it should be avoided if compliance is necessary.
+ 
+ The additional messages are:
+ 
+ @ftable @code
+ 
+ @item XFAIL
+ A test is expected to fail in some environment(s) due to some bug
+ in the environment that we hope will be fixed someday (but that we
+ can do nothing about, as it is not a bug in the tool being tested).
+ The procedure @code{setup_xfail} is used to indicate that a failure
+ is expected.
+ 
+ @cindex XFAIL, avoiding for POSIX
+ @sc{posix} 1003.3 does not incorporate the notion of expected failures,
+ so @sc{posix} tests must return @code{FAIL} rather
+ than @code{XFAIL} even if a failure was expected.
+ 
+ @item KFAIL
+ A test is known to fail in some environment(s) due to a known bug
+ in the tool being tested (identified by a bug id string).  This
+ exists so that, after a bug is identified and properly registered
+ in a bug tracking database (Gnats, for instance), the count of
+ failures can be kept at zero.  Having zero as a baseline on all
+ platforms allows the tool developers to immediately detect regressions
+ caused by changes (which may affect some platforms and not others).
+ The connection with a bug tracking database allows for automatic
+ generation of the BUGS section of man pages or Release Notes, as
+ well as a ``Bugs Fixed this Release'' section (by comparing to a
+ previous release's set of known failures).
+ The procedure @code{setup_kfail} is used to indicate that a failure
+ is known to exist.
+ 
+ @cindex KFAIL, avoiding for POSIX
+ As with @code{XFAIL}, @sc{posix} tests must return @code{FAIL} rather
+ than @code{KFAIL} even if a failure was due to a known bug.
+ 
  
+ @item XPASS
+ A test was expected to fail with either @code{XFAIL} or @code{KFAIL}
+ but passed instead.  Someone may have fixed the bug and forgotten to
+ unmark the test, or whatever problem used to exist in the
+ environment has been corrected (the test may also be failing to detect
+ the failure due to some environment or output changes, so this must be
+ investigated as well).
+ 
+ For @sc{posix} tests, @code{PASS}, instead of @code{XPASS}, must be
+ returned for test cases which were expected to fail and did not.  This
+ means that @code{PASS} is in some sense more ambiguous than if
+ @code{XPASS} is also used.
+ 
+ @end ftable
+ 
+ See also @ref{Invoking runtest,,Using @code{runtest}}.
+ For information on how to mark tests as expected/known to fail by using
+ @code{setup_xfail} and @code{setup_kfail}, see
+ @ref{framework.exp,,Core Internal Procedures}.
+ 
+ 
  @node Future Directions
  @section Future directions
  @cindex future directions
*************** succeed.
*** 612,618 ****
  @kindex XPASS
  @cindex successful test, unexpected
  @cindex unexpected success
! A pleasant kind of failure: a test was expected to fail, but succeeded.
  This may indicate progress; inspect the test case to determine whether
  you should amend it to stop expecting failure.
  
--- 663,669 ----
  @kindex XPASS
  @cindex successful test, unexpected
  @cindex unexpected success
! A pleasant kind of failure: a test was expected/known to fail, but succeeded.
  This may indicate progress; inspect the test case to determine whether
  you should amend it to stop expecting failure.
  
*************** regress; inspect the test case and the f
*** 628,636 ****
  @cindex expected failure
  @cindex failing test, expected
  A test failed, but it was expected to fail.  This result indicates no
! change in a known bug.  If a test fails because the operating system
! where the test runs lacks some facility required by the test, the
! outcome is @code{UNSUPPORTED} instead.
  
  @item UNRESOLVED
  @kindex UNRESOLVED
--- 679,697 ----
  @cindex expected failure
  @cindex failing test, expected
  A test failed, but it was expected to fail.  This result indicates no
! change in a known environment bug.  If a test fails because the operating
! system where the test runs lacks some facility required by the test
! (i.e. failure is due to the lack of a feature, not the existence of a bug),
! the outcome is @code{UNSUPPORTED} instead.
! 
! @item KFAIL
! @kindex KFAIL
! @cindex known failure
! @cindex failing test, known
! A test failed, but it was known to fail.  This result indicates no
! change in a known bug.  If a test fails because of a problem in the
! environment rather than in the tool being tested (one expected to
! be fixed one day), the outcome is @code{XFAIL} instead.
  
  @item UNRESOLVED
  @kindex UNRESOLVED
*************** recorded by your configuration's choice 
*** 844,851 ****
  change how anything is actually configured unless --build is also
  specified; it affects @emph{only} DejaGnu procedures that compare the
  host string with particular values.  The procedures @code{ishost},
! @code{istarget}, @code{isnative}, and @code{setup_xfail} are affected by
! @samp{--host}. In this usage, @code{host} refers to the machine that the
  tests are to be run on, which may not be the same as the @code{build}
  machine. If @code{--build} is also specified, then @code{--host} refers
  to the machine that the tests wil, be run on, not the machine DejaGnu is
--- 905,913 ----
  change how anything is actually configured unless --build is also
  specified; it affects @emph{only} DejaGnu procedures that compare the
  host string with particular values.  The procedures @code{ishost},
! @code{istarget}, @code{isnative}, @code{setup_xfail} and
! @code{setup_kfail} are affected by @samp{--host}.
! In this usage, @code{host} refers to the machine that the
  tests are to be run on, which may not be the same as the @code{build}
  machine. If @code{--build} is also specified, then @code{--host} refers
  to the machine that the tests wil, be run on, not the machine DejaGnu is
*************** common shell wildcard characters to spec
*** 1860,1877 ****
  output; use it as a link to a bug-tracking system such as @sc{gnats}
  (@pxref{Overview,, Overview, gnats.info, Tracking Bugs With GNATS}).
  
  @cindex @code{XFAIL}, producing
  @cindex @code{XPASS}, producing
! Once you use @code{setup_xfail}, the @code{fail} and @code{pass}
! procedures produce the messages @samp{XFAIL} and @samp{XPASS}
! respectively, allowing you to distinguish expected failures (and
! unexpected success!) from other test outcomes.
! 
! @emph{Warning:} you must clear the expected failure after using
! @code{setup_xfail} in a test case.  Any call to @code{pass} or
! @code{fail} clears the expected failure implicitly; if the test has some
! other outcome, e.g. an error, you can call @code{clear_xfail} to clear
! the expected failure explicitly.  Otherwise, the expected-failure
  declaration applies to whatever test runs next, leading to surprising
  results.
  
--- 1922,1964 ----
  output; use it as a link to a bug-tracking system such as @sc{gnats}
  (@pxref{Overview,, Overview, gnats.info, Tracking Bugs With GNATS}).
  
+ See the notes under @code{setup_kfail} (below).
+ 
+ @item setup_kfail "@var{config}  @r{[}@var{bugid}@r{]}"
+ @c two spaces above to make it absolutely clear there's whitespace---a
+ @c crude sort of italic correction!
+ @cindex test case, known failure
+ @cindex failure, known
+ @cindex known failure
+ Declares that the test is known to fail on a particular set of
+ configurations.  The @var{config} argument must be a list of full
+ three-part @code{configure} target names; in particular, you may not use
+ the shorter nicknames supported by @code{configure} (but you can use the
+ common shell wildcard characters to specify sets of names).  The
+ @var{bugid} argument is mandatory, and is used only in the logging file
+ output; use it as a link to a bug-tracking system such as @sc{gnats}
+ (@pxref{Overview,, Overview, gnats.info, Tracking Bugs With GNATS}).
+ 
  @cindex @code{XFAIL}, producing
+ @cindex @code{KFAIL}, producing
  @cindex @code{XPASS}, producing
! Once you use @code{setup_xfail} or @code{setup_kfail}, the @code{fail}
! and @code{pass} procedures produce the messages @samp{XFAIL} or
! @samp{KFAIL} and @samp{XPASS}, respectively, allowing you to distinguish
! expected/known failures (and unexpected success!) from other test outcomes.
! 
! If a test is marked as both expected to fail and known to fail for a
! certain configuration, a @samp{KFAIL} message is generated.  As
! @samp{KFAIL} messages are expected to draw more attention than
! @samp{XFAIL} ones, this will hopefully ensure the test result is not
! overlooked.
! 
! @emph{Warning:} you must clear the expected/known failure after using
! @code{setup_xfail} or @code{setup_kfail} in a test case.  Any call to
! @code{pass} or @code{fail} clears the expected/known failure implicitly;
! if the test has some other outcome, e.g. an error, you can call
! @code{clear_xfail} to clear the expected failure or @code{clear_kfail}
! to clear the known failure explicitly.  Otherwise, the expected-failure
  declaration applies to whatever test runs next, leading to surprising
  results.
  
*************** for a particular set of configurations. 
*** 1951,1956 ****
--- 2038,2052 ----
  list of configuration target names.  It is only necessary to call
  @code{clear_xfail} if a test case ends without calling either
  @code{pass} or @code{fail}, after calling @code{setup_xfail}.
+ 
+ @item clear_kfail @var{config}
+ @cindex cancelling known failure
+ @cindex known failure, cancelling
+ Cancel a known failure (previously declared with @code{setup_kfail})
+ for a particular set of configurations.  The @var{config} argument is a
+ list of configuration target names.  It is only necessary to call
+ @code{clear_kfail} if a test case ends without calling either
+ @code{pass} or @code{fail}, after calling @code{setup_kfail}.
  
  @item verbose @r{[}-log@r{]} @r{[}-n@r{]} @r{[}--@r{]} "@var{string}" @var{number}
  @cindex @code{verbose} builtin function
Index: dejagnu/lib/framework.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/lib/framework.exp,v
retrieving revision 1.6
diff -c -p -r1.6 framework.exp
*** framework.exp	2001/01/15 08:12:07	1.6
--- framework.exp	2002/04/06 00:19:28
*************** proc unknown { args } {
*** 252,258 ****
  # Without this, all messages that start with a keyword are written only to the
  # detail log file.  All messages that go to the screen will also appear in the
  # detail log.  This should only be used by the framework itself using pass,
! # fail, xpass, xfail, warning, perror, note, untested, unresolved, or
  # unsupported procedures.
  #
  proc clone_output { message } {
--- 252,258 ----
  # Without this, all messages that start with a keyword are written only to the
  # detail log file.  All messages that go to the screen will also appear in the
  # detail log.  This should only be used by the framework itself using pass,
! # fail, xpass, xfail, kfail, warning, perror, note, untested, unresolved, or
  # unsupported procedures.
  #
  proc clone_output { message } {
*************** proc clone_output { message } {
*** 265,271 ****
  
      regsub "^\[ \t\]*(\[^ \t\]+).*$" "$message" "\\1" firstword;
      case "$firstword" in {
! 	{"PASS:" "XFAIL:" "UNRESOLVED:" "UNSUPPORTED:" "UNTESTED:"} {
  	    if $all_flag {
  		send_user "$message\n"
  		return "$message"
--- 265,271 ----
  
      regsub "^\[ \t\]*(\[^ \t\]+).*$" "$message" "\\1" firstword;
      case "$firstword" in {
! 	{"PASS:" "XFAIL:" "KFAIL:" "UNRESOLVED:" "UNSUPPORTED:" "UNTESTED:"} {
  	    if $all_flag {
  		send_user "$message\n"
  		return "$message"
*************** proc log_summary { args } {
*** 365,371 ****
  	if { $testcnt > 0 } {
  	    set totlcnt 0;
  	    # total all the testcases reported
! 	    foreach x { FAIL PASS XFAIL XPASS UNTESTED UNRESOLVED UNSUPPORTED } {
  		incr totlcnt test_counts($x,$which);
  	    }
  	    set testcnt test_counts(total,$which);
--- 365,371 ----
  	if { $testcnt > 0 } {
  	    set totlcnt 0;
  	    # total all the testcases reported
! 	    foreach x { FAIL PASS XFAIL KFAIL XPASS UNTESTED UNRESOLVED UNSUPPORTED } {
  		incr totlcnt test_counts($x,$which);
  	    }
  	    set testcnt test_counts(total,$which);
*************** proc log_summary { args } {
*** 389,395 ****
  	    }
  	}
      }
!     foreach x { PASS FAIL XPASS XFAIL UNRESOLVED UNTESTED UNSUPPORTED } {
  	set val $test_counts($x,$which);
  	if { $val > 0 } {
  	    set mess "# of $test_counts($x,name)";
--- 389,395 ----
  	    }
  	}
      }
!     foreach x { PASS FAIL XPASS XFAIL KFAIL UNRESOLVED UNTESTED UNSUPPORTED } {
  	set val $test_counts($x,$which);
  	if { $val > 0 } {
  	    set mess "# of $test_counts($x,name)";
*************** proc setup_xfail { args } {
*** 442,447 ****
--- 442,484 ----
  }
  
  
+ #
+ # Set up a flag indicating that a failure is known
+ #
+ # A bug report ID _MUST_ be specified, and is the first argument.
+ # It must be a string containing no '-' so we can be sure someone
+ # did not just forget it and we end up using a target triplet as
+ # the bug id.
+ #
+ # Multiple target triplet patterns can be specified for targets
+ # for which the test is known to fail.
+ #
+ #
+ proc setup_kfail { args } {
+     global kfail_flag
+     global kfail_prms
+     
+     set kfail_prms 0
+     set argc [ llength $args ]
+     for { set i 0 } { $i < $argc } { incr i } {
+ 	set sub_arg [ lindex $args $i ]
+ 	# A prms number: we assume this is a string with no '-' characters
+ 	if [regexp "^\[^\-\]+$" $sub_arg] { 
+ 	    set kfail_prms $sub_arg
+ 	    continue
+ 	}
+ 	if [istarget $sub_arg] {
+ 	    set kfail_flag 1
+ 	    continue
+ 	}
+     }
+ 
+     if {$kfail_prms == 0} {
+ 	perror "Attempt to set a kfail without specifying bug tracking id"
+     }
+ }
+ 
+ 
  # check to see if a conditional xfail is triggered
  #	message {targets} {include} {exclude}
  #              
*************** proc clear_xfail { args } {
*** 558,563 ****
--- 595,622 ----
  }
  
  #
+ # Clear the kfail flag for a particular target
+ #
+ proc clear_kfail { args } {
+     global kfail_flag
+     global kfail_prms
+     
+     set argc [ llength $args ]
+     for { set i 0 } { $i < $argc } { incr i } {
+ 	set sub_arg [ lindex $args $i ]
+ 	case $sub_arg in {
+ 	    "*-*-*" {			# is a configuration triplet
+ 		if [istarget $sub_arg] {
+ 		    set kfail_flag 0
+ 		    set kfail_prms 0
+ 		}
+ 		continue
+ 	    }
+ 	}
+     }
+ }
+ 
+ #
  # Record that a test has passed or failed (perhaps unexpectedly)
  #
  # This is an internal procedure, only used in this file.
*************** proc record_test { type message args } {
*** 566,571 ****
--- 625,631 ----
      global exit_status
      global prms_id bug_id
      global xfail_flag xfail_prms
+     global kfail_flag kfail_prms
      global errcnt warncnt
      global warning_threshold perror_threshold
      global pf_prefix
*************** proc record_test { type message args } {
*** 613,622 ****
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    }
  	}
  	UNTESTED {
! 	    # The only reason we look at the xfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
--- 673,689 ----
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    }
  	}
+ 	KFAIL {
+ 	    if { $kfail_prms != 0 } {
+ 		set message [concat $message "\t(PRMS: $kfail_prms)"]
+ 	    }
+ 	}
  	UNTESTED {
! 	    # The only reason we look at the xfail/kfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $kfail_flag && $kfail_prms != 0 } {
! 		set message [concat $message "\t(PRMS $kfail_prms)"]
! 	    } elseif { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
*************** proc record_test { type message args } {
*** 624,641 ****
  	}
  	UNRESOLVED {
  	    set exit_status 1
! 	    # The only reason we look at the xfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
  	    }
  	}
  	UNSUPPORTED {
! 	    # The only reason we look at the xfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
--- 691,712 ----
  	}
  	UNRESOLVED {
  	    set exit_status 1
! 	    # The only reason we look at the xfail/kfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $kfail_flag && $kfail_prms != 0 } {
! 		set message [concat $message "\t(PRMS $kfail_prms)"]
! 	    } elseif { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
  	    }
  	}
  	UNSUPPORTED {
! 	    # The only reason we look at the xfail/kfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $kfail_flag && $kfail_prms != 0 } {
! 		set message [concat $message "\t(PRMS $kfail_prms)"]
! 	    } elseif { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
*************** proc record_test { type message args } {
*** 676,689 ****
      set warncnt 0
      set errcnt 0
      set xfail_flag 0
      set xfail_prms 0
  }
  
  #
  # Record that a test has passed
  #
  proc pass { message } {
!     global xfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
--- 747,762 ----
      set warncnt 0
      set errcnt 0
      set xfail_flag 0
+     set kfail_flag 0
      set xfail_prms 0
+     set kfail_prms 0
  }
  
  #
  # Record that a test has passed
  #
  proc pass { message } {
!     global xfail_flag kfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
*************** proc pass { message } {
*** 693,699 ****
  	unset compiler_conditional_xfail_data
      }
      
!     if $xfail_flag {
  	record_test XPASS $message
      } else {
  	record_test PASS $message
--- 766,772 ----
  	unset compiler_conditional_xfail_data
      }
      
!     if {$xfail_flag || $kfail_flag} {
  	record_test XPASS $message
      } else {
  	record_test PASS $message
*************** proc pass { message } {
*** 704,710 ****
  # Record that a test has failed
  #
  proc fail { message } {
!     global xfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
--- 777,783 ----
  # Record that a test has failed
  #
  proc fail { message } {
!     global xfail_flag kfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
*************** proc fail { message } {
*** 714,720 ****
  	unset compiler_conditional_xfail_data
      }
  
!     if $xfail_flag {
  	record_test XFAIL $message
      } else {
  	record_test FAIL $message
--- 787,795 ----
  	unset compiler_conditional_xfail_data
      }
  
!     if $kfail_flag {
! 	record_test KFAIL $message
!     } elseif $xfail_flag {
  	record_test XFAIL $message
      } else {
  	record_test FAIL $message
*************** proc init_testcounts { } {
*** 845,850 ****
--- 920,926 ----
      set test_counts(PASS,name) "expected passes"
      set test_counts(FAIL,name) "unexpected failures"
      set test_counts(XFAIL,name) "expected failures"
+     set test_counts(KFAIL,name) "known failures"
      set test_counts(XPASS,name) "unexpected successes"
      set test_counts(WARNING,name) "warnings"
      set test_counts(ERROR,name) "errors"
Index: dejagnu/testsuite/lib/libsup.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/testsuite/lib/libsup.exp,v
retrieving revision 1.1.1.1
diff -c -p -r1.1.1.1 libsup.exp
*** libsup.exp	1999/11/09 01:28:42	1.1.1.1
--- libsup.exp	2002/04/06 00:19:28
*************** proc make_defaults_file { defs } {
*** 64,69 ****
--- 64,71 ----
      puts ${fd} "set unsupportedcnt 0"
      puts ${fd} "set xfail_flag 0"
      puts ${fd} "set xfail_prms 0"
+     puts ${fd} "set kfail_flag 0"
+     puts ${fd} "set kfail_prms 0"
      puts ${fd} "set mail_logs 0"
      puts ${fd} "set multipass_name 0"
      catch "close $fd"
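As with setup_xfail, a setup_kfail mark persists until the next pass
or fail, so a test that exits early has to clear it explicitly with
clear_kfail.  A sketch of that case; the helpers runto_main and
gdb_test follow GDB testsuite conventions, and the bug id is invented:

```tcl
# Hypothetical fragment: the mark is set for Linux targets only.
setup_kfail "gdb/456" "*-*-linux*"

if { ![runto_main] } {
    # The test ends with neither pass nor fail, so drop the mark
    # explicitly; otherwise it would apply to whatever test runs next.
    clear_kfail "*-*-linux*"
    untested "could not run to main"
    return
}

# If this fails on a Linux target, it is reported as KFAIL, not FAIL.
gdb_test "print bar" " = 42" "print bar"
```

Like the previous sketch, this only runs inside the DejaGnu/GDB test
harness, not as a standalone Tcl script.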
