This schema documentation describes the elements and attributes used in a submission of QT3 test results. The results file must be rooted at a test-suite-result element.
Denotes the root element of the results document. The root element contains the following elements: submission, which contains administrative details of the results submission; product, which contains information about the product under test; and test-set elements (which in turn contain a sequence of test-case elements), which detail the outcome of each test case that was executed.

Provides information on the language syntax of the tests used to produce the test results. For example, one may be running tests with XQueryX syntax. We assume that if the syntax element is not present, then the tests reported use XQuery syntax.
Provides administrative information about the results submission.
Information about the test run whose results are being reported: the version of the test suite that was used, and the date on which it was run.
The version of the test suite that was run. Use "CVS" to mean the version of the test suite that was current in the W3C CVS repository on the date of submission, or CVS-nnnn to identify an earlier CVS version; or a specific version number for a non-CVS version.
Optional text describing the submission
True if the submitter requires the details of the submitter and the product under test to remain anonymous; false if this information can be published.
True if the submitter has no relationship with the implementor. False if the submission is by, or on behalf of, the implementor.
When submission is by a third party, the submitter's name will be published but the
implementation will remain anonymous (regardless of the setting of the anonymous
attribute).
Identifies who created the results submission, and when.
The name of the individual who submitted the results
The email contact information for the individual who submitted the results
The name of the organization on whose behalf the results were submitted
The date on which the results were submitted
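A created element carrying the four items above might look like the following sketch. The attribute names (by, email, organization, on) are assumptions chosen for illustration; only the information items themselves are documented here:

```xml
<!-- Who created the submission, and when (attribute names assumed) -->
<created by="Jane Doe"
         email="jane.doe@example.com"
         organization="Example Corp"
         on="2013-01-15"/>
```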
Information about the product under test. This includes identification information, and information about the optional features implemented (or not implemented) by the product.
The W3C language/version implemented by the product under test for the purpose of this submission.
The name of the product under test
The version of the product under test
The name of the product's vendor
True if the product under test is generally available at the time of submission
True if the product under test is available under an open source license
Indicates a dependency which the implementation is or is not able to satisfy.
For every dependency that is present in the test catalog, the same dependency element should be present in the results file. If the product under test is able to satisfy the dependency, the attribute "satisfied" should have the value "true". If the product under test is not able to satisfy the dependency, the attribute "satisfied" should have the value "false". If the product under test is capable of being configured so that it runs both the tests with satisfied="true" and those with satisfied="false", the attribute "satisfied" should have the value "both".
The dependency with type="spec" is handled specially. The results file should only include results for one language, for example XQ30 or XP30, and should indicate which language was run by means of the "language" attribute of the "product" element.
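For example, using only the attribute names documented here (type, value, satisfied) and the value tokens described later in this documentation, a results file might declare its dependencies as in this sketch; the specific combinations are illustrative:

```xml
<!-- Results are reported for XQuery 3.0 -->
<dependency type="spec" value="XQ30" satisfied="true"/>
<!-- The product can be configured for either XML 1.0 or XML 1.1 -->
<dependency type="xml-version" value="1.1" satisfied="both"/>
```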
The default value "true" indicates that the dependency must be satisfied for the test to run. The setting "false" indicates that the test should only be run if the dependency is NOT satisfied. For example, this might be used in a test to show what happens if a language (such as lang="jp") is requested and the processor does not support that language.
The value "both" indicates that the test driver runs both those tests where the dependency is satisfied and those where it is not satisfied, the implication being that the product can be configured to either satisfy the dependency or not.
Denotes information on when the tests were run and which version of the implementation was used.
Represents the results of a test-set present in the test catalog. The name must match the name of a test-set in the catalog.
The element contains one test-case element for every test case that was actually run and is to be included in the submission.
The name of the test set, which must match the name of a test set in the catalog.
This element contains the outcome of a test. The test case name should be reported exactly as provided in the test suite catalog. The element contains the result and any comment, which is used to detail specific failures or different error codes.
The boolean attribute correctError is used where the test expects an error: a pass should be reported even if the implementation raises the wrong error code, but in such cases the attribute correctError should be set to false.
The name of the test case, which must match the name of a test case in the catalog.
The outcome of the test case. The values "pass" and "fail" indicate that the test was run and the test assertions were or were not satisfied. Note that a pass may be claimed if the test expects an error and the product threw an error, whether or not the error code matches; but in this case the attribute wrong-error-code must be set to true. The value "not-run" is assumed for tests that are present in the catalog and not present in the results submission; but including a "not-run" entry explicitly allows the inclusion of comments explaining why the test was not run (for example, it might be because a bug is outstanding against the test case).
Indicates that the test was run and the assertion was satisfied
Indicates that the test was run and the assertion was NOT satisfied
Indicates that the test was run and reported an error, and the expected result was an error, but the error code was not the error code expected. This is treated for the purpose of headline statistics as a pass, but should be reported in this way so the Working Group can assess the interoperability of error codes. Because an incorrect error code may indicate that a processor is not handling the query correctly, implementors are advised to check each case carefully.
Indicates that the test was not run because it is not applicable to this implementation (for example, because it depends on optional features, or because it is an XQuery test and the product under test is an XPath implementation).
Indicates that the test was not run because the results are disputed, by virtue of an unresolved bug report in the W3C Bugzilla system (which should be cited in the comment field). The dispute may relate to the test or to a statement in the specification that it relies upon. For statistical purposes this outcome is treated in the same way as "n/a".
Indicates that the test was not run because it exceeds limits imposed by the system under test, or consumes excess resources. Where the limits in question are explicitly identified as implementation-defined limits in the specification, it is preferable to handle this situation by having an explicit dependency in the test case, or by allowing an alternative error outcome for the test. However there are other cases such as long strings or sequences, or large integer values in the occurrence count of a regular expression, that are not explicitly discussed in the specification; or it may be a "soft" limit, where the processor could in theory execute the test successfully if given sufficient time and memory. For statistical purposes this outcome is treated in the same way as "n/a".
Indicates that the test was not run for unspecified reasons, perhaps because the test uses a feature that is not yet implemented in the product under test. For statistical purposes this outcome is treated in the same way as "fail". This is the default outcome assumed for tests that are present in the test catalog on the date of the test run, but which are not present in the results submission.
Optional comments about the test result, for example the reason why the test failed or why it was not run.
Should be set to true if a test pass is being claimed in the situation where the test case expects an error, and the product under test reports an error, but the error code reported by the product does not match the error code(s) expected.
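Drawing on the outcomes described above, test-case entries might be reported as in this sketch. The test names and the use of a comment child element are assumptions for illustration; the result and wrong-error-code attributes are as documented here:

```xml
<test-case name="fn-abs-001" result="pass"/>
<!-- Test expected an error; an error was raised, but with a different code -->
<test-case name="fn-error-5" result="pass" wrong-error-code="true"/>
<!-- Explicit not-run entry so that a comment can explain why -->
<test-case name="misc-depend-12" result="not-run">
  <comment>Blocked by an outstanding bug report against the test case</comment>
</test-case>
```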
The type attribute of a dependency element indicates what type of dependency it is: the set of possible values is enumerated. The most common type is spec, which indicates a dependency on specific versions of XPath or XQuery. In this case the corresponding value attribute is a space-separated list whose tokens are, for example, "XQ10" indicating XQuery 1.0, "XQ10+" indicating XQuery 1.0 or later, "XQ30+" indicating XQuery 3.0 or later, or "XP20+" indicating XPath 2.0 or later. The tokens in the list are alternatives; the test may be run if any of the dependencies is satisfied.

Similarly, if the type is xml-version, the corresponding value is a space-separated list whose tokens are "1.0" (XML 1.0), "1.1" (XML 1.1), "1.0:5+" (1.0, 5th edition or later), or "1.0:4-" (1.0, fourth edition or earlier).
As an attribute of the dependency element, provides a string value to be used to indicate the dependency.
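For instance, using only the type and value tokens documented above, catalog-side dependencies might be written as in this sketch:

```xml
<!-- The tokens are alternatives: either XQuery 1.0+ or XPath 2.0+ satisfies it -->
<dependency type="spec" value="XQ10+ XP20+"/>
<!-- Requires XML 1.0, fifth edition or later -->
<dependency type="xml-version" value="1.0:5+"/>
```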