Conformance, Interoperability and Portability Testing: Proposed Procedures and Practices
Date: 27 July 2012
Author: Jacques Durand
Technical Advisory Board (TAB) Work Product
If you have questions or comments, Email the TAB.
Who is this for?
This document is for spec editors and TC members focused on interoperability of TC specifications.
1. Objectives
This document proposes testing practices for OASIS standards and specifications. It addresses the following requirements and assumptions:
- The primary objective is to enhance the quality of standards, by enabling a testing process that begins before the standardization process is complete, thus providing timely feedback.
- Another objective is to speed-up the development of conforming and interoperable implementations, thus reducing the time to deploy and productize a standard.
- Another objective is to give some visibility and recognition to the testing efforts that a TC went through before submitting a standard for OASIS approval. This is done by augmenting the standardization process with optional publishing of test material and test reports. For example, there is today a requirement to provide an “implementation report” with a declaration of successful use of the standard, along with an optional interoperability report. This document recommends that a conformance test suite, followed by some level of conformance testing of implementations, be the actual yardstick by which to measure "successful use" of a standard.
- This is not a certification process. The goal is instead to facilitate the testing of implementations of OASIS standards. These guidelines recommend a general process for doing so, that can be assisted to some extent by OASIS staff and resources, and by similar experiences in other TCs.
- The target audience is TC members: the expectation is that TC members will write test assertions (ideally a TC deliverable), and that TC members (or their company) are likely to be early implementers as well as candidates for testing.
2. Definitions
Conformance: Conformance of an implementation to a specification or to a set of requirements is defined as fulfillment by the implementation of all requirements specified.
Conformance Profiles and Levels: Often implementations do not use all the features within a specification. In order to accommodate these implementations it may be desirable to divide a specification into sets of functions. Implementers would still be conforming if they implemented one or more of these sets rather than the entire standard. These sets are used to define profiles or levels. Profiles usually match different kinds of implementations of the same specification (e.g. a document format, a processor for such documents). Levels are used to indicate nested subsets of functionality, ranging from minimal or core functionality to full or complete functionality. [QAFramework from W3C]
Conformance Clause: A statement in a specification that lists all the criteria that must be satisfied by an artifact or an implementation in order to conform to the specification. The clause refers to a set of normative statements and other parts of the specification for details.
Conformance Section: A section in a specification that contains one or more conformance clauses. As different types of artifacts or implementations may conform in different ways, there might be more than one conformance clause. Conformance clauses may refer to and reuse other clauses in order to define different levels or profiles of conformance.
Conformance Target: An artifact such as a protocol, document, platform, process or service, which is the subject of conformance clauses and normative statements. There may be several conformance targets defined within a specification, and these targets may be diverse so as to reflect different aspects of a specification. For example, a protocol message and a protocol engine may be different targets.
Normative Statement: a statement made in the body of a specification that defines prescriptive requirements on a conformance target.
Reference Implementation: an implementation of a specification that is recognized as conforming (i.e. it passed a conformance test suite based on agreed success criteria), and which is representative of best practices and of expected quality, so that it can set a standard for future implementations. However, it is not recommended to use a reference implementation to assess the conformance of another implementation.
Test Assertion: a Test Assertion is a statement of behavior for an implementation: an action or condition that can be measured or tested. It is derived from the specification’s normative statements and bridges the gap between the narrative of the specification and the test cases. Each test assertion is an independent, complete, testable statement on how to verify that an implementation satisfies a normative statement.
Test Case: A set of test tools, software or files (data, programs, scripts, or instructions for manual operations) that checks a particular requirement in the specification, to determine whether the results produced by the implementation match the expected results as defined by the specification. Typically, a Test Assertion results in one or more Test Cases. Each Test Case includes: (1) a description of the test purpose (what is being tested: the conditions, requirements or capabilities which are to be addressed by a particular test), (2) the pass/fail criteria, and (3) a reference to the requirement or section in the standard from which the test case is derived (traceability back to the specification).
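The relationship between normative statements, test assertions, and test cases can be sketched as simple data structures. This is an illustrative sketch only: the field names and identifier formats are assumptions, not part of any OASIS schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestAssertion:
    """A testable statement derived from one normative statement."""
    assertion_id: str   # unique label, e.g. "TA-0001" (hypothetical format)
    normative_ref: str  # label of the normative statement it covers
    target: str         # conformance target, e.g. "document processor"
    predicate: str      # the condition that must hold for the target

@dataclass
class TestCase:
    """Concrete test material derived from one or more test assertions."""
    case_id: str
    purpose: str        # (1) what is being tested
    pass_criteria: str  # (2) how pass/fail is decided
    spec_refs: list = field(default_factory=list)  # (3) traceability to the spec

# A test assertion typically results in one or more test cases:
ta = TestAssertion("TA-0001", "REQ-3.2", "document processor",
                   "MUST reject a document without a root element")
tc = TestCase("TC-0001-a", "Submit a document lacking a root element",
              "Processor reports an error", spec_refs=["Section 3.2"])
```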
Test Suite: A combination of Test Cases and test documentation. It is used to verify conformance, interoperability or portability of an implementation against the requirements of the standard, or of a conformance profile or level.
Test Harness: An execution environment for Test Suites, e.g. including test drivers or test components. It is usually dependent on platform specifics such as programming language and operating system. Also called a Testbed.
Specification Coverage: Specifies the degree to which specification requirements are addressed by a test suite.
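Specification coverage can be computed as the fraction of normative statements reached by at least one test case. The following is a minimal sketch, under the assumption (not mandated by any OASIS policy) that each test case records the normative statements it traces back to:

```python
def specification_coverage(normative_ids, test_cases):
    """Fraction of normative statements covered by at least one test case.

    normative_ids: set of labels of all normative statements in the spec.
    test_cases: iterable of (case_id, covered_ids) pairs.
    """
    covered = set()
    for _case_id, covered_ids in test_cases:
        covered.update(covered_ids)
    covered &= set(normative_ids)  # ignore references to unknown statements
    return len(covered) / len(normative_ids) if normative_ids else 1.0

# Example: 4 normative statements, 3 of them covered -> 0.75 coverage
cov = specification_coverage(
    {"REQ-1", "REQ-2", "REQ-3", "REQ-4"},
    [("TC-1", ["REQ-1", "REQ-2"]), ("TC-2", ["REQ-2", "REQ-3"])],
)
```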
3. Interoperability vs. Portability
The conformance targets of a specification (i.e. what is being specified) usually fall into three major categories:
Data artifacts: These are specified as formats and syntaxes. Examples of these are text documents, templates, messages, structured (e.g. XML) definitions, interface definitions, configuration definitions.
State machines: These are specified as logical workflows of tasks or events, or transition rules for states. Implementations of these are processes and procedures, exchange protocols between parties, transactions.
Processors: These are specified as behaviors and interfaces for engines that process data artifacts and/or execute state machines. Example implementations of these are workflow engines, document processors, message handlers.
These categories are not mutually exclusive: often a specification will address two or all three categories. For example, some specifications describe both a text document format, and also the expected behavior of a text processor for these documents. Also, in some cases it is not possible to clearly distinguish which category an implementation belongs to. A process definition (such as WS-BPEL, or ebBP) can be seen as both a representation of a state machine and of a data artifact: it specifies both an execution logic, and a representation syntax.
Interoperability is then defined as the ability of two Processor implementations to interact with each other in compliance with the specification. This interaction may involve data artifacts (e.g. messages) produced by one implementation and consumed by the other. It may also just involve interfaces (e.g. API calls). Interoperability does not imply symmetry: the processors do not have to be identical and may act in different roles. Messaging handlers are a typical kind of implementation for which interoperability is expected.
Portability is defined as equivalent processing (by two or more Processor implementations) of the same data artifact instance or of the same state machine representation. In that case the representation (data artifact or state machine) is said to be portable. For example, it is expected that a document of a particular format be portable across implementations of the related document processor.
Some implementations may be seen as relevant to both: e.g. in a layered architecture, a software module may operate on top of a software platform. Both are Processors, so they may be expected to interoperate (interoperability). But one may also consider the module as an item that is portable from one platform implementation to another, i.e. one that will entail equivalent processing by the overall system in both cases.
4. Testing for Conformance, Interoperability and Portability
Every specification should be written so that it provides enough guidance for the definition of Conformance tests. Conformance testing serves a different purpose than Interoperability or Portability testing. While the notion of implementation conformance applies to all specifications, Interoperability is only relevant to some specifications, and so is Portability.
From a product viewpoint, these different forms of testing will achieve different goals:
Conformance testing is about product compliance: the assurance that your product is conforming to the standard.
Interoperability and Portability testing is about product usability: the assurance that your product will function as expected when operating with other products or platforms.
These two forms of testing are complementary, and can in no way substitute for each other. More precisely:
Conformance testing: Process of verifying that an implementation of a specification fulfills the requirements of this specification, or of a subset of these in the case of a particular conformance profile or level. Testing is based on a test suite representing a set of conformance test cases that an implementation under test may pass or fail. Overall criteria for deciding conformance must be defined. The test environment may automate the testing, e.g. by simulating an operating environment, generating inputs and verifying outputs of the implementation under test.
In all cases, a well-defined conformance testing process will speed-up the deployment of conforming products and facilitate the adoption of a standard.
Interoperability testing: Process of verifying that two or more implementations of the same specification - typically two "processors" (see above) - can interoperate. Testing is based on a test suite representing a set of interaction scenarios, that implementations under test may pass or fail. The logistics of Interoperability testing are usually more costly (time, coordination, set-up, human effort) than those of Conformance testing. Conformance does not guarantee interoperability, and interoperability testing is no substitute for a conformance test suite. But experience shows that Interoperability testing is more successful and less costly when Conformance of implementations has been tested first.
Portability testing: Process of verifying that an implementation (typically, of the "data artifact" kind or "state machine" kind, see above) will be processed the same way by different processor implementations. Testing is based on a test suite representing a set of processing scenarios with defined expected outcomes that implementations under test may pass or fail.
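The distinction can be illustrated in miniature: a conformance check exercises one implementation against expected results defined by the specification, while an interoperability check wires two implementations together in producer and consumer roles. This is a toy sketch; the producer/consumer interface and the example "handlers" are purely hypothetical.

```python
def conformance_test(implementation, test_inputs, expected_outputs):
    """Check a single implementation against outcomes defined by the spec."""
    results = []
    for inp, expected in zip(test_inputs, expected_outputs):
        results.append(implementation(inp) == expected)
    return results

def interoperability_test(producer, consumer, scenarios):
    """Check that artifacts produced by one processor are accepted by another.

    Interoperability is not symmetric: producer and consumer play
    different roles, as with two messaging handlers.
    """
    results = []
    for scenario in scenarios:
        message = producer(scenario)       # first implementation emits an artifact
        results.append(consumer(message))  # second implementation must accept it
    return results

# Two toy "message handlers" that agree on an uppercase wire format:
producer = str.upper
consumer = lambda msg: msg.isupper()
assert all(interoperability_test(producer, consumer, ["hello", "world"]))
```

Note that the interoperability check says nothing about whether either handler conforms to the full specification on its own, which is why conformance testing remains necessary.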
5. Testing Methodology
The following process is recommended:
Recommended Testing Practices
- Always do Conformance testing first. Conformance testing can eliminate many problems that would show during Interoperability or Portability testing. This will reduce the time and effort necessary to achieve successful interoperability or portability testing.
- Automate Conformance testing as much as possible. Conformance is the form of testing easiest to automate as it involves a single implementation under test.
- Design Interoperability or Portability test suites so that they are not redundant with tests already done during conformance testing: repeating tests that belong to conformance testing is a common mistake. Because the total set of possible scenarios between two implementations is very large and combinatorial in nature, one should choose the most common usage scenarios.
Starting with Testable Specifications
First things first: specifications should be written in a way that is "test friendly". This means:
- A specification should clearly distinguish normative statements and requirements from non-normative parts and sentences. Unique labels for each normative statement greatly help in tracing test assertions and test cases back to the specification (and vice versa).
- A specification should contain Conformance Clause(s) that are precise enough about what is expected of an implementation, and clearly identify profiles and levels of conformance (if several) and match these to an appropriate type of implementation. [see the conformance clause guidelines: http://docs.oasis-open.org/templates/TCHandbook/ConformanceGuidelines.html]
- A specification should be associated with a set of test assertions (see the testability guidelines: http://wiki.oasis-open.org/tab/TestabilityGuide). These assertions may either be in the specification itself, or in a companion document.
Also, see the interoperability guidelines (http://wiki.oasis-open.org/tab/InteropGuide) for errors to avoid when writing specifications.
General Process for Conformance Testing:
NOTE: in the following, the term 'members' identifies some OASIS members, who do some testing work within the context of a TC - either the TC originator of the specification under test, or another TC focused on testing tasks.
- 1. The TC writes a set of test assertions matching the normative statements pertaining to a conformance profile defined by the conformance clause(s), and approves it (ideally, one of its deliverables).
- 2. Members develop or identify one or more Test Harnesses for executing the test suites. NOTE: the development and/or operation of a Test Harness is not covered by OASIS IPR policy. Members should have a clear understanding and agreement on the IPR implications of using or developing a test harness and related software artifacts. Solutions inclusive of all potential users, such as open-source licensing, should be considered.
- 3. Members develop a conformance test suite based on test assertions, for execution in the selected test harness. There might be more than one equivalent test suite, for running over different test harnesses. Various implementation options may be considered, e.g. the same test suite may be adaptable to different test harnesses, or a test harness can be configured for several test suites.
- 4. Members write a test plan, defining which subset of tests is to be run, and including logistic details on how tests will be performed. The test plan must define outcome objectives, e.g. a core set of tests that must be successfully passed, or a percentage of tests that must be passed.
- 5. One of two modes of operation may be chosen: either (a) self-testing, where users operate the testbed (test harness or part of it) themselves by downloading it or by remote access, or (b) a test event is staged with a test operator offering testbed services and assistance to participants.
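The outcome objectives of step 4 can be sketched as a simple evaluation rule. The rule below (a mandatory core set plus an overall pass-rate threshold) is one hypothetical combination; an actual test plan defines its own criteria.

```python
def evaluate_test_plan(results, core_tests, min_pass_rate):
    """Decide overall conformance from per-test results.

    results: dict mapping test id -> True (passed) / False (failed).
    core_tests: ids of tests that must all pass, regardless of overall rate.
    min_pass_rate: fraction of all tests that must pass (e.g. 0.9).
    """
    if not all(results.get(t, False) for t in core_tests):
        return False  # a mandatory core test failed or was not run
    pass_rate = sum(results.values()) / len(results)
    return pass_rate >= min_pass_rate

results = {"TC-1": True, "TC-2": True, "TC-3": False, "TC-4": True}
# Core set passes and 3/4 = 75% of tests pass, above a 70% threshold
ok = evaluate_test_plan(results, core_tests=["TC-1", "TC-2"], min_pass_rate=0.7)
```

Making the criteria explicit and executable in this way removes ambiguity when several members run the same test plan independently.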
General Process for Interoperability and Portability Testing:
- 1. Ideally, individual implementations should have undergone Conformance Testing separately before doing Interoperability or Portability Testing, so that most interoperability issues due to conformance failures are already resolved. See the "Conformance testing" process.
- 2. Members write an interoperability or portability test suite covering at least the most common use of the specification. This includes (a) test scenarios between partners or between implementations, (b) a description of the test environment to be used for executing this test suite.
- 3. Members write a test plan defining which test suite or subset from (2) is to be run, what test scenarios are used, what the expected or acceptable outcomes are, and logistic details on how tests will be performed.
Examples and additional material about some of the above steps are:
- Recommendations for test assertions writing:
The Test Assertions guidelines from OASIS: http://www.oasis-open.org/apps/org/workgroup/tag/document.php?document_id=40010 and more generally material from the Test Assertions Guidelines TC http://www.oasis-open.org/apps/org/workgroup/tag/
The QA Framework from W3C: http://www.w3.org/TR/qaframe-spec/
- Examples of test assertions and test cases for conformance:
In the Service Component Architecture (SCA) OASIS TCs, test assertions for the SCA Assembly model: http://docs.oasis-open.org/opencsa/sca-assembly/sca-assembly-1.1-test-assertions-cd01.pdf
In the SCA OASIS TCs: test cases (for a conformance test suite) for the SCA Assembly model: http://docs.oasis-open.org/opencsa/sca-assembly/sca-assembly-1.1-testcases-cd01.pdf
- Examples of test assertions and test suites for interoperability:
Examples of test assertions embedded in a specification document (HTML) are provided on the WS-I site; see the BP 2.0 Profile (http://ws-i.org/profiles/BasicProfile-2.0-2010-11-09.html) and the RSP Profile (http://www.ws-i.org/Profiles/ReliableSecureProfile-1.0.html)
An example of test scenarios used in an interoperability test suite (here for RSP profile interoperability testing): http://www.ws-i.org/docs/RSP_10_Test_Scenarios.pdf
A more complete interoperability test suite package, including a test plan document and test scenarios, can be downloaded from WS-I: http://www.ws-i.org/deliverables/workinggroup.aspx?wg=basicprofile, see the BP 1.2 and 2.0 Interop Test Suites link (http://ws-i.org/profiles/BPInteroperabilityTestSuite-1.2-2.0-Final.zip)
6. Testing and the TC Process
There are a number of areas in the TC process where testing should be considered.
NOTE: Generally, there is a trade-off between time-to-market and quality of a specification. Depending on the specification, interoperability may or may not take precedence over functional needs. Testing and preparation for testing have a cost in time and effort that should be weighed by the TC. Consequently the optimal approach to testing may vary from one TC to another, as well as the timing of testing efforts within the general standardization schedule. The following recommendations correspond to an ideal situation:
When chartering the TC: The proposers should consider adding some form of testing deliverable(s) into the charter. However, in some cases it is more appropriate to have testing deliverables handled in a different TC that could be chartered just for this. At the very minimum, it is recommended that test assertions are written either as a separate deliverable or included in a specification. Test assertions help express the normative requirements in a precise way and aid in the debugging of the specification, and the authors of the specification are most qualified to write these. They are the launch pad for writing conformance and interoperability/portability test suites.
During Specification reviews: Specification writing and implementation testing should be an iterative process once an initial stable specification has been drafted. During public reviews, testing can provide valuable feedback into the specification that other reviewers might miss. The benefits can be obtained even if a TC only writes Test assertions as these can help to uncover gaps and ambiguities in specifications.
When a specification is submitted as a Candidate OASIS Standard: Three statements of use must be provided before an attempt to advance the specification is made. Statements of use are an indication of industry support for a specification, and are greatly enhanced if conformance and/or interoperability/portability testing has taken place. Any results from testing activities should accompany the statements of use. The TC Process does not require testing to take place, but if a TC has engaged in any testing it is recommended to provide a report on testing activities with the statements of use.
After a specification has become an OASIS Standard: It is possible for OASIS Members to request submission of the standard to a De Jure body such as ISO or ITU. The Liaison policy covers the process for doing this. While such submissions initially required only an informal interoperability demo, a more rigorous conformance, interoperability and/or portability testing effort is more aligned with current expectations. It is recommended that a more systematic testing procedure be defined, executed and reported on when submitting an OASIS standard to a De Jure organization.
Because conformance testing simplifies and speeds up interoperability tests, it is recommended that conformance testing take place before interoperability testing.
Finally, it is worth noting that testing plays an important role in the adoption of a standard and in product marketability, and that testing and demo activities can take place after a standard has been adopted, including outside the TC or even outside OASIS.
Status of Testing Deliverables:
It is appropriate to have test assertions and test suites undergo public reviews, since not all vendors participate in the TC that defines them. This means such deliverables should be considered for Committee Note or Committee Specification status - but do not seem to justify an OASIS standard status.
The following options should be considered:
- Keeping Test Assertions separate from a related Test Suite. As previously mentioned, writing Test Assertions and writing Test Suites require different expertise, and while the authors of a specification are well positioned to write Test Assertions for it, they may not be qualified or have time to write a related Test Suite. A TC that does not have the time, resources or intent to design or develop Test Suites may still list Test Assertions as deliverable.
- Test Assertions may be considered either as Committee Note or Committee Specification material. It makes sense to have Test Assertions be part of the same work product as the specification under test (e.g. either embedded in the specification or in a separate document), as they are a good complement to specification material. In this case they could become part of a final OASIS Standard. In contrast, a Test Suite is a significant deliverable in itself, of a different nature, and should preferably not delay or overload the full standardization process of a specification.
A Test Suite document should generally not be considered beyond Committee Note material, as it is not normative material in itself but rather a technical document explaining the practical details of a testing procedure, e.g. acting as a reference guide for test engineers. Besides, the IPR status of related software is not covered by OASIS IPR policies. However in some cases one may consider a Test Suite specification as Committee Specification material, if it is expected that various implementations will be developed that need to be aligned with such a specification. The conformance clause of a Test Suite specification would then target actual test suites as candidate implementations.
Appendix
What is in a Test Report
A test report (either conformance testing or interoperability testing) should contain:
1. Identification of related test material:
- Identification of the test suite that was used (version, authors, reference to test material). This comprises the test harness software, and possibly the set of test scenarios used to run the test harness.
- Identification of the test assertions on which the testing was based, in case the test suite does not refer to these.
- Reference to Test plan (see above).
2. Details on Test participants and event:
- Date or dates of the testing.
- Participants involved (who facilitated the testing, whose implementation(s) were under test, whose test harness or test drivers were used, etc.).
- General description of how the tests were conducted, and overall logistics (e.g. was the testing distributed between distant participants over several days, or was it a face-to-face event?).
- Observations and comments useful for future test events.
3. Test Results:
- Summary showing overall test results for the exercise: how many tests were successfully passed, or failed, or showed other outcomes.
- For each test case of the test suite, or each test scenario: the test results (did the implementation under test pass, fail, generate warnings, or produce an inconclusive outcome?). NOTE: the test results need not be clearly associated with each participant, as participants may wish to have their test results remain anonymous.
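The summary portion of such a report can be derived mechanically from the per-test outcomes. The sketch below uses illustrative outcome labels (a real test suite defines its own) and deliberately omits participant identities, in line with the anonymity note above.

```python
from collections import Counter

def summarize_results(outcomes):
    """Aggregate per-test outcomes into the report's summary section.

    outcomes: list of (test_id, outcome) pairs, where outcome is one of
    "pass", "fail", "warning", "inconclusive" (labels are illustrative).
    Participant identities are intentionally not part of the summary.
    """
    counts = Counter(outcome for _test_id, outcome in outcomes)
    total = len(outcomes)
    return {"total": total,
            **{k: counts.get(k, 0)
               for k in ("pass", "fail", "warning", "inconclusive")}}

summary = summarize_results([
    ("TC-1", "pass"), ("TC-2", "pass"), ("TC-3", "fail"), ("TC-4", "warning"),
])
# summary -> {"total": 4, "pass": 2, "fail": 1, "warning": 1, "inconclusive": 0}
```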
NOTE: Besides a test report, a by-product of interoperability testing may be a set of “implementation guidelines” or an “implementation profile” that addresses interoperability issues not covered by conformance to the specification. A specification may leave out details as an implementation or configuration choice that are nevertheless necessary for full interoperability. In addition, small differences in interpreting a specification may elude conformance testing and yet lead two implementations to be non-interoperable or non-portable. Such guidelines help overcome interoperability issues not covered by the specification, and also contribute to faster adoption of a standard.