The goal of the Open Accessibility Checks (OAC) is to create a universal series of tests that may be applied to HTML pages to determine their compliance with accessibility standards. This is an open source project that can be used to support worldwide efforts to create accessible web material.
Current web accessibility checking tools all vary in the tests they use. This causes confusion for users when each tool gives a different report on a page's accessibility status. It also weakens accessibility guidelines, because there is no standard method of testing compliance. It is hoped that the OAC will provide a standard that every accessibility checking tool can implement, and therefore provide a consistent accessibility rating to the user.
Here is a link to the entire list of checks. This list is not sorted in any particular order, though in the future there will be several methods of sorting and viewing the master list. Each check has been given an ID number for easy reference. These ID numbers have no significance and do not in any way indicate how checks are related or their accessibility priority.
These tests are not specifically oriented towards any one accessibility standard; they may be used by any standard that wishes to test for accessibility. An accessibility standard can be defined as a subset of these checks. For example, here are several standards created from the master list of checks:
Currently, these OAC standards have no official status. In the future, the governing bodies that maintain accessibility standards, such as the WAI or BITV, can give OAC standards official status by stating that compliance with their standard requires passing a given set of OAC accessibility tests.
These checks may be implemented by a human or any software accessibility checking tool. Current tools that have committed to supporting the OAC are:
Each check is described in a clear and unambiguous format, so that anyone wishing to implement the check, whether in software or by hand, should be able to do so easily.
Each check has a trigger element, with an option to specify an attribute or content. For example, the check for an image missing ALT text uses the IMG element as a trigger. If a check requires testing of the document's content, it uses the BODY element as a trigger. For checking content outside the HTML document, such as the DOCTYPE declaration, a check may specify the 'pre-html' pseudo-element as a trigger.

| trigger | IMG element |
|---|---|
| test | trigger element must contain ALT attribute |
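The trigger/test pair above can be sketched in code. The following Python fragment, using only the standard library's `html.parser`, is one possible reading of this check; the class and function names are illustrative and not part of the OAC.

```python
from html.parser import HTMLParser

class ImgAltCheck(HTMLParser):
    """Sketch of the 'image missing ALT text' check: the IMG element
    is the trigger, and the test requires an ALT attribute on it."""

    def __init__(self):
        super().__init__()
        self.failures = []  # positions of IMG elements missing ALT

    def handle_starttag(self, tag, attrs):
        if tag == "img":  # trigger element
            if not any(name == "alt" for name, _ in attrs):  # test
                self.failures.append(self.getpos())

def check_img_alt(html):
    """Return True if every IMG element carries an ALT attribute."""
    checker = ImgAltCheck()
    checker.feed(html)
    return len(checker.failures) == 0
```

For example, `check_img_alt('<img src="logo.gif" alt="Company logo">')` returns True, while `check_img_alt('<img src="logo.gif">')` returns False.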
(TODO - Describe the check format.)
Each check has at least two associated test files. One test file contains an example of the accessibility problem and should fail the check. The other contains an example of HTML code that should pass the check.
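Such a pair of test files might look like the following. The file contents below are illustrative, not the actual OAC test files, and the inline check is a deliberately minimal sketch.

```python
import re

# Hypothetical pair of test files for the "image missing ALT text" check.
FAIL_CASE = """<html><body>
<img src="logo.gif">
</body></html>"""  # no ALT attribute: should fail the check

PASS_CASE = """<html><body>
<img src="logo.gif" alt="Company logo">
</body></html>"""  # ALT present: should pass the check

def has_alt_on_every_img(html):
    """Minimal sketch: every IMG tag in the page must carry an ALT attribute."""
    return all("alt=" in tag for tag in re.findall(r"<img\b[^>]*>", html))
```

Running the check against both files confirms the implementation: the fail case is reported as a failure and the pass case is not.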
Each of these tests can be implemented by a software testing tool, although user intervention may still be required. Each test has one of three levels of 'confidence' that the test result is accurate:
If a test with a 'high' confidence rating fails, no user intervention is required to determine that the test result is accurate. If a test with a 'medium' or 'low' confidence rating fails, user intervention is required to determine whether the result is accurate or not.
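The intervention rule above can be expressed as a small table in code. This is a sketch: the level names mirror the text, but the data structure and function are illustrative.

```python
# Whether a failed test at each confidence level needs human review.
NEEDS_USER_REVIEW = {
    "high": False,   # a failure is definitely accurate; report it directly
    "medium": True,  # a failure may be a false positive; ask the user
    "low": True,     # likely to need human judgement; ask the user
}

def requires_intervention(confidence, failed):
    """A failed test needs user review unless its confidence is high."""
    return failed and NEEDS_USER_REVIEW[confidence]
```

A passing test never requires review; only medium- and low-confidence failures are queued for the user.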
Anyone may create an accessibility check and contribute it to the master list. Each governing body for an accessibility standard is free to accept or reject the check as part of their standard. To contribute a check, contact Chris Ridpath ([email protected]).
(This is the exciting part.) Each check is described in an XML file using a structured format. This file may be machine-read and the checks created dynamically. Here is a site that uses a Java servlet to read the current checks file and perform accessibility checking. This design allows new accessibility checks to be implemented rapidly. (TODO - A more complete description of this system.)
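To illustrate dynamic loading, the sketch below reads a checks file with the standard library's `xml.etree`. The XML schema shown here is hypothetical; the actual OAC file format may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical checks file; the real OAC XML schema may differ.
CHECKS_XML = """
<checks>
  <check id="1">
    <trigger element="IMG"/>
    <test>trigger element must contain ALT attribute</test>
  </check>
</checks>
"""

def load_checks(xml_text):
    """Build (id, trigger element, test description) tuples from the checks
    file, so new checks are picked up without changing the checker's code."""
    root = ET.fromstring(xml_text)
    return [
        (c.get("id"), c.find("trigger").get("element"), c.findtext("test"))
        for c in root.findall("check")
    ]
```

A checker built this way only needs a new entry in the XML file, not a code change, to gain a new check.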
The source code for this evaluator is available from our CVS server at... under a ... license.
The Open Accessibility Checks project is currently supported by the Adaptive Technology Resource Centre at the University of Toronto.
Nov. 24, 2003