Accessibility Testing and Reporting With TAW3
When it comes to assessing a Web site's accessibility, any Web designer should know by now that simply running the mark-up through an automated testing tool is not enough. Automated tools are limited: they can only test for correct syntax, for easily ascertained "yes or no" situations, and against a set of (sometimes quite arbitrary) heuristics, which are often based on the tool developers' own interpretation of the accessibility guidelines.
Nonetheless, automated checkers are a useful tool in the arsenal of accessibility-conscious designers, provided that their results are checked for false positives/negatives [1] and backed up by the necessary manual checks, carried out by a knowledgeable human tester who is familiar with any potential access issues and how they manifest themselves in a Web site.
This article gives a quick run-down of Test de Accesibilidad Web (TAW) 3 [2], a free tool for testing Web pages against WCAG 1.0, developed by the Spanish Fundación CTIC (Centre for the Development of Information and Communication Technologies in Asturias).
TAW3 is available both as an online version (similar to other tools such as Cynthia [3] and WAVE [4]) and as a stand-alone Java application. In this article, we will concentrate on the stand-alone version, which as of version 3 is also available in English.
Application Interface
The interface consists of a single window, divided into three main areas:
- standard menu bar;
- quick access buttons;
- analyser tab, which is further broken down into: a) analysis scope, b) action buttons, c) analyser settings, d) analysis result, e) source code view.
The analyser panel represents the main work area of the application. Any number of analyser panels can be opened within the same application window, each acting independently of the others.
In the scope section of each analyser we can choose to test a single page, or to "spider" a site by defining the types of links to follow, the depth level and the overall number of pages to test.
After the initial automated part of the analysis has been completed, TAW presents us with a summary of all issues it found, as well as the number of points that require human judgement on the part of the tester.
Switching to the individual "Priority 1", "Priority 2" and "Priority 3" tabs, we get a comprehensive list of all the WCAG 1.0 checkpoints. From here, we can see which particular part of the page's mark-up triggered an error or requires human review.
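To make this distinction concrete, consider the classic example of priority 1 checkpoint 1.1 ("Provide a text equivalent for every non-text element"), shown here with purely illustrative mark-up: a missing alt attribute can be failed automatically, whereas the quality of an existing text equivalent can only be judged by a human.

    <!-- an automated check can fail this outright: no text equivalent -->
    <img src="logo.gif">

    <!-- this passes the automated check, but a human reviewer must
         still judge whether the alt text is meaningful in context -->
    <img src="logo.gif" alt="Acme Corporation home page">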
Right-clicking on any of the checkpoints brings up a context menu to aid the tester in assessing the related issues:
- "techniques" provides a direct link to the particular checkpoint in the W3C WCAG 1.0 document, which gives the expanded explanation of the checkpoint itself, relevant examples, and suggested techniques that satisfy the checkpoint;
- "visual checking" creates a customised local copy of the currently analysed Web page, which adds mark-up and styling in order to visually highlight the potential issues and facilitate manual assessment. This approach of 'assisted manual testing' is similar to that of HERA [5], a Web-based tool produced by the Spanish Fundación Sidar.
Of course, this step can be further complemented by additional manual checking methodologies, such as those outlined in my previous article on "Evaluating Web Sites for Accessibility with Firefox" [6];
- once the tester has carried out the manual check, the result can be recorded via the 'validity level' sub-menu; the checkpoint can be marked as a clear pass or fail, as well as 'not tested', 'cannot tell' and 'not applicable'.
Where necessary, it is possible to make specific annotations for each checkpoint tested (for instance, to back up a particular validity level that was chosen). So, for a comprehensive test, we work through each of the checkpoints, ensuring that all automated checks have yielded the correct result and assigning a validity level to all of the human checks.
At any point, we can get a slightly more compact checklist for the current analyser via the Checkpoint › Checklist menu option.
An interesting feature of TAW is the ability to create sub-sets of WCAG 1.0 to test against. From the "Guidelines settings" dialog we can not only choose which level of compliance we're assessing (single A, double A, triple A), but also exclude specific checkpoints. For instance, if we make a conscious judgement that the priority 3 checkpoint 10.4 "Until user agents handle empty controls correctly, include default, place-holding characters in edit boxes and text areas" is no longer relevant (i.e. that the "until user agents" clause has been satisfied by all user agents in common use today), we can explicitly omit this point from our testing regime.
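For reference, the kind of mark-up that checkpoint 10.4 asks for looks something like the following (a hypothetical form fragment):

    <!-- default, place-holding characters in an edit box and a text area -->
    <input type="text" name="search" value="Enter search terms here">
    <textarea name="comments" rows="5" cols="40">Type your comments here</textarea>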
Although very limited in scope (particularly due to a bug, see below), TAW also allows us to define additional checks via the "User checkings" dialog. Currently, we are only able to select the HTML element the check applies to and, through either a regular or a wizard-based dialog, define which attribute this element is either required or not allowed to have in order to pass the check. For instance, we could create a rule to ensure that all BLOCKQUOTE elements found in a page also have a CITE attribute.
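To illustrate with some hypothetical mark-up, such a rule would fail the first of the following two quotations and pass the second:

    <!-- fails the user-defined check: no cite attribute -->
    <blockquote>
      <p>The quick brown fox jumps over the lazy dog.</p>
    </blockquote>

    <!-- passes the user-defined check: cite attribute present -->
    <blockquote cite="http://example.com/quote-source">
      <p>The quick brown fox jumps over the lazy dog.</p>
    </blockquote>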
Once we have finished the overall test, we can save our results in three different ways: as an HTML summary (which simply presents the frequency of errors encountered in tabular format), as an HTML "TAW report" (which adds markers and error descriptions to the analysed page, in the same way as TAW's online version), and as an Evaluation and Report Language (EARL) [7] file, a recent XML-based format which was created specifically with this type of application in mind.
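To give a flavour of the last option: an EARL report is essentially a list of assertions recording who tested what, against which criterion, and with what outcome. The following minimal sketch shows that general shape in RDF/XML; note that the namespace URI, property names and outcome values here follow later EARL drafts and are assumptions on my part; the exact vocabulary produced by TAW3 against the 2005 working draft may differ.

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:earl="http://www.w3.org/ns/earl#">
      <!-- one assertion: the tested page failed WCAG 1.0 checkpoint 1.1 -->
      <earl:Assertion>
        <earl:subject rdf:resource="http://example.com/page.html"/>
        <earl:test rdf:resource="http://www.w3.org/TR/WAI-WEBCONTENT/#tech-text-equivalent"/>
        <earl:result>
          <earl:TestResult>
            <earl:outcome rdf:resource="http://www.w3.org/ns/earl#failed"/>
          </earl:TestResult>
        </earl:result>
      </earl:Assertion>
    </rdf:RDF>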
Problems and Bugs
The main problem with TAW lies with its interface. Currently, there is a great deal of redundancy: in most cases, the same function can be accessed via a button, a menu option and a context menu. This is not a bad thing in itself, but it becomes confusing when it is not carried through consistently. For instance, the checkpoint checklist dialog provides buttons for "Visual checking" and "Checkpoint annotations", but does not offer the tester any way to set the checkpoints' validity levels.
The order in which some steps need to be carried out also matters, but no indication is given via the interface. For instance, "Reports" and "Guidelines" settings need to be chosen before an analysis is started. Changing the settings in an existing analyser pane has no effect, even after hitting the "refresh" button.
Some of the errors reported by the automated test are contentious, resting on a particular interpretation of the relevant WCAG checkpoint. For example, the tool fails priority 2 checkpoint 3.5 "Use header elements to convey document structure and use them according to specification" if, for whatever reason, header levels "jump" by more than one level (e.g. an H3 immediately following an H1, completely skipping H2). Neither WCAG nor the HTML specification explicitly forbids this, although it is admittedly not the best of practices.
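For instance, TAW flags the following fragment under checkpoint 3.5, even though it is perfectly valid HTML:

    <h1>Annual report</h1>
    <!-- an error is reported here: the heading level jumps from 1 to 3 -->
    <h3>Financial summary</h3>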
Although the tool allows for the "spidering" of an entire site, if the same issue appears on multiple pages (which can happen particularly when a site is based on a common template), it is currently not possible to review the issue just once and set the validity level across all tested pages. This makes using the tool on more than one page at a time quite a tedious experience.
Guideline settings, and even the entire current TAW project, can be saved for future use, but surprisingly the developers opted for a proprietary file format. Personally, I would have welcomed the use of some flavour of XML, which would have opened up these files for potential reuse and manipulation outside of the TAW application itself.
Lastly, what I can only assume is a bug in the program: although it is possible to create user-defined checks, the tool does not allow the tester to set their validity level: the option is simply greyed out. This renders the whole concept of user checks fairly pointless for anything other than purely automated types of checks, where the validity can be determined by the software without margin for error or human judgement.
Conclusion
TAW3 is certainly not as powerful and comprehensive as some of the commercial, enterprise-level testing packages (such as the accessibility module of Watchfire WebXM [8]). In its current implementation, the application has a few bugs and an overall clumsy interface, which make it look unnecessarily complicated and confusing at first glance.
However, the strong emphasis on human checking and some of the advanced features (like the capability to export an EARL report, to test only against a sub-set of WCAG 1.0, and to create user-defined checks) make this a very interesting free tool for small-scale testing.
References
1. Davies, M. "Sitemorse fails due diligence", http://www.isolani.co.uk/blog/access/SiteMorseFailsDueDiligence
2. Fundación CTIC, Test de Accesibilidad Web (TAW) online tool and download page, http://www.tawdis.net/taw3/cms/en
3. HiSoftware Cynthia Says, http://www.contentquality.com/
4. WAVE 3.0 Accessibility Tool, http://wave.webaim.org/
5. Fundación Sidar, "HERA: Cascading Style Sheets for Accessibility Review" tool, http://www.sidar.org/ex_hera/
6. Lauke, P. H. "Evaluating Web Sites for Accessibility with Firefox", Ariadne 44, July 2005, http://www.ariadne.ac.uk/issue44/lauke/
7. W3C Evaluation and Report Language (EARL) 1.0 Schema, W3C Working Draft, 9 September 2005, http://www.w3.org/TR/2005/WD-EARL10-Schema-20050909/
8. Watchfire WebXM product page, http://www.watchfire.com/products/webxm/