New standards and guidelines are being drafted, and they differ considerably from the ones we have discussed here: they take a more operational approach rather than addressing specific technologies.
The first such requirement, from the WCAG 2.0 October 2007 draft, says that all reasonable function must be available from the keyboard. This corresponds to the software provision §1194.21(a) that we discussed above (Accessibility Problems With Events).
2.1.1 Keyboard: All functionality of the content is operable through a keyboard interface without requiring specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints. (Level A)
For comparison, the current draft of the proposed Section 508 Standards includes the following wording for the keyboard access provision.
3-T - Keyboard Operation: All functionality of the product operable through the user interface must be operable through a keyboard interface without requiring specific timings for individual keystrokes. The only exception is where the underlying function requires input that depends on the path of the user's movement and not just the endpoints.
The wording of the two is essentially the same, and both require that the function of the page be accessible from the keyboard. In 1999, when the first versions of the Standards and Guidelines were finished, there was no issue of "access the function of the page" with the keyboard. For the web, the function consisted of links and forms, and with your browser you could Tab to all of those. Keyboard access was a software issue, not a web issue; the browser took care of it. With the web becoming more interactive through Ajax and Web 2.0, this is changing. Keyboard access to the function of a web page is now an issue!
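To make this concrete, a common Ajax pattern is to build a clickable "button" out of a styled div with an onclick handler. Mouse users can operate it, but keyboard users can neither Tab to it nor activate it. A minimal sketch of one fix (the element and handler names here are illustrative, not taken from any page discussed above): add a tabindex so the element joins the Tab order, and handle the keys a native button responds to. The event is accepted as a plain object so the key logic can be shown, and run, outside a browser.

```javascript
// Sketch: keyboard support for a script-driven "button" (a styled div).
// In a page you would also put the element into the Tab order, e.g.:
//   <div tabindex="0" onclick="save()" onkeydown="return !activateOnKey(event, save)">Save</div>
// The handler accepts anything that has a keyCode, so the decision logic
// works without a browser.
function activateOnKey(event, activate) {
  // Enter (13) and Space (32) are the keys a native button responds to.
  if (event.keyCode === 13 || event.keyCode === 32) {
    activate();    // run the same code the onclick handler runs
    return true;   // handled: the page should cancel the default action
  }
  return false;    // any other key: let the browser process it normally
}
```

The essential point is that the keyboard path ends up invoking exactly the same function as the mouse path, so the two stay in sync as the page evolves.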
Besides keyboard access, a second crucial software requirement is that assistive technology be able to tell what various objects are for - the purpose or role of a control or widget that you land on. The idea is that as you move around an application with a screen reader, not only do you hear the text that is present, you also hear the identity of the objects you come across: text entry fields, check boxes, radio buttons, tree views, and tab controls. In the 1999 Section 508 Software Accessibility Standards the requirement for making this information available is phrased as follows:
Sufficient information about a user interface element including the identity, operation and state of the element shall be available to assistive technology. When an image represents a program element, the information conveyed by the image must also be available in text.
For the web in 1999 this was not a problem. Screen readers knew about the HTML user interface elements (links and form controls) and announced the information including states and values: "check box checked," "select menu United States selected". The time when links and simple HTML controls were the only active elements on the web is over. Already in this section we have seen an example - the MSN page with a tree view and a tab control that we discussed in the section on hidden content.
Requirements for role and state information on interface elements will appear in the new guidelines. The WCAG 2.0 draft requires, at the highest level (corresponding to Priority 1), that assistive technology be able to determine the purpose (role) and state of user interface widgets coded with scripting languages.
4.1.2 Name, Role, Value: For all user interface components, the name and role can be programmatically determined; states, properties, and values that can be set by the user can be programmatically determined and programmatically set; and notification of changes to these items is available to user agents, including assistive technologies. (Level A)
The current wording for the proposed Section 508 standards for this success criterion is the following:
3-O - User Interface Components: For all user interface components, including form elements and those generated by scripts:
- the name and role must be programmatically determined
- states, properties, and values that can be set by the user must be programmatically determined and can be programmatically set, and
- notification of changes to these items is available to user agents, including assistive technologies.
For example: Frames must be titled with text that facilitates frame identification and navigation.
The only difference between the two is the awkward addition about frames, which are not usually thought of as user interface components.
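To see what satisfying this success criterion can look like in practice, consider a scripted checkbox built from a div, like the custom widgets on the MSN page discussed earlier. One way to expose its role and state, assuming WAI-ARIA markup (itself still a draft alongside these guidelines), is to declare role="checkbox" in the markup and keep the aria-checked attribute in sync from script. The widget below is a hedged sketch, not code from any cited page; only getAttribute and setAttribute touch the element, so a plain object with those two methods stands in for a DOM node.

```javascript
// Sketch: exposing state for a custom checkbox, assuming WAI-ARIA.
// The markup would carry the role and initial state, e.g.:
//   <div role="checkbox" aria-checked="false" tabindex="0">Subscribe</div>
// The script's job is to update aria-checked whenever the widget toggles,
// so a screen reader can announce "check box checked" just as it does for
// a native <input type="checkbox">.
function toggleChecked(el) {
  var checked = el.getAttribute('aria-checked') === 'true';
  el.setAttribute('aria-checked', checked ? 'false' : 'true');
  return !checked;  // the new state, for the rest of the page script
}
```

With the role in the markup and the state maintained by script, both the name/role and the state/value halves of the success criterion can be programmatically determined, and state changes are exposed through the normal attribute-change notifications user agents already provide.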