1                  Introduction

1.1            AccessForAll Overview

The AccessForAll Specification (AfA) is intended to promote an inclusive user experience by enabling the matching of the characteristics of resources to the needs and preferences of individual users. The AfA specification consists of a common language for describing:

  • Digital learning resources. This is represented using the 1EdTech AccessForAll Digital Resource Description (DRD) v3.0 [AfADRD, 12] specification;
  • A learner’s needs and preferences with respect to how the learner can best interact with digital resources [1]. This is represented using the 1EdTech AccessForAll Personal Needs and Preferences (PNP) v3.0 [AfAPNP, 12] specification.

1.2            Scope and Context

This document is the AfA 3.0 Best Practice and Implementation Guide and should be used in conjunction with:

  • 1EdTech AccessForAll Digital Resource Description (DRD) v3.0 [AfADRD, 12];
  • 1EdTech AccessForAll Personal Needs and Preferences (PNP) v3.0 [AfAPNP, 12].

1.3            Structure of This Document

The structure of this document is:

2.     The Basic AccessForAll Scenario

An overview of the core scenario that the AfA addresses;

3.     Documents in this Specification

A summary of the documents and XSD namespaces used for the specification;

4.     Models & Notations

The notation used in the examples used in this document and the corresponding AfA information models;

5.     Extensibility

A description of how proprietary features can be added to the AfA;

6.     Refinement

A description of how AfA uses feature refinement;

7.     Matching Preferences to Resources

An explanation of how the PNP for a user is matched to a resource that has a DRD;

8.     Examples of AccessForAll Use in Systems

Examples of how the AfA can be used;

9.     Future Topics

Topics that will be addressed in later releases of the AfA specification;

Appendix A – Terms

The glossary of key terms used in AfA;

Appendix B – Advanced Material

Identification of some issues that will be addressed before the AfA 3.0 becomes a Final Release and for which feedback from the community is requested.

1.4            Nomenclature

AfA                        AccessForAll

AfA DRD              AccessForAll Digital Resource Description

AfA PNP               AccessForAll Personal Needs & Preferences

API                         Application Programming Interface

ARIA                     Accessible Rich Internet Applications

ASCII                     American Standard Code for Information Interchange

AT                          Assistive Technology

AT-SPI                   Assistive Technology Service Provider Interface

ATK                       Accessibility Toolkit

DAISY                   Digital Accessible Information System

DRD                       Digital Resource Description

1EdTech            1EdTech Consortium Inc.

ISO                         International Organization for Standardization

MSAA                   Microsoft Active Accessibility

NIMAS                  National Instructional Materials Accessibility Standard

OEBPS                   Open eBook Publication Structure

PDF                        Portable Document Format

PEF                         Portable Embosser Format

PNP                        Personal Needs & Preferences

UML                       Unified Modeling Language

URI                         Uniform Resource Identifier

W3C                       World Wide Web Consortium

WAI                       Web Accessibility Initiative

WCAG                   Web Content Accessibility Guidelines

XML                       Extensible Markup Language

1.5            References

[AfA, 12]                       1EdTech AccessForAll Digital Overview v3.0, R.Schwerdtfeger, M.Rothberg and C.Smythe, Public Draft Release, 1EdTech Inc., September 2012.

[AfADES, 12]                1EdTech AccessForAll Data Element Specification v3.0, R.Schwerdtfeger, M.Rothberg and C.Smythe, Public Draft Release, 1EdTech Inc., September 2012.

[AfADRD, 12]               1EdTech AccessForAll Digital Resource Description Information Model v3.0, R.Schwerdtfeger, M.Rothberg and C.Smythe, Public Draft Release, 1EdTech Inc., September 2012.

[AfAPNP, 12]                1EdTech AccessForAll Personal Needs & Preferences Information Model v3.0, R.Schwerdtfeger, M.Rothberg and C.Smythe, Public Draft Release, 1EdTech Inc., September 2012.

[ISO639, 98]                   ISO 639-2:1998 (E/F), Codes for the representation of names of languages — Part 2: Alpha-3 code/Codes pour la représentation des noms de langue — Partie 2: Code alpha-3.

[RFC4646]                      RFC 4646: Tags for identifying Languages, A.Phillips and M.Davis, The Internet Society, September 2006.

[WCAG2, 08]                W3C/WAI Web Content Accessibility Guidelines (WCAG) 2.0, W3C Recommendation, W3C, December 2008
{http://www.w3.org/TR/2008/REC-WCAG20-20081211/}.

2                  The Basic AccessForAll Scenario

The core concept around which AccessForAll is based is that of “access mode.” Here, an “access mode” is the term used to describe the human sensory perceptual system or cognitive faculty through which a person may process or perceive information.  Digital media, when delivered to the user, has one or more specific access modes such as visual, auditory, textual or tactile.

In any given context, a user may prefer that a resource be delivered in, or augmented by, an alternative modality. The context may be based on user preferences, device capabilities, and/or the environment. AccessForAll defines properties for describing the access modes present in a resource (accessMode) and for describing the user’s preferences for access mode delivery (accessModeRequired). These properties enable systems to deliver media that matches the user’s modality preferences.

Beyond simple access modes, there are commonly accepted terms used to describe media adaptations in accessibility communities, such as captions, transcripts, and image descriptions. We call these Adaptation Types, and AfAv3.0 supports specifying replacements or augmentations for Access Modes using these terms. For example, a caption for a video is a textual alternative to the auditory mode of the video, but it is a particular type of text: a caption. This differs from a transcript, for example.

Automatically delivering replacements or augmentations based on these properties requires knowledge of the relationships between particular adaptation types, the Access Modes they possess and those they replace, and the media in which they are embodied. This version of the specification begins the process of making that knowledge explicit and precise, and the authors expect the specification to be more complete in the final version. This is discussed further in Section 6 and Appendix B.

An important principle of this metadata specification is that where there is an adaptation for a resource, the access mode of the adaptation must be described in a separate metadata instance from the resource it adapts, whether or not that adaptation is external to the resource or part of it. We do not at this stage recommend a specific mechanism for that but may do so in the final specification.

 

3                  Documents in this Specification

The set of documents that make up AccessForAll version 3.0 are listed in Table 3.1:

Table 3.1 List of AfAv3.0 documents.

1EdTech AccessForAll Personal Needs and Preferences (PNP) v3.0 Information Model
    Definition of the AccessForAll information model for describing a person’s requirements for accessibility.

1EdTech AccessForAll Digital Resource Description (DRD) v3.0 Information Model
    Definition of the AccessForAll information model for describing a resource. This model is used to define metadata about a resource that will be used to match resources to a person’s requirements for accessibility.

1EdTech AccessForAll 3.0 XSDs
    The set of XSDs, including those for the core, used to validate DRD and PNP XML instances.

1EdTech AccessForAll 3.0 Core Profiles
    Definition of the DRD and PNP Core Profiles for use with AfA.

1EdTech AccessForAll 3.0 Data Element Specification (DES)
    Definition of the AccessForAll Data Element Specifications. The DES consists of definitions for AccessForAll vocabulary data types and the application of specific values to those data types.

1EdTech AccessForAll v3.0 Overview
    An overview of AccessForAll v3.0 covering the design goals, a history of previous work, and the document set that defines AccessForAll v3.0.

1EdTech AccessForAll v3.0 Best Practices and Implementation Guide (BPIG) (this document)
    Defines best practices for authors looking to apply AccessForAll v3.0 to resources, preferences systems, and content matching systems. It also provides guidance on how to extend the AccessForAll model.

 

The namespaces used in the AfAv3.0 full specifications are:

The namespaces used in the AfAv3.0 Core Profile are:

 

4                  Models & Notations

The Information Models that underlie the AfA specification are independent of any particular representation or implementation technology. Many representations for the information and bindings to technologies are possible, each providing unique opportunities. This specification uses a 1EdTech Profile of the Unified Modeling Language (UML) representation bound in implementation by an Extensible Markup Language (XML) Schema. XML is well suited to building implementations with known behavior using well-tested, current-day technologies. The technology environment is increasingly focused on web-based applications, and so we also intend to produce an HTML+RDFa web representation of Digital Resource Descriptions.

In our examples, we use a form of pseudo-code similar to property list notation. For example, a DRD instance might contain the information:

 

             

              accessMode = textual

              accessModeAdapted = visual

              isAdaptationOf = resourceID5

 

 

These should be read as:

         ‘accessMode’ has a value of textual;
         and ‘accessModeAdapted’ has a value of visual;
         and ‘isAdaptationOf’ has a value of resourceID5.

We use this notation for its flexibility. For example, in a transaction-based system implementation, these instances could represent records. In an object-oriented system, they could represent attributes of objects. In XML this example would be coded as:

 

 

<?xml version="1.0" encoding="UTF-8"?>

<accessForAllResource xmlns="…AfA_DRD_Namespace…"

   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

   xsi:schemaLocation="…AfA_DRD_Namespace… imsafav3p0drd_v1p0.xsd">

  

  <accessMode>textual</accessMode>

  <accessModeAdapted>visual</accessModeAdapted>

  <isAdaptationOf>resourceID5</isAdaptationOf>

 

</accessForAllResource>

 

 

In practice there would be many more attributes and the ‘isAdaptationOf’ identifier would be a URL.

 

5                  Extensibility

Organizations may need to extend AfAv3.0 to solve problems unique to their implementation. Extension mechanisms are included in the specification.

The 1EdTech AfAv3.0 Core Profiles use schemas to define a core set of properties, vocabularies and terms. Our purpose in defining the 1EdTech AfAv3.0 Core Profile is to provide an absolute minimum set of properties and terms around which conformance can be built. With this in mind, we do not recommend extensions to the core schemas – if additional properties are required we recommend building schemas by defining a Profile using properties and terms from the full schema.

A human-readable description of the terms and vocabularies defined in the schemas is given in the Data Element Specifications document [AfADES, 12].

Terms are constrained to be only those defined in the schemas or additional terms having the artificial prefix ext:. For example, the extra term “music” could be used in an instance like this:

 

 

<accessMode>ext:music</accessMode>

 

 

Extra terms in this form can be freely added to any instance and used as part of any vocabulary without restriction and are not validated by the 1EdTech schemas except for their form.  Extra properties can be added to the full model by defining them in local schemas using the usual XML namespacing mechanisms to include the 1EdTech schemas. There are several different ways to do this depending on local requirements and tools. For example, a schema that adds a property for required emotional style might be done like this:


 

 

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns="http://mylocalschema" targetNamespace="http://mylocalschema"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:afa="http://www.imsglobal.org/xsd/accessibility/afapnpv3p0/"
    elementFormDefault="unqualified"
    attributeFormDefault="unqualified">

    <xs:import namespace="http://www.imsglobal.org/xsd/accessibility/afapnpv3p0/"
        schemaLocation="imsafa3p0pnp_v1p0.xsd"/>

    <!-- Illustrative wrapper element; the name is arbitrary. -->
    <xs:element name="localPreferences">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="requiredEmotionalStyle" minOccurs="0"
                     maxOccurs="1">
                    <xs:simpleType>
                        <xs:union>
                            <xs:simpleType>
                                <xs:restriction base="xs:string">
                                    <xs:enumeration value="myterms-extravert"/>
                                    <xs:enumeration value="myterms-introvert"/>
                                    <xs:enumeration value="myterms-sensing"/>
                                    <xs:enumeration value="myterms-intuitive"/>
                                    <xs:enumeration value="myterms-thinking"/>
                                    <xs:enumeration value="myterms-feeling"/>
                                </xs:restriction>
                            </xs:simpleType>
                            <xs:simpleType>
                                <xs:restriction base="afa:ExtensionString.Type"/>
                            </xs:simpleType>
                        </xs:union>
                    </xs:simpleType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>

</xs:schema>
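The form constraint on ‘ext:’ terms can also be checked outside the schemas. The following Python sketch is not part of the specification; the vocabulary set and the regular expression are assumptions chosen for illustration.

```python
import re

# Illustrative check of the extension-term form described above: a term
# is either drawn from a known vocabulary or carries the artificial
# 'ext:' prefix. The vocabulary set and regex are assumptions.

CORE_ACCESS_MODES = {"textual", "visual", "auditory", "tactile", "olfactory"}
EXT_TERM = re.compile(r"^ext:\S+$")

def valid_term(term, vocabulary):
    """A term is valid if it is in the vocabulary or has the ext: form."""
    return term in vocabulary or bool(EXT_TERM.match(term))

print(valid_term("ext:music", CORE_ACCESS_MODES))  # True
print(valid_term("music", CORE_ACCESS_MODES))      # False
```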

 

 

6                  Refinement

In this specification, we are describing refinement relationships between some terms and properties informally. Organizations building implementations may wish to represent this knowledge in different ways – for some it may be implicit in the implementation while others may model the information formally, for example in an ontology. Community feedback on the appropriate representation for knowledge like this is sought.

These refinement relationships are similar, but not identical, to the element refinement relationship used by Dublin Core. The refinement relationship in AfA is:

 

If B refines A then B is a more precise or more specialized form of A.

 

In practice, this means that something having characteristic B also has characteristic A. This implication can be used to help select a resource that best matches a user’s preference from among many available resources.
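The refinement implication can be represented directly in an implementation. The following Python sketch encodes the refinement relationships described in this section as a simple lookup table; the data structure and function names are illustrative assumptions, not part of the specification.

```python
# Hypothetical encoding of the AfA refinement rule "if B refines A, then
# something having characteristic B also has characteristic A".
# The term names follow the specification; the table is illustrative.

REFINES = {
    # AccessModeValue refinements of 'visual' (Section 6.1)
    "colour": "visual",
    "textOnImage": "visual",
    "itemSize": "visual",
    "orientation": "visual",
    "position": "visual",
    # Property refinements (Sections 6.2-6.4)
    "isFullAdaptationOf": "isAdaptationOf",
    "isPartialAdaptationOf": "isAdaptationOf",
    "apiInteroperable": "atInteroperable",
    "educationalLevelOfAdaptation": "educationalComplexityOfAdaptation",
}

def implies(b, a):
    """True if having characteristic b implies having characteristic a."""
    while b is not None:
        if b == a:
            return True
        b = REFINES.get(b)  # walk up the refinement chain
    return False

print(implies("colour", "visual"))   # True: a colour resource is also visual
print(implies("visual", "colour"))   # False: the implication does not reverse
```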

This section discusses these refinements, explaining how they can be interpreted.

6.1            Refining Visual with More Detailed Access Modes

Figure 6.1 shows the vocabulary terms for ‘AccessModeValue’. The diagram illustrates that the term ‘visual’ is refined by several terms: ‘colour’, ‘textOnImage’, ‘itemSize’, ‘orientation’ and ‘position’.

 

AccessModeValue terms are: textual, visual, auditory, tactile and olfactory. Olfactory is outside the Core. Refinements of visual are colour and textOnImage (Core) and itemSize, orientation and position (non-Core).

Figure 6.1 AccessForAll 3.0 'AccessModeValue' Terms

Use of the term visual means that a resource has visual content, i.e., content that requires the sense of vision. The refinement terms, because they refine visual, imply that the resource has visual content while providing more specific information about the nature of the visual content. A matching algorithm can use the implication to help select resources when ideal matches are not available.

6.1.1           Example

Suppose a resource depends upon the use of colour for its meaning (such as a lesson about how to use traffic lights). That resource might have a DRD containing:

 

 

accessMode = colour

 

 

This means

                “This resource uses colour to convey information.”

 

If a user of the resource is unable to perceive colour then they may have a PNP containing:

 

 

accessModeRequired.existingAccessMode = colour

accessModeRequired.adaptationRequested = textual

 

 

This describes the relationship between these two access modes, and means:

                “I would like textual alternatives to information conveyed using colour.”

 

Ideally, an alternative would be available that supplements the colour information using a textual access mode. Such a resource would have a DRD containing

 

 

accessMode = textual

accessModeAdapted = colour

 

 

This means

   “This resource has textual content which adapts the colour-specific content of the original resource.”

 

Consider the case where this ideal match is not available, but an adaptation that provides a textual alternative to visual content is available. This resource would have a DRD containing:

 

 

accessMode = textual

accessModeAdapted = visual

 

 

This means

   “This resource has textual content which adapts the visual content of the original resource.”

In this case, the user has requested an alternative to colour while the resource offers an alternative to visual. The matching system can use the fact that colour refines visual to infer that the resource could be suitable for this user: if the textual content completely describes the visual content, it should describe the colours as part of that.

An alternative for visual might not match the user’s needs as well as an alternative for colour. A user who is colour-blind might need only a short piece of text explaining that the traffic light has red on top, yellow in the middle, and green on the bottom, and noting which of the bulbs is lit in this image. A replacement for the complete visual content might include a description of the size and shape of the traffic light and the layout of the cars at the intersection pictured. This requires that the user read or listen to more material than necessary to find out what they need to know.

Nonetheless, implementations will better serve users by offering the non-ideal alternatives than by simply delivering the original, inaccessible resource.

 

6.2            Refining ‘isAdaptationOf’ with ‘isFullAdaptationOf’ and ‘isPartialAdaptationOf’

Figure 6.2 illustrates the properties of the DRD, including the relationship between ‘isAdaptationOf’ and its refinements, ‘isPartialAdaptationOf’ and ‘isFullAdaptationOf.’

 

The complete list of DRD properties is in the DRD Information Model document. Refinement relationships are described in this section.

Figure 6.2 AccessForAll 3.0 Digital Resource Description Properties

These refinements allow metadata editors to provide specific detail about the extent to which a resource adapts another. A “partial adaptation” is a resource that adapts only a part of another resource. Captions are a good example of this: a caption provides a text transcript of the audio content of a video, but does not provide any adaptation for the visual content of the video. A partial adaptation cannot be substituted for the original; it must be used to supplement the original.

A “full adaptation” is a resource that adapts all of the content of another resource. A screenplay, for example, could be a full adaptation of a video if it contains descriptions of the visual elements such as scenes, action, etc. A full adaptation can be substituted for the original resource.

When matching resources to a user’s preferences, an implementation must use this understanding of the meaning of these refinements to determine how to present the alternative resources.

6.3            Refining ‘atInteroperable’ with ‘apiInteroperable’

Figure 6.2 indicates that the property ‘atInteroperable’ has the refinement ‘apiInteroperable.’  ‘atInteroperable’ is used to indicate whether or not a resource is compatible with assistive technologies; it has a Boolean value (true or false).

The refinement, ‘apiInteroperable,’ provides metadata cataloguers the opportunity to provide more specific information about exactly what accessibility APIs the resource is compatible with.  If the ‘apiInteroperable’ property is present, an implementation can infer that the resource is compatible with assistive technologies on the platform on which it is supported. This means that the implementation can infer that ‘atInteroperable’ is true.
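The inference described above can be sketched in a few lines; the dictionary-based DRD layout and function name are illustrative assumptions, not part of the specification.

```python
# Illustrative inference: the presence of 'apiInteroperable' metadata
# implies that 'atInteroperable' is true, because the former refines
# the latter. The DRD-as-dictionary layout is an assumption.

def at_interoperable(drd):
    """Return the (possibly inferred) Boolean value of 'atInteroperable'."""
    if "atInteroperable" in drd:
        return drd["atInteroperable"]
    # 'apiInteroperable' refines 'atInteroperable': its presence implies true.
    return bool(drd.get("apiInteroperable"))

print(at_interoperable({"apiInteroperable": ["MSAA", "ATK"]}))  # True
```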

6.4            Refining ‘educationalComplexityOfAdaptation’ with ‘educationalLevelOfAdaptation’

Figure 6.2 indicates that the property ‘educationalComplexityOfAdaptation’ has the refinement ‘educationalLevelOfAdaptation.’  ‘educationalComplexityOfAdaptation’ is used when the adaptation presents the content in either a simplified or enhanced form as compared to the original resource. The vocabulary for this property contains only those two values: simplified and enhanced.

The refinement ‘educationalLevelOfAdaptation’ provides metadata cataloguers the opportunity to provide more fine-grained information about educational level of the adaptation. Cataloguers are expected to use vocabularies drawn from the context of the implementation, for example, the ASN Educational Level Vocabulary[2].

By comparing the ‘educationalLevelOfAdaptation’ of the adaptation to the educational level of the original resource, a matching algorithm can infer whether the resource presents the content in a simplified or enhanced form. This technique can be used to respond to the preferences of a user who has only specified the ‘educationalComplexityOfAdaptation’ required (i.e. the less fine-grained property).
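This comparison technique can be sketched in Python. The ordered level vocabulary below is purely an illustrative assumption; real implementations would draw levels from their own context, such as the ASN Educational Level Vocabulary.

```python
# Hypothetical sketch of inferring 'educationalComplexityOfAdaptation'
# by comparing educational levels. The ordered level list is an
# assumption for illustration only.

LEVEL_ORDER = ["primary", "secondary", "undergraduate", "graduate"]

def inferred_complexity(adaptation_level, original_level):
    """Return 'simplified', 'enhanced', or None if the levels are equal."""
    a = LEVEL_ORDER.index(adaptation_level)
    o = LEVEL_ORDER.index(original_level)
    if a < o:
        return "simplified"
    if a > o:
        return "enhanced"
    return None

print(inferred_complexity("primary", "secondary"))  # simplified
```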

 

7                  Matching Preferences to Resources

This section of the Best Practices document describes several basic scenarios of metadata and user preferences, exploring in detail how the matching process might proceed. The goal of this section is to help the reader understand

  • The relationships between related metadata properties, and between metadata and preferences;
  • The process of interpreting these relationships to identify resources that match user preferences.

In order to focus on the matching process, the examples in this section are drawn from basic, simple scenarios. Less common scenarios (such as modeling the usage context) are explored in Appendix B. The editors anticipate developing those scenarios more fully between this draft and the final release of the specification.

7.1            Matching Based on ‘accessMode’

As described in Section 2, content, when delivered to the user, has one or more specific access modes. This scenario describes the use of ‘accessMode’ to determine what should or can be delivered to a user. Consider the following scenario:

 

Resource: resourceID1 – a resource with a visual access mode. The DRD for the resource would include[3]:

 

 

accessMode = visual

 

 

Adaptation: resourceID2 – a resource that is an adaptation of the first resource, providing a textual alternative to the visual content in the first resource. The DRD for the resource would include:

 

 

accessMode = textual

accessModeAdapted = visual

isAdaptationOf = resourceID1

 

 

User Preference: a preference that textual content be delivered instead of visual. This would be expressed in the user’s PNP as:

 

 

accessModeRequired.existingAccessMode = visual

accessModeRequired.adaptationRequested = textual

 

 

In this example, resourceID2 would be a suitable adaptation of resourceID1 for this user, based on the user's PNP. Matching and delivery would proceed as follows:

 

  1. The system discovers (from the resource metadata) that the original resource to be delivered (resourceID1) has visual content;
  2. The system considers the user’s preferences and discovers that for visual, the user prefers textual;
  3. The system identifies adaptations for the resource. In some cases the DRD for the resource would provide a reference to one or more alternative resources (‘hasAdaptation’). In this case, it does not, so the system must search metadata records to find resources that indicate they provide an alternative to resourceID1 (‘isAdaptationOf’). Such a search would find the resource with identifier resourceID2 (and possibly others);
  4. The system considers whether the ‘accessMode’ of the identified adaptation provides a textual alternative to visual. In this case, it does: it can be delivered to the user. (If more than one adaptation matches, it is not specified which is to be delivered);
  5. If no matching adaptations had been found, the system might then have considered whether the requested modality could be generated through an automated transformation or a human-mediated service.
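The matching steps above can be sketched as follows, under the assumption that DRDs and PNP preferences are held as plain dictionaries; the data layout and helper name are illustrative, not defined by the specification.

```python
# Minimal sketch of the accessMode matching procedure described above.
# DRDs are modelled as dictionaries keyed by resource identifier; this
# layout is an assumption for illustration only.

drds = {
    "resourceID1": {"accessMode": ["visual"]},
    "resourceID2": {"accessMode": ["textual"],
                    "accessModeAdapted": ["visual"],
                    "isAdaptationOf": "resourceID1"},
}

pnp = [{"existingAccessMode": "visual", "adaptationRequested": "textual"}]

def find_adaptation(resource_id, drds, pnp):
    original = drds[resource_id]
    for pref in pnp:
        # Steps 1-2: does the original carry an access mode the user
        # wants replaced?
        if pref["existingAccessMode"] not in original.get("accessMode", []):
            continue
        # Step 3: search metadata records for adaptations of this resource.
        for rid, drd in drds.items():
            if drd.get("isAdaptationOf") != resource_id:
                continue
            # Step 4: does the adaptation supply the requested access mode?
            if (pref["adaptationRequested"] in drd.get("accessMode", []) and
                    pref["existingAccessMode"] in drd.get("accessModeAdapted", [])):
                return rid
    # Step 5: fall back to transformation services (not modelled here).
    return None

print(find_adaptation("resourceID1", drds, pnp))  # resourceID2
```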

7.2            Matching Based on ‘adaptationType’

As described in Section 2, AccessForAll supports the use of the ‘adaptationType’ property to provide more specific information about an adaptation than can be conveyed with ‘accessMode’ alone. Information about the adaptation type can be useful even when the resource metadata does not include ‘accessMode’. This scenario describes the use of ‘adaptationType’ to determine what should or can be delivered to a user.

Most adaptation types imply information about the access mode of the adaptation itself and about the access mode being adapted.  The definition of each ‘adaptationType’ incorporates these implications and can be useful for matching in cases when direct access mode information is not provided. Table 7.1 provides a summary of these definitions, with notes on best practices for their use. This table should be used as a guide by:

  • Developers of personalized content matching systems;
  • Individuals who maintain resource metadata repositories.
Table 7.1 ‘adaptationType’ matching.

adaptationType      accessMode                accessModeAdapted           Notes
alternativeText     textual                   visual                      Short text.
audioDescription    auditory                  visual
captions            textual                   auditory                    When audio and visual are synchronized together.
e-book              textual and/or auditory   visual                      Has sub-types in ‘adaptationMediaType’.
haptic              tactile                   visual, auditory, textual
highContrast        visual                    visual                      Fixed visual alternative.
longDescription     textual                   visual                      Longer text.
signLanguage        visual                    auditory
transcript          textual                   auditory                    When audio is presented without visual.


Consider the first row of Table 7.1:

 

adaptationType      accessMode                accessModeAdapted           Notes
alternativeText     textual                   visual                      Short text.

 

The row should be interpreted to mean:
 

A resource with an ‘adaptationType’ of alternativeText provides a textual alternative to the visual content in the adapted resource.

 

Consider the following scenario:

Resource: resourceID3 – a resource with a visual access mode. The DRD for the resource would include:

 

 

accessMode = visual

 

Adaptation: resourceID4 – a resource that is an adaptation of the first resource, providing alternative text for content in the first resource. The DRD for the resource would include:

 

         

adaptationType = alternativeText

isAdaptationOf = resourceID3

 

User Preference: a preference that textual content be delivered instead of visual. This would be expressed in the user’s PNP as:

 

 

accessModeRequired.existingAccessMode = visual

accessModeRequired.adaptationRequested = textual

 

 

In this example, “resourceID4” would be a suitable adaptation of “resourceID3” for this user, based on the characteristics of alternativeText defined in the table. Matching and delivery would proceed as follows:

  1. The system discovers (from the resource metadata) that the original resource to be delivered (“resourceID3”) has visual content;
  2. The system considers the user’s preferences and discovers that for visual the user prefers textual;
  3. The system identifies adaptations for the resource, including “resourceID4”;
  4. The metadata for “resourceID4” does not include the ‘accessMode’ property nor the ‘accessModeAdapted’ property, so the system examines the ‘adaptationType’ property;
  5. Based on Table 7.1 above, the system knows that ‘alternativeText’ replaces the ‘accessMode’ visual with textual. This matches the user's preference, so resourceID4 can be delivered to the user.
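The table lookup in the last two steps can be sketched as follows. The dictionary encodes a subset of Table 7.1; the helper name and DRD layout are illustrative assumptions, not part of the specification.

```python
# Illustrative use of the implied access modes from Table 7.1 when an
# adaptation's DRD carries only 'adaptationType'. Only a subset of the
# table is encoded here.

ADAPTATION_TYPE_MODES = {
    # adaptationType: (accessMode supplied, accessMode adapted)
    "alternativeText": ("textual", "visual"),
    "audioDescription": ("auditory", "visual"),
    "captions": ("textual", "auditory"),
    "longDescription": ("textual", "visual"),
    "signLanguage": ("visual", "auditory"),
    "transcript": ("textual", "auditory"),
}

def satisfies_preference(drd, existing, requested):
    """Does this adaptation DRD replace `existing` with `requested`?"""
    # Prefer explicit access mode metadata when present.
    if "accessMode" in drd and "accessModeAdapted" in drd:
        return (requested in drd["accessMode"] and
                existing in drd["accessModeAdapted"])
    # Otherwise fall back to the modes implied by the adaptation type.
    implied = ADAPTATION_TYPE_MODES.get(drd.get("adaptationType"))
    return implied == (requested, existing)

drd4 = {"adaptationType": "alternativeText", "isAdaptationOf": "resourceID3"}
print(satisfies_preference(drd4, existing="visual", requested="textual"))  # True
```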

These examples are necessarily simple. In reality, scenarios are likely to be more complex; several resources may supply the required modality, or the delivery context (what devices and assistive technology are available) may bear on the matching process. Appendix B provides more complicated examples with discussion of how matching can be done.

7.3            Use of ‘hasAdaptation’ and ‘isAdaptationOf’

AccessForAll 3.0 offers two ways to associate resources with adaptations: one recorded in the DRD of the original resource, and one recorded in the DRD of the adaptation.

 

7.3.1           hasAdaptation

If a metadata author is aware of an alternative at the time the metadata for a resource is authored, they can include the property ‘hasAdaptation.’ For example:

Resource: resourceID5 – a resource with a visual access mode. The metadata author knows that resourceID6 provides an adaptation. The DRD for the resource would include:

 

 

accessMode = visual

hasAdaptation = resourceID6

 

7.3.2           isAdaptationOf

If the alternative is created later, or if a later metadata author wishes to indicate that an independent resource can serve as a useful alternative, then the DRD of the alternative can include the attribute ‘isAdaptationOf.’ For example:

Adaptation: resourceID6 – a resource suitable for use as a textual alternative to the visual content in resourceID5. Its DRD would include:

 

 

accessMode = textual

accessModeAdapted = visual

isAdaptationOf = resourceID5

 

Note that while ‘hasAdaptation’ and ‘isAdaptationOf’ are alternative ways of conveying the same relationship and can be used independently, it is perfectly acceptable to use both properties in the same implementation.
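Collecting candidates from both directions of the relationship can be sketched as follows; the record layout and function name are illustrative assumptions, not part of the specification.

```python
# Illustrative lookup combining both properties described above:
# 'hasAdaptation' recorded on the original resource and 'isAdaptationOf'
# recorded on the adaptation. DRDs are modelled as dictionaries.

def adaptations_of(resource_id, drd_records):
    candidates = set(drd_records.get(resource_id, {}).get("hasAdaptation", []))
    for rid, drd in drd_records.items():
        if drd.get("isAdaptationOf") == resource_id:
            candidates.add(rid)
    return candidates

drd_records = {
    "resourceID5": {"accessMode": ["visual"], "hasAdaptation": ["resourceID6"]},
    "resourceID6": {"accessMode": ["textual"],
                    "accessModeAdapted": ["visual"],
                    "isAdaptationOf": "resourceID5"},
}

print(adaptations_of("resourceID5", drd_records))  # {'resourceID6'}
```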

7.4            Internal and External Adaptations

In some cases, adaptations may be provided within the media file of the original resource. In other cases, adaptations will exist in a separate file. Which approach is used may depend on the media format involved, the workflow for authoring adaptations, or even the ownership of the materials.  In either case, the DRD describing the adaptation should be stored in a separate metadata record from the DRD describing the original resource.

7.4.1           Example: Internal Adaptations

Consider a media library that decides to provide closed captions for all videos in its collection by embedding the captions in the video format. The captions would be turned on and off by the system or by users with a control on the player interface. In this case, each resource would have two DRD records: one describing the video, and one describing the caption.


Original resource with adaptation included: resourceID7 – the DRD describing the original resource would include:

 

 

accessMode = visual

accessMode = auditory

hasAdaptation = resourceID7

 

A separate DRD describing the caption would include:

 

 

isAdaptationOf = resourceID7

adaptationType = captions

 

Things to note in this example:

  1. The first DRD, which describes resourceID7, references itself in the ‘hasAdaptation’ property, because the caption is embedded within the resource it adapts;
  2. The second DRD, which describes the caption embedded within resourceID7, likewise references resourceID7 (the resource that contains it) in the ‘isAdaptationOf’ property.

7.4.2           Example: External Adaptations

Some workflows or media types do not support including adaptations within the original resource. For example, if an organization is producing alternative materials for resources available on the public web, they may not be able to alter either the original resource or its metadata. In that case, the DRD for the adaptation might be the only source of accessibility information. It might have a DRD like this:

Adaptation separate from resource: resourceID10

isAdaptationOf = resourceID9
adaptationType = longDescription

In this case, the DRD for resourceID9 would not contain any reference to the adaptation, resourceID10.
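In this external case, a system can still discover the adaptation by searching other records’ ‘isAdaptationOf’ properties. A hedged sketch; the catalogue structure is illustrative, not part of the specification:

```python
# A small catalogue of DRD records keyed by an identifier.
# AfA does not prescribe a storage model; this dict is only an assumption.
catalogue = {
    "resourceID9": {"accessMode": ["visual"]},  # original: no hasAdaptation
    "resourceID10": {
        "isAdaptationOf": ["resourceID9"],
        "adaptationType": "longDescription",
    },
}

def find_adaptations(catalogue, original_id):
    """Reverse lookup: collect the IDs of records that declare themselves
    adaptations of the given resource via 'isAdaptationOf'."""
    return [rid for rid, drd in catalogue.items()
            if original_id in drd.get("isAdaptationOf", [])]
```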

 

8                  Examples of AccessForAll Use in Systems

The following examples show AccessForAll being used. In the absence of documented implementations of this first draft of AfAv3.0, examples are drawn from implementations of AfAv2.0 and earlier versions; the principles of use are the same, and the examples have been updated to use the AfAv3.0 information model. The examples show both how preferences might be entered using a wizard and the effects of those preferences on the content delivered.

The examples are:

  • Visual Impairment/Low Vision Adaptation in an LMS;
  • Colour Blindness Adaptation in a Digital Library;
  • Hearing Impairment Adaptation in an LMS;
  • User Interface Example.

8.1            Visual Impairment/Low Vision Adaptation in an LMS

The following example[4] comes from The Inclusive Learning Exchange system (TILE) developed by the Adaptive Technology Resource Centre, University of Toronto.

A learner with a visual impairment requiring the use of a screen reader is studying a course on Globalization and International Migration containing an illustration of the concepts of restricted migration (shown in Figure 8.1).

 

Diagram of immigration restrictions which is described in the alt text for Figure 8.3

Figure 8.1 TILE screenshot of resource with text and Flash animation.

The learner has a Personal Needs and Preferences (PNP) profile that defines his/her preferences relating to vision requirements. In this profile, the learner identifies a need for alternatives to visual content. The learner has requested audio descriptions, short alternative text and long descriptions as his/her preferred adaptation methods for visual content (PNP settings shown below in Figure 8.2).

 

The TILE user interface, which allows users to indicate that they wish to use audio description or text as alternatives to visual resources. Audio description can be selected in five languages. Text can be short description or long description.

Figure 8.2 TILE screenshot of Alternatives to Visual preference editing

A diagram shows three entities: "Less Developed Country" (on the left), "International Migration Policies" (in the middle), and "More Developed Country" on the right. More Developed Country is depicted with two employment categories. The major portion is called "Legal Employment". The second smaller portion is labeled "Sweatshops, Illegal Employment, etc.". There is an arrow depicting the flow of workers from the Less Developed Country to the More Developed Country. This arrow splits into two as it travels towards More Developed Country. One arrow, labeled "Skilled Worker", intersects the "Legal Employment" portion of the More Developed Country. The second arrow, labeled "Unskilled Workers", intersects International Migration Policies. From International Migration Policies, a portion of the unskilled workers flows back to their origin - the Less Developed Country, while another portion of the unskilled workers flows from International Migration Policies to the "Sweatshops, Illegal Employment, etc." portion of the More Developed Country. Source: Created by NEXTMOVE Inc. for Alan Simmons

Figure 8.3 TILE screenshot of resource with Flash animation substituted with text alternative

To accommodate this learner, the original resource (shown above in Figure 8.1) needs to be replaced by an adaptation. When the learner requests to view the course on Globalization and International Migration containing the original image, the system recognizes that the learner requires alternatives, discovers that a resource with the appropriate characteristics exists and is able to substitute the original with a long description for the image (shown below in Figure 8.3).

To properly replace the original resource with an appropriate alternative for the learner, metadata describing the learner’s preferences and both the original and adapted resources needs to be correctly defined. Based on the learner’s preferences for textual adaptations to visual content, the following metadata might be included in their Personal Needs and Preferences (PNP) file:

accessModeRequired.existingAccessMode = visual
accessModeRequired.adaptationRequested = textual

adaptationTypeRequired.existingAccessMode = visual
adaptationTypeRequired.adaptationRequested = audioDescription

adaptationTypeRequired.existingAccessMode = visual
adaptationTypeRequired.adaptationRequested = alternativeText

adaptationTypeRequired.existingAccessMode = visual
adaptationTypeRequired.adaptationRequested = longDescription

The equivalent AfA PNP XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllUser xmlns="…/imsafa3p0pnp_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafa3p0pnp_v1p0 imsafa3p0pnp_v1p0.xsd">
   <accessModeRequired>
      <existingAccessMode>visual</existingAccessMode>
      <adaptationRequest>textual</adaptationRequest>
   </accessModeRequired>
   <adaptationTypeRequired>
      <existingAccessMode>visual</existingAccessMode>
      <adaptationRequest>audioDescription</adaptationRequest>
   </adaptationTypeRequired>
   <adaptationTypeRequired>
      <existingAccessMode>visual</existingAccessMode>
      <adaptationRequest>alternativeText</adaptationRequest>
   </adaptationTypeRequired>
   <adaptationTypeRequired>
      <existingAccessMode>visual</existingAccessMode>
      <adaptationRequest>longDescription</adaptationRequest>
   </adaptationTypeRequired>
</accessForAllUser>

 

 

When the learner accesses the Globalization and International Migration course, the system will check the learner’s profile and discover that they require alternatives to visual content. It will then check the resource the learner is requesting (the original image) to see whether it has a visual access mode. As this resource is visual, the system will look for alternatives that match the learner’s preferences.

The original image in the Globalization and International Migration course has the following metadata defined for it:

accessMode = textOnImage[5]
hasAdaptation = someAdaptationSet#45678

The equivalent AfA DRD XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllResource xmlns="…/imsafav3p0drd_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafav3p0drd_v1p0 imsafav3p0drd_v1p0.xsd">
   <accessMode>textOnImage</accessMode>
   <hasAdaptation>someAdaptationSet#45678</hasAdaptation>
</accessForAllResource>

 

As the metadata of the original resource references an alternative, the system will review the metadata of the alternative to determine if it meets the needs of the learner.

The following metadata is provided for the adapted resource:

isAdaptationOf = someObjectSet#12345
accessMode = textual
accessModeAdapted = textOnImage
adaptationType = longDescription

The equivalent AfA DRD XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllResource xmlns="…/imsafav3p0drd_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafav3p0drd_v1p0 imsafav3p0drd_v1p0.xsd">
   <accessMode>textual</accessMode>
   <accessModeAdapted>textOnImage</accessModeAdapted>
   <adaptationType>longDescription</adaptationType>
   <isAdaptationOf>someObjectSet#12345</isAdaptationOf>
</accessForAllResource>

 

Based on the metadata of the adaptation, the system is able to determine that it is an appropriate alternative to the original image and displays it to the learner in the Globalization and International Migration course.
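The decision in this example can be sketched as a predicate over the PNP and the adaptation’s DRD. The dictionary field names mirror the XML instances above; the function itself is only one assumed way an implementation might test a match, not a defined AfA algorithm:

```python
# PNP from the example: the learner wants textual adaptations of visual
# content and lists longDescription among the acceptable adaptation types.
pnp = {
    "accessModeRequired": [
        {"existingAccessMode": "visual", "adaptationRequest": "textual"},
    ],
    "adaptationTypeRequired": [
        {"existingAccessMode": "visual", "adaptationRequest": "audioDescription"},
        {"existingAccessMode": "visual", "adaptationRequest": "alternativeText"},
        {"existingAccessMode": "visual", "adaptationRequest": "longDescription"},
    ],
}

# DRD for the long-description adaptation from the example.
adaptation = {
    "accessMode": "textual",
    "accessModeAdapted": "textOnImage",
    "adaptationType": "longDescription",
}

def matches(pnp, adaptation):
    """True when the adaptation supplies a requested access mode and one of
    the requested adaptation types."""
    mode_ok = any(req["adaptationRequest"] == adaptation["accessMode"]
                  for req in pnp["accessModeRequired"])
    type_ok = any(req["adaptationRequest"] == adaptation["adaptationType"]
                  for req in pnp["adaptationTypeRequired"])
    return mode_ok and type_ok
```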

8.2            Colour Blindness Adaptation in a Digital Library

This example describes the implementation of AccessForAll in Teachers' Domain, a digital library of educational multimedia materials created by WGBH Educational Foundation. AccessForAll implementation in Teachers' Domain was supported by U. S. National Science Foundation funding to the WGBH National Center for Accessible Media. Approximately seven percent of men and boys are unable to distinguish red and green, or see red and green differently than others.[6] While students with colour blindness can use most visual materials normally, they will struggle to understand images that rely on the use of red or green to code information. A learner with this problem might not consider themselves disabled, and might not use assistive technology, but they might have a PNP that includes the following:

 

 

accessModeRequired.existingAccessMode = colour
accessModeRequired.adaptationRequested = textual

The equivalent AfA PNP XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllUser xmlns="…/imsafa3p0pnp_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafa3p0pnp_v1p0 imsafa3p0pnp_v1p0.xsd">
   <accessModeRequired>
      <existingAccessMode>colour</existingAccessMode>
      <adaptationRequest>textual</adaptationRequest>
   </accessModeRequired>
</accessForAllUser>

 

The term “colour” is a refinement of "visual." However, this learner does not need all visual materials to be replaced by text, only those that rely on colour vision. When searching a digital library for materials on Photosynthesis, the student might find the animation shown in Figure 8.4.

The use of colour (the red, blue, and green dots to indicate the different molecules involved in photosynthesis) makes it impossible for this student to understand the animation independently.  The DRD for the resource would indicate the use of colour with this metadata:

 

 

accessMode = colour

 

The equivalent AfA DRD XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllResource xmlns="…/imsafav3p0drd_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafav3p0drd_v1p0 imsafav3p0drd_v1p0.xsd">
   <accessMode>colour</accessMode>
</accessForAllResource>

 

A girl breathes out red molecules. A plant is spotted with blue molecules as it sits in a sunny window. Green molecules flow away from the plant. Legend: Red = CO2; Blue = H2O; Green = O2.

Figure 8.4 Animation of a child, a plant, sunshine, and the molecules that are exchanged during photosynthesis. © WGBH Educational Foundation, used with permission.

The student’s profile indicates that this causes an accessibility problem. The system might first search for an adaptation that addresses the colour issue directly. However, if an alternative to the colour information is not available, the system can use the refinement rules of AfAv3.0 to widen the search. Because colour is a refinement of visual, an alternative to visual would also be appropriate for this student, even if it contains more information than necessary.

If a text description of the animation is available, as an alternative to colour or to the whole visual access mode, it can be provided in addition to the visuals to allow the student to interpret the animation. If not, the problem can be noted so that the student can ask a teacher or a peer for help.
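The widening behaviour can be sketched with a small parent map for the refinement hierarchy. Only the refinements mentioned in this guide are encoded; a full implementation would carry the complete AfA vocabulary:

```python
# Partial refinement hierarchy: each access mode maps to the mode it refines.
# Only relationships mentioned in this guide are included (an assumption,
# not the full vocabulary).
REFINES = {"colour": "visual", "textOnImage": "visual"}

def widen(access_mode):
    """Yield the access mode itself, then successively more general modes."""
    while access_mode is not None:
        yield access_mode
        access_mode = REFINES.get(access_mode)

def search_adaptations(adaptations, needed_mode):
    """Look for adaptations of the specific mode first, then widen the
    search to the modes it refines."""
    for mode in widen(needed_mode):
        hits = [a for a in adaptations if a.get("accessModeAdapted") == mode]
        if hits:
            return hits
    return []
```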

8.3            Hearing Impairment Adaptation in an LMS

The following example[7] is from The Inclusive Learning Exchange system (TILE) developed by the Adaptive Technology Resource Centre, University of Toronto.

A learner with a hearing impairment and difficulty understanding English is studying a course on Globalization and International Migration containing a video lecture by Professor Stephen Castles. Like most videos, it contains both visual and audio information (shown in Figure 8.5).

The learner has a PNP profile that describes his/her preferences for alternatives to audio content and English content. In this profile, the learner requests verbatim captions as his/her preferred adaptation methods for audio content (PNP settings shown in Figure 8.6).

 

A video window is embedded in a course page. It does not show captions for the professor who is speaking.

Figure 8.5 TILE screenshot of video with no captions

The TILE user interface offers users a choice of sign language and visual captions as alternatives to auditory materials. Users can choose which sign language, though only ASL is available here. Users can request verbatim or reduced reading level captions, choose among five languages for captions, and can specify a caption rate of one to 300 WPM.

Figure 8.6 TILE screenshot of Alternatives to Auditory preference editing

To accommodate this learner, the original resource (shown in Figure 8.5) needs to be replaced by an adaptation. When the learner requests to view the course on Globalization and International Migration containing the original video, the system recognizes that the learner requires alternatives, discovers that a resource with the appropriate characteristics exists and is able to substitute the video with a captioned alternative (shown below in Figure 8.7).

To properly replace the original resource with an appropriate alternative for the learner, metadata describing the learner’s preferences and both the original and adapted resource needs to be correctly defined.

Based on the learner’s preferences, the following metadata should be included in their PNP file:

 

The video shown in Figure 8.5 now displays captions. The professor is saying 'I don't want to go into a definition of globalization.'

Figure 8.7 TILE screenshot of video with captions

 

 

accessModeRequired.existingAccessMode = auditory
accessModeRequired.adaptationRequested = textual

adaptationTypeRequired.existingAccessMode = auditory
adaptationTypeRequired.adaptationRequested = captions

languageOfAdaptation = en

adaptationDetailRequired.existingAccessMode = auditory
adaptationDetailRequired.adaptationRequested = verbatim


The equivalent AfA PNP XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllUser xmlns="…/imsafa3p0pnp_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafa3p0pnp_v1p0 imsafa3p0pnp_v1p0.xsd">
   <accessModeRequired>
      <existingAccessMode>auditory</existingAccessMode>
      <adaptationRequest>textual</adaptationRequest>
   </accessModeRequired>
   <adaptationTypeRequired>
      <existingAccessMode>auditory</existingAccessMode>
      <adaptationRequest>captions</adaptationRequest>
   </adaptationTypeRequired>
   <languageOfAdaptation>en</languageOfAdaptation>
   <adaptationDetailRequired>
      <existingAccessMode>auditory</existingAccessMode>
      <adaptationRequest>verbatim</adaptationRequest>
   </adaptationDetailRequired>
</accessForAllUser>

 

When the learner accesses the Globalization and International Migration course, the system will check the learner’s profile and discover that they require adaptations for auditory content. It will then check the resource the learner is requesting (the original video) to see whether it contains auditory content. As this resource does contain audio information, the system will look for alternatives that match the learner’s preferences. The original video in the Globalization and International Migration course has the following metadata defined for it:

 

 

accessMode = visual

accessMode = auditory

hasAdaptation = someAdaptationSet#45678

 

The equivalent AfA DRD XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllResource xmlns="…/imsafav3p0drd_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafav3p0drd_v1p0 imsafav3p0drd_v1p0.xsd">
   <accessMode>visual</accessMode>
   <accessMode>auditory</accessMode>
   <hasAdaptation>someAdaptationSet#45678</hasAdaptation>
</accessForAllResource>

 

As the metadata of the original resource references an adaptation, the system will review the metadata of the adaptation to determine if it meets the needs of the learner. The following metadata is provided for the adapted resource:

 

 

isAdaptationOf = someObjectSet#12345

accessMode = textual

accessModeAdapted = auditory

adaptationType = captions

languageOfAdaptation = en

adaptationDetail = verbatim

 

The equivalent AfA DRD XML instance is:

<?xml version="1.0" encoding="UTF-8"?>
<accessForAllResource xmlns="…/imsafav3p0drd_v1p0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="…/imsafav3p0drd_v1p0 imsafav3p0drd_v1p0.xsd">
   <accessMode>textual</accessMode>
   <accessModeAdapted>auditory</accessModeAdapted>
   <adaptationType>captions</adaptationType>
   <isAdaptationOf>someObjectSet#12345</isAdaptationOf>
   <languageOfAdaptation>en</languageOfAdaptation>
   <adaptationDetail>verbatim</adaptationDetail>
</accessForAllResource>

 

Based on the metadata of the adaptation, the system is able to determine that it is an appropriate alternative to the original video and displays it to the learner in the Globalization and International Migration course.


8.4            User Interface Example

Some systems implementing AfAv3.0 might have a direct interface for editing users’ PNPs while others might depend on external tools or services to obtain preferences. Figure 8.6, in the previous example, shows one example of a user interface. Figure 8.8, below, shows an example of preference editing in the Teachers’ Domain digital library of educational resources developed by the WGBH Educational Foundation.

 

The Teachers' Domain user interface, which offers users these access preferences: captions for video and audio; transcripts for video and audio; audio description for video; and text description for images. It also allows users to request a warning if a resource contains a flashing hazard, sound hazard, or motion simulation hazard. A warning can be requested if a resource requires color vision, keyboard control, or mouse control.

Figure 8.8 User interface example.

9                  Future Topics

This section is included in this working draft to give a sense of the range of ways that AfAv3.0 can be used. In future releases of this document, this section will provide guidance based on actual implementations in the field.

Mechanisms involved in associating Digital Resource Description metadata with learning resources or other resources vary with the technology and organizational context. They might include mechanisms for:

  • Embedding metadata with a resource;
  • Building metadata repositories which describe resources separately from those resources;
  • Creating wizards and user interfaces for the entry of metadata;
  • Incorporating metadata within media production or adaptation production processes;
  • Providing alternatives through use of automated or human-mediated transformations or services.

We expect some of these topics to be addressed in implementations between this Public Draft and the Final Release.

Appendix A – Terms

 

Application Profile

A set of data elements chosen to address a particular need or a set of use cases.

Data Element Specification

A set of attributes and attribute value rules characterizing a set of data elements.

Digital Resource Description (DRD)

A collection of information that states how a digital learning resource can be perceived, understood or interacted with by users, or points to an alternative resource that provides other modes for perception, understanding or interacting, for the purpose of meeting the needs of users. (AfAv2.0).

Elements

Also known as data elements, attributes, or properties; the basic units of data in this specification, used to define values for preferences or resource descriptions.

Personal Needs and Preferences statement (PNP)

A collection of AfA needs and preferences for control flexibility, display transformability and content with respect to the accessibility of a resource (AfAv2.0).

Properties

Attributes an object possesses. In this specification, attributes, properties, and elements are used interchangeably because all of our elements are used as properties. However, the reverse is not the case. In this specification properties are analogous to RDF properties.

Resource Description Framework (RDF)

A standard model for data interchange on the Web. RDF has features that facilitate data merging even if the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all the data consumers to be changed. (http://www.w3.org/RDF/).

 

Appendix B – Advanced Material

B1     Introduction

This version of the AccessForAll specification is a draft intended for implementation, experimentation and community feedback.  It is not complete. It lays the groundwork for a new approach, and the editors seek community feedback on this approach, including the need for development of particular ontologies described below. We intend that in the final version of the specification, some of these ideas will be developed in full based on your feedback.

To further these aims, this Appendix sets forth material that is more advanced in nature and less complete than in the main documents. If you want to implement only the Core Application Profile, you may not need this Appendix; you may only wish to review it for guidance on where things are going.  We include here several kinds of material:

  • More advanced matching examples;
  • Partial specification of additional relationships between terms defined in vocabularies in the specification;
  • Technical discussion and requests for community comment;
  • Some different ways to use the Metadata terms.

B2     Advanced Matching Relationships

B2.1       Example with Adaptation Type

In Section 6 of this Best Practice and Implementation Guide, we presented some simple examples of the use of the table of ‘adaptationType’ definitions in matching. Here, we present a more complex example and suggest an approach to determining which of several possible adaptations to deliver when a user's preference is unclear. Consider the following example (rows taken from Table 6.1):

 

AdaptationType   AccessMode   AccessModeAdapted   Notes
transcript       textual      auditory            When audio is presented without visual.
captions         textual      auditory            When audio and visual are synchronized together.

The first three cells of the transcript row above should be interpreted to mean:

A transcript provides a textual accessMode that could be used instead of an auditory accessMode. It is not specified whether delivery should involve replacement or augmentation.

The first three cells of the captions row specify that

Captions provide a textual ‘accessMode’ that could be used instead of an auditory ‘accessMode’. It is not specified whether delivery should involve replacement or augmentation.

The fourth column gives extra information not yet formalized in AccessForAll 3.0.

Use of the Table for Matching

Here, we present one scenario to show how a system might use this information to match a user’s preferences. Consider the following:

Resource: resourceID1 – a resource having an auditory access mode.  The DRD for the resource would include[8]:

 

 

accessMode = auditory

User Preference: a preference for textual content to be delivered instead of auditory content.  This would be expressed in the user’s PNP as:

accessModeRequired.existingAccessMode = auditory
accessModeRequired.adaptationRequested = textual

and adaptations that include the following metadata:

Adaptation: resourceID2

accessMode = textual
accessModeAdapted = auditory
adaptationType = transcript
isAdaptationOf = resourceID1

Adaptation: resourceID3

accessMode = textual
accessModeAdapted = auditory
adaptationType = captions
isAdaptationOf = resourceID1

In this example, the ‘accessMode’ of textual is explicit in the metadata both for the transcript and for the captions adaptation.  If the ‘accessMode’ were not stated explicitly, we could infer it from the ‘adaptationType’ (note that a particular ‘adaptationType’ may have several ‘accessModes’, but in this case there is only one).  We expect matching and delivery to proceed at run time as follows:

  1. The system discovers (from the resource metadata) that the resource to be delivered has auditory content;
  2. The system considers the user’s preferences and discovers that for auditory, the user prefers textual;
  3. The system assesses whether the resource requested may also support textual content either through an incorporated adaptation or by transformation to generate the required modality. If so, the original resource is sufficient;
  4. The metadata for the original resource might contain properties identifying adaptations, such as:

 

      hasAdaptation = resourceIdAdapt1

      hasAdaptation = resourceIdAdapt2

 

 

  5. If this is not the case (i.e. these properties are not present), we must search for resources that identify themselves as adaptations of resourceID1 using the ‘isAdaptationOf’ property;
  6. By searching the metadata of other resources, we identify that resourceID2 and resourceID3 are adaptations of resourceID1 and that they both match the preference for textual;
  7. We consider the two adaptations to determine which is best.  Here, the choice is between a textual mode that is delivered synchronously with visual (such as a video with captions) and one that can be delivered independently. The user’s preferences as stated in this example are inadequate to determine a selection. We expect that the matching system will consider other factors such as media type, delivery environment, etc. to make the choice.
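The matching steps above can be sketched end to end. The record structures and the function name are assumptions for illustration, not part of the specification:

```python
def select_adaptation(resource, pnp, all_records):
    """Follow the run-time matching steps: check the resource's access modes
    against the user's accessModeRequired preferences, then locate candidate
    adaptations via hasAdaptation or a reverse isAdaptationOf search."""
    candidates = []
    for pref in pnp.get("accessModeRequired", []):
        if pref["existingAccessMode"] not in resource.get("accessMode", []):
            continue  # this preference does not apply to this resource
        wanted = pref["adaptationRequest"]
        # Follow explicit hasAdaptation references when present...
        ids = resource.get("hasAdaptation")
        if ids:
            pool = [all_records[i] for i in ids if i in all_records]
        else:
            # ...otherwise reverse-search on isAdaptationOf.
            pool = [r for r in all_records.values()
                    if resource["id"] in r.get("isAdaptationOf", [])]
        candidates += [r for r in pool if r.get("accessMode") == wanted]
    return candidates  # a further local policy decides among several matches
```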

While these other factors (media type, delivery environment, etc.) are only considered informally in this draft, we may model them more formally in the final specification. Examples include formally modeling the ‘accessModes’ of ‘adaptationType’ values and device properties that enable decisions about whether a particular resource can run in a particular environment.

Multiple Adaptations

In general, multiple adaptations may be available in more than one way.  A user may, for example, have more than one preference for alternative ‘adaptationTypes’ for a single ‘accessMode’ and adaptations may be available matching more than one such preference.  For example, a user may indicate that they can use both captions and transcripts as alternatives to auditory resources. In such a case, the choice is determined by the delivery system and is not defined here.  In future drafts, the editors will consider a precedence hierarchy that could be specified by the user or pre-defined by matching systems. Community feedback is sought on whether this would be useful.

In other cases, multiple different forms of adaptations may exist that meet a user’s preferences. For example, a user may request textual for visual, and there may be both alternativeText and e-Book available. Decisions such as this – about which can be delivered and which are best suited to the context – can be made by reference to system and context properties that, at present, are modeled only informally in AccessForAll 3.0.  This specification is moving towards formally modeling those properties.  In particular, it may be desirable to construct related ontologies, including an ontology of relationships between ‘adaptationTypes’, their ‘accessModes’ and the Access Modes they adapt to provide a model that an implementation can consult in taking decisions automatically. We seek community feedback on the wisdom of such an approach. The current table of relationships should be regarded as an informal model of the same information.
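One way a delivery system might break such ties is a locally configured precedence list. Nothing in AfAv3.0 mandates this; the ordering below is an arbitrary local policy used only for illustration:

```python
# Arbitrary local policy: earlier entries win. Not defined by AfAv3.0.
PRECEDENCE = ["captions", "transcript", "alternativeText", "longDescription"]

def pick_by_precedence(candidates, precedence=PRECEDENCE):
    """Choose the candidate whose adaptationType appears earliest in the
    precedence list; unknown types sort last."""
    def rank(drd):
        t = drd.get("adaptationType")
        return precedence.index(t) if t in precedence else len(precedence)
    return min(candidates, key=rank) if candidates else None
```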

B2.2       Relationships involving ‘adaptationDetail’

The ‘adaptationDetail’ property has a special relationship to ‘adaptationType’: it further clarifies the meaning of what is required or available. Some of the values of ‘adaptationDetail’ are only relevant for particular values of ‘adaptationType.’

Many terms have “refinements” that further clarify the meaning of what is required or available (some systems describe these as parameters). These are not fully modeled in this version of AfAv3.0, though some of them are coded in the names of terms. At present, an implementation may be required to deal with some of these informally in order to more closely match resources to preferences. We expect to incorporate these refinements into the model before the final AfAv3.0 specification.

Table B2.1 lists some of the AdaptationDetail values and describes which AdaptationType each refines. Where a row begins with an ‘*’, the nature of the relationship is under discussion.

 

Table B2.1 AdaptationDetail Refinement of AdaptationType.

AdaptationDetail   AdaptationType               AccessMode          Notes
enhanced           audioDescription, captions                       May be useful in future work on additional adaptation types
verbatim           captions, transcript         textual             Most commonly for captions
reduced            captions                     textual
realTime           captions                     textual
*symbolic          e-book                       textual or visual   Could be symbolic material for non-readers or could be symbolic text (i.e. MathML, ChemML)
recorded           audioDescription, e-book     auditory
synthesized        audioDescription, e-book     auditory

 

We are considering different ways to formalize the description of these relationships. One possibility is a formal ontology. Another, for some of the terms, is as refinements to ‘accessMode’ terms.  However, this is far from being a trivial task since the complexity of relationships is difficult to describe using only refinement. We seek community feedback on what would be an appropriate formal modeling technique for this information.

B2.3       Relationships Involving ‘adaptationMediaType’

In matching resources to preferences, it is often necessary to informally take into account properties of the media involved, such as whether it can be disassembled and reassembled in an augmented form for a particular modality, what kind of modalities it can contain, etc. For some use cases, it is useful for that knowledge to be explicit and to know the circumstances under which particular media types can replace or augment others.

We have begun the process of making this information explicit by identifying relationships in which particular media types participate, and we expect to complete this between the public draft and the final specification. Currently, the information is modeled informally, but as with the examples above, a formal model for this information may be possible. The table below lists several ‘adaptationMediaType’ values and the ‘adaptationTypes’ they are typically related to. Again, a row beginning with ‘*’ denotes a relationship that is under discussion.

 

Table B2.2 – ‘adaptationMediaType’ values.

AdaptationMediaType   AdaptationType   AccessMode                AdaptationDetail          Notes
Daisy                 e-book           auditory and/or textual   recorded or synthesized
*braille              e-book           textual, tactile
NIMAS                 e-book           textual
MathML                e-book           textual                   symbolic
ChemML                e-book           textual                   symbolic
LaTeX                 e-book           textual                   symbolic
*OEBPS                e-book           any
PDF                   e-book           textual and/or visual
*LIT                  e-book           any
*Nemeth               e-book           textual                   symbolic                  also a sub-type of the AdaptationMediaType braille.

 

We seek community feedback on the usefulness of implementing a formal description of these relationships.

B3     Usage Scenarios

This section describes some potential scenarios and ways to use AfAv3.0 that are not covered in the main documents. These are included here for community information but are still under development.

B3.1       Using ‘accessModeRequired’

If ‘accessModeRequired’ is used to specify a preference, and we have metadata describing the access modes[9] of adaptations available, selection is straightforward. In deciding what adaptation should be delivered, we need look only at the AccessMode of a matched adaptation and ask whether it is deliverable in the environment in use at delivery time.

If we don’t know the access modes of adaptations but we do know the ‘adaptationType,’ we can use the relations specified in Table 6.1 to infer their ‘accessMode.’
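That inference can be sketched from the two Table 6.1 rows quoted in Section B2.1. Only those rows are encoded here; a full implementation would carry the whole table:

```python
# Partial encoding of Table 6.1:
# adaptationType -> (accessMode, accessModeAdapted).
# Only the rows quoted in this guide are included.
TABLE_6_1 = {
    "transcript": ("textual", "auditory"),
    "captions": ("textual", "auditory"),
}

def infer_access_mode(drd):
    """Use the stated accessMode if present; otherwise infer it from
    adaptationType via the Table 6.1 relationships."""
    if "accessMode" in drd:
        return drd["accessMode"]
    entry = TABLE_6_1.get(drd.get("adaptationType"))
    return entry[0] if entry else None
```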

B3.2       Using ‘adaptationTypeRequired’

If a user has requested a specific ‘adaptationType,’ and we have metadata describing adaptation types of the media available, then matching is straightforward. If we don’t know the ‘adaptationType,’ we might infer it from the ‘adaptationMediaType’ information specified in Table B2.2.
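The fallback can be sketched by encoding a few rows of Table B2.2 (an assumed structure; the real table carries more relationships):

```python
# Partial encoding of Table B2.2: adaptationMediaType -> adaptationType.
# Only a few e-book rows are included.
TABLE_B2_2 = {"Daisy": "e-book", "NIMAS": "e-book", "PDF": "e-book"}

def infer_adaptation_type(drd):
    """Use the stated adaptationType if present; otherwise fall back to the
    adaptationMediaType relationships of Table B2.2."""
    if "adaptationType" in drd:
        return drd["adaptationType"]
    return TABLE_B2_2.get(drd.get("adaptationMediaType"))
```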

B3.3       Inferring Accessibility Information

In some cases, accessibility metadata may be limited or unavailable (such as when searching the Web in general). The authors expect that in the future it might be possible to infer accessibility metadata such as ‘accessMode’ or ‘adaptationMediaType’ from available information such as media type. For example, a resource that is known to have a MIME type of "image/jpeg" can be inferred to have an ‘accessMode’ of visual. It cannot be programmatically determined whether or not textOnImage or colour might also be applicable.  Such an inference engine would require a formal specification of the known relationships between ‘accessMode,’ ‘adaptationType,’ media types, etc.
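The MIME-type example can be sketched as a simple prefix lookup. As the text notes, refinements such as textOnImage or colour cannot be inferred this way; the mapping below covers only the obvious cases and is an assumption, not part of the specification:

```python
# Hypothetical MIME-prefix -> accessMode mapping; not defined by AfAv3.0.
MIME_TO_ACCESS_MODE = {
    "image/": "visual",
    "audio/": "auditory",
    "video/": "visual",  # an audio track, if any, is not detectable from the type
    "text/": "textual",
}

def infer_from_mime(mime_type):
    """Infer a coarse accessMode from a MIME type, or None when no inference
    is safe. Refinements (colour, textOnImage) remain unknown."""
    for prefix, mode in MIME_TO_ACCESS_MODE.items():
        if mime_type.startswith(prefix):
            return mode
    return None
```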