Thursday, May 23, 2013

Born Digital, Born Accessible Learning Sprint - Day 1 (Toolbox)

This week a diverse group of educators, technologists, and accessibility specialists (30 of us!) gathered to envision and prototype end-to-end solutions for born accessible eBooks: from creating, to discovering, to learning from rich, interactive, accessible eBooks. We were there to learn from each other and to sprint together on prototypes while strengthening the collaborative possibilities between the groups.

For any readers unfamiliar with the term 'accessible', it means making material usable by as many people as possible, especially people with disabilities or special needs (see also the Wikipedia article on accessibility). Accessibility is critical in education because it gives every learner the ability to reach their potential, and the benefits of accessible content often extend to all learners, in the same way that curb cuts made roller bags possible. Here are some examples.
  • If someone is blind or has low vision, they are likely to use a screen reader. It is important that all controls are operable from the keyboard and that the structure of a document is easy to navigate. Videos that are important for learning need an audio description to replace important information from the visual field (which is different from captioning). Images need descriptions if they are important to the learning. All of this extra information benefits all learners: because the text descriptions are searchable, resources become easier to find.
  • If someone has a very limited range of motion, then controls must be usable via a switch interface or voice commands. All learners benefit because the same hooks can be used as shortcuts and automations.
  • If someone is deaf or hard of hearing, audio content needs transcripts and videos need captions. Simulations need to make sure that information conveyed through sound is also available in another way. In addition to the added searchability, anyone in a noisy environment benefits from these features.
  • If someone has a reading or learning disability, assistive technologies can read text aloud and highlight it as it is read, but not if the text is embedded in images. That might seem rare, but mathematics is often presented only as an image. Although this is still research territory, mathematics stored as text (rather than as an image) will one day also be explorable: each part or term can be queried, annotated, and manipulated, benefiting all learners.
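To make these practices concrete, here is a minimal HTML sketch (the file names and content are invented for illustration) showing a described image, a decorative image, a video with caption and audio-description tracks, and math marked up as text rather than embedded in an image:

```html
<!-- Hypothetical example: file names and text are placeholders. -->

<!-- An image that matters for learning gets a text description;
     a purely decorative image gets an empty alt attribute. -->
<img src="cell-diagram.png"
     alt="Diagram of a plant cell with the nucleus, chloroplasts, and cell wall labeled.">
<img src="divider.png" alt="">

<!-- A video with captions (for deaf or hard-of-hearing viewers) and a
     separate audio-description track (for blind or low-vision viewers). -->
<video controls>
  <source src="lesson.mp4" type="video/mp4">
  <track kind="captions" src="lesson-captions.vtt" srclang="en" label="English captions">
  <track kind="descriptions" src="lesson-descriptions.vtt" srclang="en" label="Audio descriptions">
</video>

<!-- Math as text (MathML) instead of an image, so screen readers can
     speak it and search engines can find it: "x squared". -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <msup><mi>x</mi><mn>2</mn></msup>
</math>
```

Note how the same markup that serves assistive technology (the alt text, the caption file, the MathML) is exactly what makes the content searchable for everyone.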
OERPUB (my Shuttleworth Foundation-funded organization), Benetech, and the Bill & Melinda Gates Foundation each helped to bring the sprint to fruition.


(Photo: participants watching demos.)

The sprint was two and a half days long, and it is going to take a few posts to get all the information out, but I definitely want to share all of it because the sprint was incredibly informative and productive. The first morning, each of the groups showed off relevant tools, technologies, and processes. We wanted to know who was looking for help making their sites and teaching resources accessible, and who was bringing tools to make content more accessible. In a sense, this part of the meeting was about showing what we already have in the accessibility toolbox.

Demos:
  • Accessible Authoring: OERPUB editor design: To break the ice, I showed features of the OERPUB editor designed to help authors create accessible content. We mark images that need descriptions and say 'thanks' when descriptions are added. We create tables with a header row by default, and math is written in a format (MathML) that screen readers can read. I asked for help doing even more, especially with training authors while they are creating and with finding and including accessible movies and sims. You can see what we have released so far at remix.oerpub.org, which also includes importers for Word, OpenOffice, LaTeX, Google Docs, and web pages.
  • Accessible Videos: YouDescribe: Owen Edwards from Smith-Kettlewell showed YouDescribe, an experimental platform for crowdsourcing extended video descriptions, analogous to the Amara platform for crowdsourcing closed captions. Viewers pause videos and record a narration of what they are seeing. Parents and relatives often do this already when someone in their family needs it, describing exactly what they know their relative needs to hear about the video. YouDescribe would be a way to make that work benefit many more people.
  • Accessible Simulations: Ariel Paul of PhET, which builds simulations for math, chemistry, and physics that make the invisible (like electrons) visible, is creating HTML5 versions of their simulations and taking the opportunity to make them accessible to more learners. Ariel demoed an alpha version of an HTML5 tug-of-war simulation that shows the basics of forces and motion. The new simulation can be operated via keyboard, switch devices, or voice activation. The team is treating the rewrite to HTML5 as a chance to really think through accessibility, and Ariel came to learn as much as possible from the accessibility experts at the sprint.
  • Learner Controls and Accessible Video: Yura Zenevich and Joanna Vass of the Inclusive Design Research Centre demonstrated Learner Options (example - show display preferences), Speak.js, and an accessible video player. Learner Options is a JavaScript library that gives learners a set of controls to adjust text size, button and link size, spacing, font, contrast, text-to-speech, navigation, and layout. The video player's controls are all keyboard operable, and it pulls in any corresponding captions it finds on Amara (the caption crowdsourcing platform).
  • Accessible Annotations: Jake Hartnell demonstrated Hypothes.is, a distributed, open-source platform for annotating the web. He asked for help in making annotations accessible -- both the discovery of annotations and the creation of them. Beyond making annotation itself accessible, annotations are also a potentially powerful tool for accessible learning. Bookshare (an accessible online library) regularly receives requests for some way to take notes within books; learners using accessible books need accessible ways to track their learning. Additionally, annotations might provide a way to request and receive help making resources useful to more learners. For instance, an annotation on an image that has no description could supply one.
  • EBook Authoring: Phil Schatz of Connexions demonstrated github-book (code): an authoring system for books that uses the OERPUB editor for each chapter and automatically produces an EPUB ebook. Versions of the book are all stored on GitHub, and people can easily make their own copy of a book and adapt it.
  • Accessible Math and Chemistry: Volker Sorge of the University of Birmingham and Google demonstrated ChromeVox, which can read mathematics on the web; a system that analyzes an image of a molecule and outputs three structured text alternates for the molecule; and Maxtract, which uses OCR (optical character recognition) to convert PDFs to LaTeX or HTML.
  • Accessible eBook Reading, Image Captioning, and Text-to-Speech with Highlighting: Gerardo Capiel from Benetech demonstrated an accessible version of the Readium EPUB3 reader, POET (an image description tool), BeneSpeak, and accessibility metadata (a11ymetadata) for Schema.org. The Readium version uses special tags so you can navigate it in Safari, IE 9 and 10, Firefox, and Chrome. With POET, an entire book in DAISY format is uploaded and then all of its images can be described or marked as decorative; soon POET will support books in EPUB3 format. BeneSpeak does word-level highlighting in conjunction with Chrome's text-to-speech (TTS) API. The highlighting helps readers with learning disabilities like dyslexia, or readers learning a second language, follow and comprehend the text. The accessibility metadata Capiel showed has been proposed to Schema.org, based on the a11ymetadata project; it will make it easier to find accessible education resources. For example, you can indicate that a resource can be used via keyboard only or via mouse only, or that a resource has described images, transcripts for video, and so on.
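As a rough sketch of what that kind of metadata could look like in a web page (the property names follow the a11ymetadata proposal as I understand it, and the book title and values are invented for illustration), a publisher might mark up a book listing with microdata like this:

```html
<!-- Hypothetical sketch based on the a11ymetadata proposal to Schema.org;
     exact property names and accepted values may change as the proposal evolves. -->
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">Introductory Physics</span>

  <!-- What accessibility features the resource provides. -->
  <meta itemprop="accessibilityFeature" content="alternativeText">
  <meta itemprop="accessibilityFeature" content="captions">
  <meta itemprop="accessibilityFeature" content="MathML">

  <!-- How the resource can be operated, e.g. keyboard only. -->
  <meta itemprop="accessibilityControl" content="fullKeyboardControl">

  <!-- Potential hazards (or their absence), e.g. no flashing content. -->
  <meta itemprop="accessibilityHazard" content="noFlashingHazard">
</div>
```

With metadata like this in place, a search engine or repository could let a learner filter for, say, books with described images and full keyboard control.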


Thursday, May 16, 2013

Testing of the editor designs at the Connexions Conference

The OERPUB UI team, Adrian Garcia and Max Starkenburg, was hard at work during the sprints after the Connexions Conference, testing the usability of new features that have been designed for the editor. The testing, as usual, was highly informative.

They were testing this mockup:

(Mockup: a left toolbox of pedagogy templates, a fairly standard editing toolbar, and a learning document on the properties of exponents.)

New Features

It has several new features or combinations of features since the last set of testing:
  • Pedagogy templates in the toolbar. This was implemented for instances in which an organization embeds the editor but doesn't have screen real estate for the toolbox. Some organizations may even want both.
  • An inline menu. This enables users to mark key terms, code, or foreign-language text, and to remove formatting. The inline menu appears when authors hover over highlighted text, over styled text (which they could convert to a more semantically rich choice), over key terms, and so on.
  • Pedagogy options menu. This menu provides a list of all possible pedagogy templates and allows users to select which ones are visible.
  • An icon for inserting videos into documents. This will enable users to search for videos hosted on various sites and embed them in their document.
  • Quotation template. This enables authors to create semantically marked quotations.

Questions and Answers

  1. Can users discover important features of the editor on their own? The pedagogy toolbox is much more discoverable than the pedagogy menu. In the menu, "Add a new" sounded like a toolbar configuration rather than something to include in the author's content. We were trying to avoid "insert" which other research has shown to be confusing to authors, but we will need to work on the wording of this menu and make it more inviting to try.
  2. How discoverable is the inline context menu? The inline context menu was the least discoverable of the features. Perhaps changing the color of the menu's icon so that it is brighter will increase discoverability. It may also help if the menu appears more quickly. This mockup shows a possible fix.
  3. Will it be a surprise to users that the inline menu does not provide any styling options? Might the inline menu distract from styling? This doesn't appear to be a problem for authors. They seemed fine accessing styling exclusively from the top toolbar. Keyboard shortcuts for bold and italics should be supported (and they are in the implemented editor!).
  4. Are participants able to use and understand the pedagogy? Can they successfully customize it? No change from prior tests. Authors are able to use the tools easily. Sometimes they bypass the templates and just type things in. In this test, authors struggled with the widget for altering the labels (changing "Exercise" to "Question", for instance).
  5. Is it confusing to have two pedagogy menus in the editor? Basically, yes. The confusion abates after experimenting with both. Most authors concluded they were the same, but we don't know how high their confidence was. We aren't sure whether we will try to fix this problem.
  6. Are participants able to properly insert videos? Yes, the workflow worked well. The icon was hard for authors to find, but they did find it. It would be good to create a better icon, but we can probably live with this one for a while.
  7. Are participants able to properly insert quotations? Will they have issues with their appearance? The workflow was fine. Participants wanted quotations to be styled differently, and they will be more likely to use the semantic template if it is visually pleasing. This is an easy fix, but it will only affect how the content looks in the editor and when saved from the editor; repositories that host the content may style quotes differently.

The Full Report

You can download the full report in PDF format. We are working on a more remixable format for the report.