It Takes a Village to Get Accessibility Right
Bill Kasdorf
Principal, Kasdorf & Associates, LLC
Cofounder, Publishing Technology Partners
It wasn’t long ago, even by internet standards, that most publishers thought of accessibility, if they thought of it at all, as something that they should probably look into some day. That was considered progress: at least those publishers realized that they should be doing something about accessibility, when they could get around to it. Many or most of their peers confused “accessibility” with “access,” as some still do, and thought they had it covered because their content was online.
The situation has taken a very significant turn for the better in the last few years. Today, especially in publishing sectors like higher education, trade, and scholarly publishing, many publishers have worked hard to make sure their publications are properly accessible. This has been given a push by both emerging regulations and peer pressure.
On the regulatory front, US publishers have long been aware of Section 508, which requires government publications (among other things) to be accessible. This section was updated a few years ago so that now its requirements are based on WCAG (Web Content Accessibility Guidelines), from the W3C (World Wide Web Consortium), which governs most web standards. At the same time, regulations in other countries began to align as well; as a result, now most of them are based on WCAG, which makes it much easier for publishers to ensure that publications that meet domestic regulations will also meet international ones.
And we’re no longer just talking about government publications being regulated. One of the most significant regulatory developments in recent years has been the European Accessibility Act, which will require publications of basically any sort to be accessible by 2025 in order to be sold in the EU. Since publishing has become global, that is making a big impact.
But peer pressure has played a role too. Benetech, one of the most important nonprofits working in the accessibility and technology space, developed a program called Global Certified Accessible (GCA), which certifies publishers (mostly book publishers so far) as reliably publishing accessible books. When one of the big five US higher education publishers, Macmillan Learning, was the first to attain GCA status, the four others quickly got on board. To their credit, they had all been working on accessibility as well, but getting that GCA seal of approval could be decisive in a procurement decision. (As of this writing, Pearson has become certified, and the other three are in the certification process.) When Simon & Schuster got GCA certification, it put pressure on the other big trade publishers to do the same. A host of smaller publishers have been certified or are working on certification. And Benetech is now endorsing vendors who’ve proven to be able to get accessibility right. This is a very significant watershed.
So whose job is accessibility, anyway?
The work many publishers are doing now to make their publications accessible is commendable. But it can be a significant amount of work, and it can be expensive. This doesn’t need to be the case.
A key issue is that many of the participants in the publishing workflow and supply chain think that accessibility is somebody else’s job. The work is unevenly distributed, which means that certain staff—often production or digital staff—have to work harder than they should to fix content that could have been in better shape before reaching them. And it is still extremely common for publishers to just keep doing the work the way they always have, producing either an inaccessible publication or an insufficiently accessible publication and then paying a vendor or contractor to fix it.
We also tend to think of editorial and production workflows as linear. The work gets handed off from one party to another, and those parties historically are often siloed, so things that could have been done better upstream have to be fixed or worked around downstream. Once a manuscript is accepted for publication, it’s handed off to production, and there, it gets copyedited and then handed off to production editors, who hand it off to typesetters (sometimes in-house, often outsourced to vendors), who then hand back proofs that, by their very existence, discourage refinement, because changes at that stage incur delay and expense. When the pages—yes, we’re talking about print pages—are finalized, they’re sent off to a printer. Sometimes, the typesetter (most often a vendor) has done some work on the digital files, but usually more work needs to be done to them to make them accessible, often with little visibility or feedback upstream.
It doesn’t need to be this way. Instead of attempting to assign who should do each workflow step in making content accessible, it’s better to look at how any of the parties in the content creation, development, refinement, and dissemination workflow can help get things right. The best example of that is one of the most common gaps in most editorial and production workflows: image descriptions.
Images need descriptions. Good descriptions.
If there are images to be published, they need to have good descriptions so that people who are unable to see the published images don’t just encounter blank holes in the content; they get as close a textual approximation as possible to what a sighted user gets visually. Sometimes, “alt text,” a brief text-only description, typically 150–250 characters long, is sufficient, but if an image conveys richer or more complex information, an “extended description” (previously referred to as a “long description”) needs to be provided.
This is one of the most common shortcomings of digital files. First of all, alt text is often inadequate, redundant, or just plain wrong. Because the alt attribute, alt="[alt text goes here]", is required on every img tag in HTML, the alt text is sometimes just the filename of the image (which is useless to a screen reader user), or repeats the caption (very annoying—the screen reader user has already heard the caption), or is left empty in what is called a “null alt,” alt="" (which is just plain cheating, unless the image is purely decorative and conveys no information whatever). And extended descriptions are rarely present unless the publisher has really put a priority on getting accessibility right.
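In markup terms, the patterns described above look something like this (an illustrative sketch, with made-up filenames and figure content, not drawn from any particular publication):

```html
<!-- Bad: the alt text is just the filename, useless to a screen reader user -->
<img src="fig-3-2.png" alt="fig-3-2.png">

<!-- Bad: the alt text repeats the caption the user will already hear -->
<img src="fig-3-2.png" alt="Figure 3.2: Quarterly revenue, 2018–2021">

<!-- Good: brief alt text conveys what the image actually shows -->
<img src="fig-3-2.png"
     alt="Bar chart of quarterly revenue, 2018–2021, rising steadily each year">

<!-- Acceptable only for purely decorative images: the null alt,
     which tells a screen reader to skip the image entirely -->
<img src="ornament.png" alt="">
```

For images that need an extended description, one common pattern is to place the description in nearby prose and associate it with the image via the aria-details attribute, so assistive technology can offer it to the reader without forcing it on everyone.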
There are service providers who are experts in writing good image descriptions, including some of the same vendors who provide the typesetting and digital files, but that adds time and expense. While that may be time and money well spent, the job of those services is made harder if there is no involvement in the description process upstream.
The image description process should involve all parties in the content development workflow—ideally, even the author. Who knows better what the images in the manuscript convey than the author? While there is an understandable reluctance to ask authors to do more work, many publishers are finding that authors cooperate if they are given good guidance and understand why this is important: they want the descriptions to be right, after all. Admittedly, in trade and educational publishing, the images often don’t originate with the authors, but even there, the authors should be in the loop to ensure that the descriptions align with their intentions.
It should not be a surprise to anybody reading this that the descriptions, whether originating with the authors or not, need to be edited. This means that developmental editors need to be educated in what is needed so they can coach the authors in the first place; editorial assistants often relish the task of developing or refining descriptions because it’s creative and rewarding work; when copyeditors understand what is needed, they can edit the descriptions along with the text and the captions; and production needs to integrate this process into the workflow so that it becomes a natural part of how the content is prepared for publication.
Even if the actual description writing is outsourced, this upstream work is very helpful. For example, when those writers are given images like charts and graphs, it is very likely that those images were created from data in a spreadsheet like Excel in the first place. If the description writers only get the image, they often have to guess at what the values of the bars and lines in the chart or graph refer to. If they are provided with those Excel files, they don’t have to guess, and the numbers in their descriptions will be precise.
Getting accessibility right is everybody’s job
Almost every aspect of getting accessibility right benefits from a look upstream: Who needs to understand what’s needed so things are in better shape in the first place? Here are some more examples.
Users of assistive technology need headings to be structured properly. They typically jump from heading to heading to find their way through the content; the heading levels communicate how the content is structured: which sections are siblings, which sections contain other sections, and which sections are contained by other sections. Sighted people take these cues for granted based on the appearance of the headings. But users of assistive technology require the markup of the headings to be correct—nested properly, with no skipped sections—or they can become confused about where they are in the content. Author guidelines should emphasize this need; it’s a fundamental editorial task to get this right; and production should ideally not have to jump through hoops to fix the problems.
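As a sketch of what “nested properly, with no skipped sections” means in the markup (section titles here are invented for illustration):

```html
<!-- Good: levels descend one step at a time, and siblings share a level -->
<h1>Chapter 3: Results</h1>
<h2>3.1 Methods</h2>
<h3>3.1.1 Sampling</h3>
<h2>3.2 Findings</h2>  <!-- sibling of 3.1, so it returns to h2 -->

<!-- Broken: jumps from h1 straight to h3, so a user navigating by
     headings loses track of how the sections relate to one another -->
<h1>Chapter 3: Results</h1>
<h3>3.1 Methods</h3>
```

Note that the heading level should reflect the content’s structure, not its appearance; if a level-two heading looks too big, that’s a job for the stylesheet, not for demoting it to h3.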
Tables are another big issue. All too often, they’re just provided as images, and while good extended descriptions of charts and graphs can usually convey the necessary information, users of assistive technology are shortchanged if that’s all they get for tables. Instead, proper HTML tables need to be provided. If the first row consists of column headings or the first column consists of row headings, they need to be tagged as such. Plus, merged cells really throw the user of assistive technology out of whack: they perceive one cell when it may take up the space of three, which throws off the structure of the table. Addressing this problem starts with the author, because authors have a tendency to put information in tables that doesn’t actually need to be tabular: lists, for example, can often convey the same information in a much more accessible way. It’s the editor’s job to intercept those tables-that-don’t-need-to-be-tables. And when tables are needed, it is helpful to production for the editor to point out which tables have column headers or row headers; this isn’t always obvious to a typesetter.
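A minimal sketch of what “tagged as such” means for a table with both column and row headings (the data here is invented). The scope attribute tells a screen reader which headers apply to each data cell, so the user hears, say, “North, 2021, 530” rather than a bare number:

```html
<table>
  <tr>
    <th scope="col">Region</th>
    <th scope="col">2020</th>
    <th scope="col">2021</th>
  </tr>
  <tr>
    <th scope="row">North</th>
    <td>412</td>
    <td>530</td>
  </tr>
  <tr>
    <th scope="row">South</th>
    <td>388</td>
    <td>475</td>
  </tr>
</table>
```

Avoiding merged cells (rowspan and colspan) keeps this one-cell-per-value structure intact, which is exactly what assistive technology relies on.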
And I don’t want to leave out the designers! Obviously, they have a hand in designing how the headings and the tables appear in print; often, the same designers design the e-book (or the publisher wants the e-book to “look like the book”). If designers are aware of these accessibility concerns, they can get things on the right track early in the workflow. Another important design issue is color, and color contrast. Accessibility isn’t just about blind people; there are lots of colorblind people, so information should not be conveyed solely through color. And people with low vision have a hard time with subtle color variations, so care needs to be taken that the color of type and the color of its background have sufficient contrast, especially for small type. Once designers understand these issues, it doesn’t make their job any harder, but it can make the job of the folks downstream a lot easier.
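For reference, WCAG 2.1 (Success Criterion 1.4.3) asks for a contrast ratio of at least 4.5:1 between text and its background for normal-size type, and 3:1 for large type. A stylesheet sketch, with illustrative color values:

```css
/* Dark gray on white: roughly 12.6:1, comfortably above the 4.5:1 minimum */
body {
  color: #333333;
  background-color: #ffffff;
}

/* A lighter gray like #767676 sits right at about 4.5:1 on white —
   the borderline for small type, so anything lighter should be avoided */
.caption {
  color: #767676;
}
```

Free contrast-checking tools can report these ratios exactly, which makes this an easy check to build into a design review.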
Accessibility is a team effort
When everybody in a publisher’s editorial and production workflow understands what to do to ensure that they’re contributing to good accessibility, it can reduce work or rework downstream. That makes it much more likely that folks who need accessible content can get it at the same time, at the same price, and from the same products everybody else gets. The whole workflow works much more smoothly when it produces “born accessible” content. Folks are often surprised that it’s not as complicated as they may have thought. And publishers are often surprised that it’s so well accepted by their staff—because folks realize that it’s the right thing to do.
Soon, we hope, this will just be standard practice. Let’s make that happen!
Bill Kasdorf is Principal of Kasdorf & Associates, LLC, a consultancy focusing on editorial and production workflows, XML/HTML/EPUB content modeling, standards and best practices, and accessibility. He is a founding partner of Publishing Technology Partners. Active in the W3C Publishing@W3C activity, Bill is the W3C Global Publishing Evangelist. He is a member of NISO, co-chairing two NISO Working Groups, and is an active member of SSP, BISG, IPTC, and the DAISY Consortium. Recipient of the SSP Distinguished Service Award and the BISG Industry Champion Award, he is general editor of The Columbia Guide to Digital Publishing, serves on Learned Publishing’s editorial board, and is a columnist for Publishers Weekly. Clients have included societies such as NEJM, NAP, IEEE, and ACP; MIT, Harvard, Cambridge, Toronto, and Columbia university presses; publishers like PLOS, SAGE, Norton, Cochrane, and Pearson; and the British Library, the World Bank, OCLC, ORCID, and the EU Publishing Office.
Disclaimer:
This is to inform readers that the views, thoughts, and opinions expressed in the article belong solely to the author, and do not reflect the views of Amnet.
Copyright © 2022 Amnet. All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other non-commercial uses permitted by copyright law. For permission requests, write to [email protected].