Download Slides (5.7 MB)

Many accessibility experts know about decorative graphics – graphics inserted to provide visual interest, but which can be skipped by a screen reader. I myself have used decorative graphics (e.g. the background on my A Linguist in the Wild blog), so I have no objection to them.
But I believe I have discovered another kind of decorative graphic – the stealth decorative visual. These are elements which a platform may treat as a graphic, but which an editor may not realize are there.
Consider the following slide explaining how to begin an embroidered pendant. How many images are in the actual slide? I’ll let you think about this for a second, then you can see the answer below.
The following nine (9) images are on the slide.
Here’s another slide with a possible stealth graphic – this time from a linguistics course. It looks like a "text-only" slide, but in reality there is a big blue square in the upper right (it’s on almost every slide). Unless the editor goes into the slide master to mark the blue square as "Decorative," it could be read out as an image by some screen readers (sigh…).
From Corpus Inscriptionum Latinarum, Volume 4.
I.7.1 (in the vestibule of the House of Cuspius Pansa); 8075:
“The finance officer of the emperor Nero says this food is poison.”
II.7 (gladiator barracks); 8792:
“On April 19th, I made bread.”
VI.16.15 (atrium of the House of Pinarius); 6842:
“If anyone does not believe in Venus, they should look at my girlfriend.”
The process of converting a document is especially perilous. Many items, such as text boxes, can become <Figure> tags which contain lists or paragraphs…but no image ALT text. How is that handled in a screen reader?
Because of this, it is recommended that most "objects" in Adobe InDesign be pre-tagged to be exported as <Artifacts> instead, so they can be skipped by a screen reader. Unfortunately, the same options don’t always exist in other document tools.
Why hasn’t this topic been covered widely? I believe it’s because accessibility began as "web accessibility" within the HTML framework. HTML leverages CSS styles to allow formatting options, particularly borders and shading, to be added to different elements.
If you consider the image below from the Penn State Anthology Ally landing page, you will see that the actual code has no images at all.
This content is just a special DIV with rounded corners and a border, plus a custom H2 tag. The blue "button" is really an A link with border and shading added. Screen readers are designed to skip the CSS and proceed with the semantic structure only. Thus there are rarely any stealth graphics in modern HTML. Isn’t HTML cool?
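To make that concrete, here is a minimal sketch of the pattern (the class names, colors, text and URL are my own placeholders, not the actual Penn State markup). Everything visual lives in the CSS, so a screen reader encounters only a heading, a paragraph and a link:

```html
<style>
  .callout {
    border: 2px solid #1e407c;   /* the visible box is a CSS border, not an image */
    border-radius: 12px;         /* rounded corners drawn by CSS */
    padding: 1em;
  }
  .callout .button {
    background-color: #1e407c;   /* the blue "button" is just a styled link */
    color: #ffffff;
    padding: 0.5em 1em;
    border-radius: 6px;
    text-decoration: none;
  }
</style>

<div class="callout">
  <h2>Alternative Formats</h2>
  <p>Download your course files in the format that works best for you.</p>
  <a class="button" href="https://example.edu/ally">Learn more</a>
</div>
```

Because the border, rounding and shading are pure CSS, there is nothing for a screen reader to announce as a graphic, which is exactly the opposite of the slide-deck situation above.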
First, let me admit I have a PDF problem. When I see a random Web page with interesting content, I click the "Print to PDF" button so that it can be captured for eternity (just in case the Web page disappears someday).
This isn’t really an accessibility issue unless I start sharing it, specifically with students enrolled in a course I’m teaching. It makes sense to do this from an archival point of view, but that untagged PDF is now a major accessibility headache, especially when it’s combined with other third-party PDF items like journal articles or other reports which may or (much more likely) may not be accessible.
What can an instructor do to mitigate this problem? Here are my recommendations below.
Most websites, at least the text part, are more accessible than a PDF printout. Most websites have headings; a lot of PDF files don’t even have that. Major news, academic and even gossip sites have really taken the time to include image alt text, table headers and other accessibility features.
Note that the Print to PDF function destroys any accessibility features that a web team may have implemented…leaving you with an untagged PDF.
Websites have also become more stable in recent years. For major web sites like the BBC, content published in the past is discoverable. Examples include the obituary of Margaret Thatcher (2013) or even the death of the Queen Mother (2002).
There are still times when you need a backup.
You may like the speed of "Print to PDF" and only want to worry about accessibility later. I understand that, but "later" for accessibility is coming sooner than you think. A recent DOJ announcement requires course content at Penn State to be accessible by April 2026…even if there are no students requiring accommodations.
Why? Because a student requiring accommodations may experience significant delays waiting for content to become accessible at the last minute. This forces students with disabilities to wait for access to material their classmates already have.
Should you tag your files now? Maybe, but it can be a hassle. PDF files can come from multiple sources, ranging from Photoshop to Python. Anything can be in there…
Alternatively, you can capture the content in a format that is easier to make accessible, such as a Word file.
If you decide to use it as a PDF, you can export a tagged PDF from the Word file later.
If you are worried about speed, you can just do the initial copy and paste from the web site into Word. This lets you capture the data you need at the time you see the article.
Here are some files created from a blog entry on tennis ball colors. I wrote the blog, so I gave myself permission to create PDFs here. Despite the entry being over five years old, many links have remained active.
If you are familiar with Adobe Acrobat, feel free to review them, and then to tag the Print to PDF version!
But wouldn’t moving content into a Word file violate copyright? Perhaps (I’m not a lawyer), so you may want to be cautious. Pointing people directly to the Web site instead of using a PDF avoids some of these issues (unless the Web site itself is inaccessible).
But consider what happens now when a PDF is made accessible via tagging within a tool like Adobe Acrobat.
Again, I’m not a lawyer, but there are some things to consider.
What can AI (Artificial Intelligence) do for you? Apparently, produce a decent description of charts, graphs and infographics. I’m generally fairly skeptical about AI applications, but some tools, such as the ASU Image Accessibility Creator tool and others, can truly save time for someone needing a description.
But like all technology, you need to understand the limitations. Here are some of my observations about AI image description that could be useful.
Believe it or not, one of the most tedious chores of image description is transcribing text. Sighted people can read text embedded in a photograph or graphic, but screen readers cannot.
Optical Character Recognition (OCR) tools have been developed to extract text, but traditional tools still have drawbacks, including interpreting the reading order of complex layouts such as infographics, org charts or bar charts.
Many of the new AI tools do a fantastic job just with transcribing text. In comparison with more traditional OCR tools, they seem to do a better job of interpreting text, even in very small fonts, and understanding where the pieces go.
For English text, I haven’t found any major issues, even with less-than-stellar image quality. But I haven’t tested any phonetic transcription or non-English text yet.
See an example below of an infographic with a tricky layout about grand juries vs. trial juries.
Note: I asked the AI not only to transcribe the image, but to provide some of the output in an accessible HTML file. Although I had to make some tweaks, the process took about five minutes to complete. The original AI description included icon descriptions, but editors can determine whether they are decorative (and can be omitted) or not.
Description: A comparison chart of Grand Jury vs Trial Jury in the Federal Court System, including jury size, question addressed, what happens, who presents evidence, and duration of service, with icons and descriptions for each aspect. See details below.
Caption: This is a teaching tool with information available at uscourts.gov Educational Resources. Source: <uscourts.gov> Educational Resources.
The image is an infographic titled “Grand Jury v. Trial Jury in the Federal Court System Simplified.” It is divided into two vertical sections: “Grand Jury” on the left and “Trial Jury” on the right. A series of horizontal comparisons are made between the two types of juries, highlighting differences in jury size, the questions they address, the process of presenting evidence, the parties involved, and the duration of service.
Infographic comparing Grand Jury and Trial Jury in the federal court system, covering jury size, roles, evidence presentation, involved parties, and service duration.
Editor’s Note: A side-by-side comparison chart has been converted to the table below.
Aspect | Grand Jury | Trial Jury |
---|---|---|
Jury Size | 16-23 Jurors | 6-12 Jurors |
Question | Decide if the Prosecutor Has Enough Evidence to Bring Charges | Find "Guilty/Not Guilty" (Criminal Cases) or Find "Liable/Not Liable" (Civil Cases) |
What Happens | The Government Presents Evidence in Closed Hearing | Prosecution and Defense Present Evidence in Open Court |
Who Presents Evidence | U.S. Attorneys (Prosecutors) | Attorneys for Both Sides: Prosecution and Defense |
Duration of Service | One Day a Month for Several Months | One Trial |
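As a side note, this is also where accessible table markup matters: a screen reader can only announce the row and column context if the headers are tagged. Here is a minimal sketch of the first rows (my illustration, not the AI’s actual HTML output):

```html
<table>
  <caption>Grand Jury v. Trial Jury in the Federal Court System</caption>
  <thead>
    <tr>
      <th scope="col">Aspect</th>
      <th scope="col">Grand Jury</th>
      <th scope="col">Trial Jury</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <!-- scope="row" lets a screen reader announce "Jury Size" with each cell -->
      <th scope="row">Jury Size</th>
      <td>16–23 jurors</td>
      <td>6–12 jurors</td>
    </tr>
    <tr>
      <th scope="row">Duration of Service</th>
      <td>One day a month for several months</td>
      <td>One trial</td>
    </tr>
  </tbody>
</table>
```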
Another common task in academic image description is technical diagrams such as bar charts, line charts and maps. So long as your charts aren’t too exotic, AI does a good job providing a description, including transcribing the text of titles, keys, legends, axis labels and other snippets. AI will also describe graphic elements, icons and colors, all of which are helpful when interpreting data.
One gap I have noticed is that AI doesn’t always estimate data values. One description of a Harrisburg climate chart mentioned a temperature peak in July, but not an approximate value. Data estimation based on visuals alone can be tricky though. It’s always good to have the data sourced as a table or some sort of spreadsheet/data file.
Image Description: The image is a combined bar and line graph representing the Monthly Climate Normals for the Middletown-Harrisburg Area, PA, from 1991 to 2020. The vertical axis on the left is labeled "Precipitation (inches)" and scales from 0 to 6 in increments of 0.6. The vertical axis on the right is labeled "Temperature (°F)" and scales from 15 to 90 in increments of 15. The horizontal axis lists months from January to December.
Green vertical bars illustrate total precipitation normal for each month, showing that the precipitation peaks around July [at about 4.5 inches]. There are three lines indicating different temperature measures: the blue line represents the mean minimum temperature normal, the brown line the mean average temperature normal, and the red line the mean maximum temperature normal. All temperature lines show a bell-shaped curve, with the higher temperatures in the middle of the year and lower temperatures at the beginning and end of the year. The min and max temps are listed below:
The lines display a clear seasonal pattern, with the mean max temperature (red) peaking in the middle of the year during the summer months, the mean avg temperature (brown) following closely, and the mean min temperature (blue) being consistently below the other two lines. The graph also contains a note "Click and drag to zoom to a shorter time interval," suggesting interactive functionality.
Alt-Text: "Graph of Monthly Climate Normals for Middletown-Harrisburg Area, PA, displaying precipitation as green bars and temperature with a blue, brown, and red line for mean min, avg, and max temperature normals respectively, showing seasonal variations."
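When an image needs both a short alt text like the one above and a long description, one common HTML pattern (the file name and id are my placeholders) is to link the two programmatically:

```html
<!-- short alt on the image; the long description lives in an
     adjacent element linked via aria-describedby -->
<img src="harrisburg-climate-normals.png"
     alt="Graph of Monthly Climate Normals for Middletown-Harrisburg Area, PA,
          with precipitation as green bars and three temperature lines."
     aria-describedby="climate-longdesc">
<div id="climate-longdesc">
  <p>Precipitation peaks around July at about 4.5 inches. The mean maximum,
     average and minimum temperature lines all follow a bell-shaped seasonal
     curve, peaking in the summer months.</p>
</div>
```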
Measure | Jan | April | July | Oct |
---|---|---|---|---|
Total Precipitation (inches) | 3.0 | < 3.6 | < 4.8 | > 3.8 |
Mean Max Temperature (°F) | 43 | 66 | 86 | 66 |
Mean Avg Temperature (°F) | 36 | 56 | 78 | 54 |
Mean Min Temperature (°F) | 18 | 42 | 66 | 45 |
I would recommend having an SME (subject matter expert) review anything very unusual. I fed AI a syntax chart from a paper of mine, and the results were good, but what took AI about 3-4 paragraphs to describe could be condensed by the SME to 1-2 sentences depending on the context. Below is another syntax tree which AI described well, but I did tweak the descriptions.
Image Description: The image depicts a syntactic tree diagram used in linguistics to illustrate the structure of a sentence. The tree is composed of green text and lines, representing different components of a sentence. At the top of the tree is the letter “S,” indicating the sentence level. Below “S,” the tree branches into “NP” (Noun Phrase) and “VP” (Verb Phrase). Further, the “VP” splits into “V” (Verb) and another “NP,” which is marked as containing a “Det” (Determiner) and “N” (Noun).
Alt-text: Syntactic tree diagram of a sentence [John hit the ball].
Text in Image:
A syntax tree for the sentence (S) “John hit the ball.” The top S node branches into a noun phrase (NP) ‘John’ and a verb phrase (VP) ‘hit the ball’. The VP (corresponding to a traditional predicate) branches further into V ‘hit’ and NP ‘the ball.’ The last NP ‘the ball’ branches into a determiner (D) ‘the’ and noun (N) ‘ball.’
In an ideal world, there could be universal support for graphics, but even in mainstream academia, there are technology limits to the universal use of graphics. Fortunately, there are often text-based alternatives that can be used. Here’s a linearized representation of the tree above that could be used in some cases.
Sentence: John hit the ball.
[S [NP John] [VP hit [NP [D the] [N ball]]]]
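If the tree still has to ship as an image, the linearized form can ride along as a visible text alternative. A rough sketch (the file name is my own placeholder):

```html
<figure>
  <img src="syntax-tree-john-hit-the-ball.png"
       alt="Syntax tree diagram of the sentence: John hit the ball.">
  <!-- the visible caption carries the full structure for anyone
       who wants more than the short alt text -->
  <figcaption>
    Linearized form: [S [NP John] [VP hit [NP [D the] [N ball]]]]
  </figcaption>
</figure>
```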
Editing descriptions is also very important for repeated graphics, in which maybe only one element is really important in a particular graphic to represent a change or focus. Once a person has become familiar with the format, it’s more efficient to focus on these first and skip what is already known.
Many people not directly involved in accessibility wonder why SME input is important at the beginning. Reviewing image descriptions in an academic context from both perspectives has made me realize that bridging the gap early makes accessibility more efficient for everyone.
Do you recognize at least one woman in the photo below? It’s a photo of First Lady Barbara Bush and Princess Diana taken in the White House. Many people are able to recognize celebrities, but the AI tools I have used are not able to do so. Instead they describe women sitting on a sofa and their outfits.
If you are in a situation where you cannot identify people or objects, I do recommend Google Reverse Image search or something similar to find other "matching" photos.
Although the ASU tool can theoretically generate MathML, its OCR engine has not produced optimal results. I would still recommend Equatio or MathPix for equation OCR.
Note: I want to add that this functionality is slightly beyond what is intended. Image description can mean many things depending on the context.
According to
\sqrt[4]{\sqrt[5]{\sqrt[3]{\sqrt[17]{240}}}}=
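For what it’s worth, the OCR captured the structure correctly. As a quick sanity check (my arithmetic, not the tool’s output), nested roots multiply their indices:

```latex
\sqrt[4]{\sqrt[5]{\sqrt[3]{\sqrt[17]{240}}}}
  = 240^{\frac{1}{17}\cdot\frac{1}{3}\cdot\frac{1}{5}\cdot\frac{1}{4}}
  = 240^{1/1020} \approx 1.0054
```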
Note: In my experience, more advanced math students, even those using screen readers, usually avoid "spoken" math. It’s easier to process math symbols, LaTeX or MathML/equation editor outputs.
This was an interesting area. For some fine arts, AI was able to make useful descriptions of color and composition, but often missed details that an art historian would probably comment on.
For example, in a piece of embroidery, AI might be able to identify that there was embroidery, but miss pretty key points like the presence of cross stitch elements or which animals and flowers were being depicted. Both of these are important for identifying artistic genres, periods or locations of art.
Adding a closeup of an embroidered piece provides a different perspective for AI. In the closeup, AI did pick up the cross stitch, but had problems identifying the Runes as Runes – it did much better with the full picture of all the runes. Context does matter a lot.
AI: Image Description: The image shows a piece of white fabric with a cross-stitch design. The design features two rows of thick, intertwined red and blue lines at the top and bottom. In between these rows are two sets of letters. The upper set consists of large, blue letters spelled out in an unusual, varied typeface. Below these, there is a series of smaller, red letters in a simpler typeface. The fabric has a visible weave, typical of cross-stitch materials.
Text: Upper set: “MMROAN” (actually Runic Letters) Lower set: “EMLNGOD”
As wonderful as the AI tools are, you need to proof the output.
AI can "hallucinate," meaning it may make an incorrect assumption, and using that assumption, make further wildly incorrect statements.
I asked it to describe a modern Runic embroidered sampler (below) and it identified one motif as a "Viking ship" and mentioned it had four rows of oars. It also said the piece was a bookmark (it’s really six inches wide).
In reality, the Viking ship was a modified version of an abstract animal with a head similar to a ship’s head motif, but with actual legs and hind quarters. Not a bad miss in terms of shape, but very inaccurate conceptually.
As IBM notes in the link, hallucinations may be the result of incomplete exposure to training data in a given subject domain. Because academic research may be filling in current knowledge gaps, there may always be a slight lag in what AI can do with academic content.
A description could be very close to correct, but miss a crucial detail. In one case of a PowerPoint screenshot with a utility window open, the main screen was described, but not the window on the right. As with other OCR tools, you could upload just a cropped section and redo the scan to receive a transcript. Or you could tweak the prompt.
The AI tool described the slide content, but not the elements in the pane on the right. Reloading just the missing element will usually result in a usable description.
The image presents a question on whether the flag of Pennsylvania should be redesigned, showing three different flag designs.
Image description is one case where AI is coming into its own. But as with automatic speech recognition and other applications, AI output needs some auditing to be 100% accurate.
This session reviews different uses of map images and different strategies to make them accessible including tables, ALT text, long description and tactile/3D graphics.
Presented at the 2020 A Future Date Virtual a11y Event. Thanks to Matt May of Adobe, 3Play and other corporate sponsors for organizing this.
Download Slides (No Audio, with Notes)
This post documents the experience of getting a print from the Penn State Maker Commons service.
The good thing about Maker Commons is that you don’t need direct access to a 3D printer, nor do you need to learn how to load plastic media or make sure the nozzle is properly cleaned and aligned, because the Maker Commons staff is there to do that for you. However, you will still need to interact with some media software, most of it free, and you will need to invest some time, especially for your first print. And there are some restrictions you might have to keep in mind.
If you’re interested in building a basic model, you may want to get a free Tinkercad account. It’s a cloud service which lets you build a model from scratch or import and edit a model/vector graphic file. Maker Commons has some Tinkercad tutorials on their tutorials page.
Another non-cloud option is Google SketchUp Make. Maker Commons also has tutorials for this software. Or you can find and download a 3D model. Or you can use a graphics program which allows you to export a 3D model.
No matter how you create your model, you will need to import it into the free MakerBot Desktop editor. This software is designed for the MakerBot printer and will let you resize your model, set print quality and export it for print.
It’s important to follow the guidelines from Maker Commons or your print could be rejected or ruined. An important one is a relatively new 100-gram plastic restriction. Another is a size restriction of 9.9 inches (L) x 7.8 inches (W) x 5.9 inches (H). Finally, your Quality setting needs to be “Low” or “Standard.”
You also need to make sure the model is centered and lies flat on the print bed. The guidelines show you exactly how to do this.
Here you will need to submit your model file through the MakerBot Innovation Center with your Penn State email address. This will allow you to choose the model and even select the color of plastic you want. Again, it’s important to follow all the instructions.
Ready to submit? Then click the Submit button on the Maker Commons site. You will receive a message when the print file is submitted to the queue, when the job begins to print and when the print is complete.
Once you receive an email that the print is ready to be picked up, head to either Pattee/Paterno at University Park or a location at another campus. You will need to bring your Penn State ID as verification.
If all has gone well, you will receive your print in a clean brown paper envelope. Take off the “raft” and enjoy.
As a leader, I will rely on the expertise of my team and colleagues but provide vision and direction to move us forward. I will strive to maintain calm in the face of chaos and communicate clearly in a sea of noise. Above all, I will be honest with my team and “customers” and treat them with integrity and respect. I will always seek to learn more about leadership needs, especially from my mistakes.
This book reviews the roles and uses people make of “narratives”, both the positives of using stories to teach cultural morals and negatives of people being deluded by their own stories and nightmares. I became interested in this topic because a lot of our knowledge is organized around different “stories” in ways we don’t always take into account as educators. The author of the book, Jonathan Gottschall, is also notable for being an English instructor working with Darwinian theory.
From a personal perspective, I have to say I agree with the idea of a universal storytelling instinct. How else could Joseph Campbell have discovered The Hero with a Thousand Faces? Like many instincts though, it can serve people well in the sense that stories help people bond over a shared experience (e.g. Downton Abbey, Star Wars, Gone With the Wind) and serve as an organizer of information. I personally can recite the wives of King Henry VIII because I have seen lots of Tudor movies. I can also tell you a little bit about the Declaration of Independence based on the musical 1776. On the other hand, stories can also present and crystallize misinformation like “We only use 10% of our brain” (not true). Educators often use stories instinctively in the form of anecdotes, but do we understand how narratives and learning work in a structured way?
Gottschall, Jonathan (2012) The Storytelling Animal: How Stories Make Us Human. New York: Mariner (Houghton Mifflin Harcourt). ISBN-10: 0544002342
Not surprisingly, Gottschall discusses the universality of stories in human cultures across multiple media. Stories began as an oral genre, but in the modern world they can be found in books, radio, movies and TV. They can even be found in role playing games and competition reality TV (e.g. WWF wrestling and The Bachelor). As we all know, there are lots of people trying to make money around the world providing narrative content for people to consume.
Gottschall also mentions that children like to make their own stories, and adults can also make their own in the form of original stories or fan fiction type genres. What’s the evolutionary purpose of story? Gottschall discusses several theories, but one he mentions is a connection to the mechanism of imagination. People do use scenario building as a way to plan more mundane activities like food gathering/hunting or actual social events. Both are aided by trying to create a mental model of what people or animals are likely to need and do.
Interestingly, Gottschall spends a lot of time talking about the dark side of stories. One theme that emerges is that a lot of stories are actually about bad things that happen. Even if a story has a happy ending, there is typically some sort of bad situation that needs to be resolved. Gottschall presents some studies of children creating stories, and those too involve lots of bad events like kidnapping, murder, assault and theft.
In a similar vein, Gottschall presents research that most dreams are usually nightmares. Gottschall mentions a model in which one function of narrative is to allow individuals to model different “dilemmas” and then possibly find solutions (or not). One offshoot of this theory is science fiction stories that explore the potential of modern technological developments. Mary Shelley questioned how far science should go in saving a life in Frankenstein, much like robot stories question how artificial intelligence will impact our lives in the future.
Another potential dark aspect of stories is that people also use stories to build conspiracy theories. Gottschall comments that every person considers him/herself to be the hero of their own story (even if no one else does). Normally the realities of life check our impulses to be too “heroic”, but a mental illness can often result in delusions of paranoia or grandeur where all the world is full of villains and only the hero knows the secret mission.
Similarly, people may build a story to connect inexplicable events in a “coherent” narrative. As Gottschall notes though, the resulting narrative can often be wildly inaccurate and can be politically dangerous. One interesting example was a conspiracy theory in West Africa that health care workers were actually creating Ebola (while many in the U.S. felt Ebola was a plot controlled by President Obama). In case you are wondering if the “educated” middle class is immune, consider the damage of the conspiracy theory that vaccines can cause autism. In late 2014, the drop in vaccination rates led to a measles outbreak in California. This was caused by one scientist’s hoax that a lot of people’s “intuition” felt was true.
As a final point, Gottschall notes that sharing a common story can bring people together in a common cause, but again he warns of the dangers of a common story becoming a negative force. A positive example might be people who believe in the values of American democracy and may celebrate key moments like the signing of the Declaration of Independence. But the same story can also lead to prejudice against non-Americans (or not-yet-American immigrants). In the worst-case scenario, Gottschall describes how Hitler used a story with theatrical accompaniments to convince many Germans to follow his Nazi agenda. Scary stuff.
Returning to a more positive aspect, Gottschall notes that a benefit of reading others’ stories is that the reader can feel empathy for different people in situations they might not experience in real life. Some models suggest that stories can act as a social skills simulation zone. Etiquette lessons are often taught by showing what happens to heroes who follow the codes versus villains who break them. More importantly, a story can also show different points of view in a way that’s deeper than just presenting the information. Interestingly modern role playing games take that a step further by involving true social interaction within a fictional framework.
Gottschall’s book is a good introductory narrative on narrative that raises a lot of interesting points. Ultimately though, I think it’s important to dig a little deeper into what this means for cognition and pedagogy. One aspect that I think Gottschall missed is how narrative is used to structure “data”, particularly historical data or scientific data.
As I mentioned earlier, I was initially intrigued by the notion of using narrative to help learners build meaning. I remember novelist Morgan Llewelyn commenting that she couldn’t understand why students found history boring because “it’s all sex and violence.” That is, instead of concentrating on memorizing dates, it is likely more important to focus on the story behind the dates. I also think that story can be a good introduction to a particular field of study…although it can’t end there. One skill set that many students must learn is how to use data to build a quasi-accurate narrative, but they also need to learn caution not to extrapolate too much from their limited data. This is very important for fields like archaeology or psychology.
I was also intrigued by the idea of linking story and empathy. I think some of the hardest learning objectives to teach are affective goals, but stories appear to be a way to open a window into the mind (and soul) of a student. In the past, educators have talked about using games and simulations for the same effect, but is this really a byproduct of re-enacting a narrative?
Finally, I do think Gottschall’s points about the dark side of story are very important for education. I think all educators can think of examples of how scientific facts become distorted by meme simplification (aren’t memes just a really short story?). Similarly, they can all think of inaccurate stories that lead to prejudice and incorrect beliefs. Understanding how and why these stories are built can help us build better narratives for our students.