One accessibility audience within the Web Content Accessibility Guidelines (WCAG) is users designated as having “low vision.” Low vision can be caused by a variety of conditions, including glaucoma, diabetic retinopathy and cataracts, all of which distort visual perception.
We think of these conditions as permanent, but your vision can also become distorted temporarily for other reasons, such as dilation at the eye doctor, allergies, “glare” or a migraine. It was while sitting at my computer after an eye dilation (to check for retinopathy) that I noticed sans-serif fonts were easier to read in that situation. I gained a new appreciation for the importance of font selection.
Legibility When Blurred
Below is a sample of text explaining different types of allergy symptoms in sans-serif and serif fonts of different weights. I also threw in some italics for comparison. Are there any that are particularly good or bad?
The Text Unblurred
My eyes are itchy today
My eyes are watering today
My eyes can’t stand the glare
My nose is runny today
My nose is red today
My head hurts today
In case you were wondering, I used Photoshop’s Motion Blur (22 px at −30°). That seemed to be a more accurate simulation of my experience than the simpler Gaussian blur.
My Recommendations
My first observation is that there probably isn’t a universal font that suits everyone, but some guidelines definitely apply:
Beware lightweight fonts – any distortion, including pixelation on a monitor, will affect them the most. I’ve seen reports that they “violate contrast,” among other complaints on Reddit and advisories from Pimp My Type. Lightweight fonts can make for elegant headlines, though, and can work well in print.
Make your italics bold – As you can imagine, diagonals also become worse when vision is distorted. For this reason, some people recommend never using italic text. However, there are conventions like book titles and foreign-language phrases which are done in italics. Making this italicized text bold can be helpful – although it might not help readers with dyslexia.
Serif vs. Sans Serif? – In this particular case, the text is large enough that the bold version of either typeface is discernible. The same is true for body text (see below). In terms of reading with blurred vision, weight may be more important than the presence or absence of serifs.
Dyslexia? – The recommendations above don’t necessarily apply to people who prefer certain fonts due to cognition and reading differences – but there is some overlap.
New Symbols? – I like sans-serif fonts, but when lots of technical symbols, like math and phonetics, are involved, I have an instinctive preference for serif fonts, partly because there are so many hooks, bars, accents and rotations you need to focus on. Or maybe I just learned them first in a serif font.
What makes a video accessible? Definitely captions (for those who can’t hear audio), but also audio description (for those who can’t see the visuals).
Trying to avoid the cost of proper captions by removing narration? You’re actually making the situation worse and more expensive to fix. Pricing for inserting an audio description track can be 3-5 times the rate of captioning alone – see the Cornell 3Play pricing sheet for an example. Even worse, you’re preventing any student who prefers listening (vs. reading) from getting information.
If you’re looking for an accessible video, I recommend using one with some narration. You’ll likely have less work to do to fix it.
Advantages of Embedded Narration
Inserting an audio description track after the fact is more complicated than captioning because you have to 1) write the description of the visual content and 2) find a time slot in the video where the description won’t interfere with the original audio (including dialogue, key lyrics or sound effects). Media specialists might even have to lower the volume of the original audio during a description. Overall, the experience can sometimes sound clunky and even distracting to a sighted user.
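Step 2 – finding a time slot – can even be roughed out programmatically if you already have the timing of the original dialogue (say, from a caption file). Here is a minimal Python sketch of the idea; the function and the minimum-gap threshold are my own invention, not any captioning tool’s actual workflow:

```python
def description_gaps(speech, min_len=2.0):
    """Given dialogue intervals as (start, end) pairs in seconds,
    return the silent gaps of at least min_len seconds where a short
    audio description could be inserted without overlapping dialogue."""
    gaps = []
    for (_, a_end), (b_start, _) in zip(speech, speech[1:]):
        if b_start - a_end >= min_len:
            gaps.append((a_end, b_start))
    return gaps

# Dialogue from 0-10 s and 14-20 s leaves one 4-second gap.
print(description_gaps([(0, 10), (14, 20)]))  # [(10, 14)]
```

A real workflow would still need a human ear to confirm that the “gap” isn’t carrying key music or sound effects.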
But if you can use a narration that describes the images and reads the text aloud, the experience is more organic and useful for all students. If you find or create a video that has a good embedded narration, you only have to worry about the captions. Accessibility experts will tell you – adding a caption is much easier than inserting a second audio track.
Trigonometry Unit Circle Videos
For this exercise, I compared different YouTube tutorials on how different trigonometric functions are defined via the unit circle (a circle of radius = 1 unit). As you might expect, there was a wide range of techniques for producing the content. I’ve subdivided the results into those with “close narration”, “overall narration” and “no narration (cool graphics)”. Which ones do you prefer?
“Close” Narration
By “close narration”, I mean that the narrator is describing text and visuals as they appear on the screen. It may not catch 100% of everything, but it definitely catches 80–90%. I like these videos because I could point a student with visual or reading disabilities to them. Unfortunately, only one of these videos (the Khan Academy video) is fully captioned. It ranks as #1 for most accessible in this group.
The descriptions are really good, and it is captioned, but I don’t think the captions are fully edited. There is no punctuation or capitalization, and that makes processing them more difficult. I do recommend auditing captions in order to avoid weird errors. I am happy that this video transcribes sin θ as “sine theta” and not “sign theta”…
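If you do end up hand-editing caption files, the WebVTT format used by YouTube and most players stores each cue with HH:MM:SS.mmm timestamps. A small Python helper (my own, purely illustrative) for producing well-formed timestamps:

```python
def vtt_timestamp(seconds):
    """Format a time in seconds as a WebVTT cue timestamp (HH:MM:SS.mmm)."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)  # milliseconds per hour
    m, ms = divmod(ms, 60_000)     # milliseconds per minute
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

print(vtt_timestamp(75.5))  # 00:01:15.500
```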
View More Details
At this point the unit circle has several lines including the following:
A diagonal line of length one rising up and to the right to (x,y) [not actually labeled]. The angle between the diagonal and the horizontal line is θ (theta).
A horizontal line labeled cosine θ – going from (x,y) to the y-axis.
A vertical line labeled sine θ – going from (x,y) to the x-axis.
A downward line labeled D goes from (x,y) down to the x-axis
The caption says: “circle at the point and perpendicular to the radius let’s the line segment D”
Overall Narration
In this case, the narration is present but may not always tie as closely to the visuals. Some visual description may still be needed, but perhaps not as much as for a video with no spoken narration. This can be a judgment call. A description that is adequate for an advanced student may be missing details for students less proficient in math.
Actually, the narration is pretty complete, but it includes phrases like “My sine is of 0 [radians] is 0 – look at the y-coordinate.” I think that would be sufficient for some students, but something more detailed would help others: “My sine is of 0 [radians] is 0 – look at the y-coordinate for the point (1,0).” I admit this is a little nit-picky, but when narration is on the rapid side, it’s always good to slow down. I know it’s good for me to slow down too.
In reality, the description is pretty good, but it’s not fully synced. If you cannot see the content, that doesn’t always matter. Overall, this is pretty good in terms of narration. But there are some funky glitches in the automated captions like “0 0” for “(0,0)”.
No Narration (Cool Graphics)
Below are links to three sine wave videos with cool graphics but no narration (and therefore no captions either). Some have background music, though – imagine a student expecting to listen to a math video and instead hearing instrumental music or the song “Circle of Life.” It can happen…
What can an instructor do? One suggestion is to play it in class and do the narration that should be there. The main drawback is that the video wouldn’t be as effective outside of class.
P.S. Don’t Forget Color Accessibility
For complete video accessibility, make sure your visuals (e.g. title screens, onscreen text and graphs) meet contrast and color coding guidelines. You never know how your video might be displayed – options range from YouTube on a desktop to streaming on a phone. According to the University of Central Florida, 74% of students used Zoom mobile as of 2023.
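If you want to check contrast yourself, the WCAG 2 contrast ratio can be computed directly from the relative luminance formulas in the guidelines. A minimal Python sketch (the function names are mine; the constants come from the WCAG definition):

```python
def channel(c):
    """Linearize one 8-bit sRGB channel per the WCAG 2 definition."""
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """WCAG 2 contrast ratio between two sRGB colors, ranging 1 to 21."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# White on black is the maximum, 21:1; WCAG AA asks for at least
# 4.5:1 for normal-size body text.
print(round(contrast((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```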
Many accessibility experts know about decorative graphics – graphics inserted to provide visual interest, but which can be skipped by a screen reader. I myself have used decorative graphics (e.g. the background on my A Linguist in the Wild blog), so I have no objection to them.
But I believe I have discovered another kind of decorative graphic – the stealth decorative visual. These are elements which a platform may treat as a graphic, but which an editor may not realize are there. These can include:
Background shapes
Background textures
Tool lines
Textboxes (especially if they have borders and fill colors)
A PowerPoint Example
Consider the following slide explaining how to begin an embroidered pendant. How many images are in the actual slide? I’ll let you think about this for a second, then you can see the answer below.
View List of Images
The following nine (9) images are on the slide:
The photo of the black felt tear drop – this has explanatory ALT text.
The background is a gradient, but some slides contain decorative compass diagrams
There are two pink lines on top of the photo – these are marked as "decorative"
The five circles were also added and marked as decorative.
Slide content and photo by Elizabeth J. Pyatt. Design by Tanja Berlin.
Here’s another slide with a possible stealth graphic – this time from a linguistics course. It looks like it’s a "text-only" slide, but in reality there is a big blue square in the upper right (it’s on almost every slide). Unless the editor goes into the slide master to mark the blue square as "Decorative" it could be read out as an image on some screen readers (sigh…).
Some authentic Latin inscriptions similar to 21st century social media posts.
View Text Transcript
I.7.1 (in the vestibule of the House of Cuspius Pansa); 8075:
“The finance officer of the emperor Nero says this food is poison.”
II.7 (gladiator barracks); 8792:
“On April 19th, I made bread.”
VI.16.15 (atrium of the House of Pinarius); 6842:
“If anyone does not believe in Venus, they should look at my girl friend.”
The Journey to PDF
The process of converting a document to PDF is especially perilous. Many items, such as textboxes, can become <Figure> tags which contain lists or paragraphs…but no image ALT text. How is that handled in a screen reader?
Because of this, it is recommended that most "objects" in Adobe InDesign be pre-tagged to be exported as <Artifacts> instead, so they can be skipped by a screen reader. Unfortunately, the same options don’t always exist in other document tools.
HTML is Different
Why hasn’t this topic been covered widely? I believe it’s because accessibility began as "web accessibility" within the HTML framework. HTML leverages CSS styles to allow formatting options, particularly borders and shading, to be added to different elements.
If you consider the image below from the Penn State Anthology Ally landing page, you will see that the actual code has no images at all.
This content is just a special DIV with rounded corners and a border, plus a custom-styled H2 tag. The blue "button" is really an A link with border and shading added. Screen readers are designed to skip the CSS and process only the semantic structure. Thus there are rarely any stealth graphics in modern HTML. Isn’t HTML cool?
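To make the pattern concrete, here is a minimal sketch of that kind of markup. The colors and inline styles are my own invention, not the actual Penn State code – the point is simply that the box and the "button" are built from CSS alone, with no <img> element for a screen reader to stumble over:

```html
<!-- A styled box and "button" made entirely from CSS; nothing here
     is exposed to assistive technology as an image. -->
<div style="border: 2px solid #1e407c; border-radius: 12px; padding: 1em;">
  <h2>Alternative Formats</h2>
  <p>Download this content in another format.</p>
  <a href="#" style="background: #1e407c; color: #fff;
     padding: 0.5em 1em; border-radius: 6px; text-decoration: none;">
    Download
  </a>
</div>
```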
First, let me admit I have a PDF problem. When I see a random Web page with interesting content, I click the "Print to PDF" button so that it can be captured for eternity (just in case the Web page disappears someday).
This isn’t really an accessibility issue unless I start sharing it – specifically, with students enrolled in a course I’m teaching. It makes sense to do this from an archival point of view, but that untagged PDF is now a major accessibility headache, especially when it’s combined with other third-party PDF items like journal articles or other reports which may (or, much more likely, may NOT) be accessible.
What can an instructor do to mitigate this problem? Here are my recommendations below.
Link to the Web Site (If You Can)
They’re More Accessible
Most websites, at least the text part, are more accessible than a PDF printout. Most websites have headings; a lot of PDF files don’t even have that. Major news, academic and even gossip sites have really taken the time to include image alt text, table headers and other accessibility features.
Note that the Print to PDF function destroys any accessibility features that a web team may have implemented…leaving you with an untagged PDF.
They’re More Stable
Websites have also become more stable in recent years. For major sites like the BBC, content published in the past remains discoverable. Examples include the obituary of Margaret Thatcher (2013) or even the death of the Queen Mother (2002).
But I Need a PDF Backup…
Why a PDF Backup?
There are still times when you need a backup. For instance,
Sites change – Wikipedia and other sites may change content depending on user input. If you want to capture content at a specific time, a PDF may help.
There’s a subscription/pay wall – maybe an instructor has subscribed to a news source, but should students subscribe just to read one article for a specific course? Opinions differ….
Making Accessible PDFs from Web Sites
Method 1: Print to PDF, then Tag in Adobe Acrobat
You like the speed of "Print to PDF" and only want to worry about accessibility later. I understand that, but "later" for accessibility is coming sooner than you think. A recent DOJ announcement requires course content at Penn State to be accessible by April 2026…even if there are no students requiring accommodations.
Why? Because a student requiring accommodations may experience significant delays waiting for content to become accessible at the last minute. This forces students with disabilities to wait for remediation before they can fully participate.
Should you tag your files now? Maybe, but it can be a hassle. PDF files can come from multiple sources ranging from Photoshop to Python. Anything can be in there….
Method 2: Save Site Content in Word (As a Backup)
Alternatively, you can
Copy and paste the page content into a Word file.
If you decide to use it as a PDF, you can
Make the Word file accessible by adding headings, image ALT text, table headers and so forth. This is much easier and quicker to do in Word versus PDF. Note: If the web site has image alt text, it will be carried over into Word with the image.
Use the Save as PDF function in Word (vs. Print to PDF) to export a fully tagged PDF. All the effort you put into making an accessible Word file will be transferred over into the PDF. This saves you lots of time and grief in the long run.
If you are worried about speed, you can just do the initial copy and paste from the web site into Word. This lets you capture the data you need at the time you see the article.
Sample Files
Here are some files created from a blog entry on tennis ball colors. I wrote the blog, so I gave myself permission to create PDFs here. Despite the entry being over five years old, many links have remained active.
If you are familiar with Adobe Acrobat, feel free to review them, and then try tagging the Print to PDF version!
Copyright?
But wouldn’t moving content into a Word file violate copyright? Perhaps (I’m not a lawyer), so you may want to be cautious. Pointing people directly to the Web site instead of using a PDF avoids some of these issues (unless the Web site itself is inaccessible).
But consider what happens when a PDF is made accessible via tagging within a tool like Adobe Acrobat.
Making an "accessible" PDF alters the content, even if it only affects screen reader users. Only the Word method would let you keep the developer’s original image ALT text.
You can edit text in Word, but actually you can also edit text in Adobe Acrobat.
Some repairs, like fixing improperly encoded fonts (the kind where Greek β is read as a "b" in a screen reader), can be done much more quickly in Microsoft Word than in Adobe Acrobat. Older PDFs used lots of bad fonts.
Both processes cause distortions in the original look and feel of the file. But the Word process could let you remove any unwanted ads or menus.
Some caveats
Again, I’m not a lawyer, but some things to consider would be:
Use a link to a web page as the primary source.
If the PDF backup is needed, restrict the distribution to those enrolled in a course or other appropriate location. Note: Most especially, no PDF should be in a for-fee course packet unless it’s properly licensed.
Include a note with a link to the original, the creation date and any edits you made.
What can AI (Artificial Intelligence) do for you? Apparently, produce a decent description of charts, graphs and infographics. I’m generally fairly skeptical about AI applications, but some tools, such as the ASU Image Accessibility Creator tool and others, can truly save time for someone needing a description.
But like all technology, you need to understand the limitations. Here are some of my observations about AI image description that could be useful.
OCR (Optical Character Recognition) Success!!
Believe it or not, one of the most tedious chores of image description is transcribing text. Sighted people can read text embedded in a photograph or graphic, but screen readers cannot.
Optical Character Recognition (OCR) tools have been developed to extract text, but traditional tools still have drawbacks including interpreting reading order of complex layouts such as in infographics, org charts or bar charts.
AI Results
Many of the new AI tools do a fantastic job just with transcribing text. In comparison with more traditional OCR tools, they seem to do a better job of interpreting text, even in very small fonts, and understanding where the pieces go.
For English text, I haven’t found any major issues, even with less than stellar image quality. But I haven’t tested any phonetics or non-English text yet.
Note: I asked the AI to not only transcribe the image, but to provide some of the output in an accessible HTML file. Although I had to do some tweaks, the process took about five minutes to complete. The original AI description included icon descriptions, but editors can determine if they are decorative (and can be omitted) or not.
Infographic: Grand Jury v. Trial Jury in the Federal Court System.
View ImageView Description with Table
Description: A comparison chart of Grand Jury vs Trial Jury in the Federal Court System, including jury size, question addressed, what happens, who presents evidence, and duration of service, with icons and descriptions for each aspect. See details below.
Caption: This is a teaching tool with information available at uscourts.gov Educational Resources. Source: <uscourts.gov> Educational Resources.
The image is an infographic titled “Grand Jury v. Trial Jury in the Federal Court System Simplified.” It is divided into two vertical sections: “Grand Jury” on the left and “Trial Jury” on the right. A series of horizontal comparisons are made between the two types of juries, highlighting differences in jury size, the questions they address, the process of presenting evidence, the parties involved, and the duration of service.
The top section includes the title, with an illustration of people and the word “Simplified.”
Below the title, a brief text indicates that the infographic is a teaching tool with more information available at “uscourts.gov” in the Educational Resources section.
The headings “Grand Jury” and “Trial Jury” have arrow-like borders pointing inward to the vertical line dividing the sections.
“Grand Jury” points include 16-23 jurors, deciding if there’s enough evidence to bring charges, government presenting evidence in closed hearings, U.S. attorneys (prosecutors) presenting evidence, and the duration of service being one day a month for several months.
“Trial Jury” points include 6-12 jurors, finding guilt or liability in criminal or civil cases, evidence presented in open court by both sides (prosecution and defense), attorneys from both sides presenting evidence, and the duration of service being one trial.
The visual line connecting these points includes icons like a question mark, stack of papers, and microphone, and ends with a calendar icon.
At the bottom, the source is mentioned as “<uscourts.gov> Educational Resources.”
Alt-Text:
Infographic comparing Grand Jury and Trial Jury in the federal court system, covering jury size, roles, evidence presentation, involved parties, and service duration.
Editor’s Note: A side by side comparison chart has been converted to the table below.
Comparison of Grand Jury vs. Trial Jury in the Federal Court System
Aspect | Grand Jury | Trial Jury
Jury Size | 16–23 jurors | 6–12 jurors
Question Addressed | Decides if the prosecutor has enough evidence to bring charges | Finds guilt or liability in a criminal or civil case
What Happens | The government presents evidence in a closed hearing | Prosecution and defense present evidence in open court
Who Presents Evidence | U.S. attorneys (prosecutors) | Attorneys for both sides: prosecution and defense
Duration of Service | One day a month for several months | One trial
Technical Diagrams – Pretty Darn Good
Another common task in academic image description is the technical diagram – bar charts, line charts, maps and so on. So long as your charts aren’t too exotic, AI does a good job providing a description, including transcribing the text of titles, keys, legends, axis labels and other snippets. AI will also describe graphic elements, icons and colors, all of which are helpful when interpreting data.
Missing Data Points
One gap I have noticed is that AI doesn’t always estimate data values. One description of a Harrisburg climate chart mentioned a temperature peak in July, but not an approximate value. Data estimation based on visuals alone can be tricky though. It’s always good to have the data sourced as a table or some sort of spreadsheet/data file.
Harrisburg Climate Chart
Chart generated using data from the National Weather Service. Public Domain, but available on Wikimedia Commons.
AI Description
Open Image Description
Image Description: The image is a combined bar and line graph representing the Monthly Climate Normals for the Middletown-Harrisburg Area, PA, from 1991 to 2020. The vertical axis on the left is labeled "Precipitation (inches)" and scales from 0 to 6 in increments of 0.6. The vertical axis on the right is labeled "Temperature (°F)" and scales from 15 to 90 in increments of 15. The horizontal axis lists months from January to December.
Green vertical bars illustrate total precipitation normal for each month, showing that the precipitation peaks around July [at about 4.5 inches]. There are three lines indicating different temperature measures: the blue line represents the mean minimum temperature normal, the brown line the mean average temperature normal, and the red line the mean maximum temperature normal. All temperature lines show a bell-shaped curve, with the higher temperatures in the middle of the year and lower temperatures at the beginning and end of the year. The min and max temps are listed below:
The lines display a clear seasonal pattern, with the mean max temperature (red) peaking in the middle of the year during the summer months, the mean avg temperature (brown) following closely, and the mean min temperature (blue) being consistently below the other two lines. The graph also contains a note "Click and drag to zoom to a shorter time interval," suggesting interactive functionality.
Alt-Text: "Graph of Monthly Climate Normals for Middletown-Harrisburg Area, PA, displaying precipitation as green bars and temperature with a blue, brown, and red line for mean min, avg, and max temperature normals respectively, showing seasonal variations."
Text in the image:
Monthly Climate Normals (1991-2020) – Middletown-Harrisburg Area, PA (ThreadEx)
Click and drag to zoom to a shorter time interval
Total Precipitation Normal
Mean Max Temperature Normal
Mean Min Temperature Normal
Mean Avg Temperature Normal
Powered by ACIS
With Added Data Points
View Data Table
Approximate Data by Quarter
Measure | Jan | April | July | Oct
Total Precipitation (inches) | 3.0 | < 3.6 | < 4.8 | > 3.8
Mean Max Temperature (°F) | 43 | 66 | 86 | 66
Mean Avg Temperature (°F) | 36 | 56 | 78 | 54
Mean Min Temperature (°F) | 18 | 42 | 66 | 45
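As noted earlier, it’s best when the underlying data is also available in machine-readable form. As an illustration, the approximate quarterly values above (my reading of the chart, with the < and > qualifiers dropped for simplicity) could ship alongside the image as a simple CSV:

```python
import csv
import io

# Approximate values read off the Harrisburg climate chart.
CSV_DATA = """\
Measure,Jan,April,July,Oct
Total Precipitation (inches),3.0,3.6,4.8,3.8
Mean Max Temperature (F),43,66,86,66
Mean Avg Temperature (F),36,56,78,54
Mean Min Temperature (F),18,42,66,45
"""

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
print(rows[1]["July"])  # 86
```

A file like this lets a screen-reader user (or a script) query exact values instead of relying on a visual estimate.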
SME Matters
I would recommend having an SME (subject matter expert) review anything very unusual. I fed AI a syntax chart from a paper of mine, and the results were good, but what took AI about 3-4 paragraphs to describe could be condensed by the SME to 1-2 sentences depending on the context. Below is another syntax tree which AI described well, but I did tweak the descriptions.
File from Wikimedia Commons, released into public domain.
AI and SME Description
Open AI Description
Image Description: The image depicts a syntactic tree diagram used in linguistics to illustrate the structure of a sentence. The tree is composed of green text and lines, representing different components of a sentence. At the top of the tree is the letter “S,” indicating the sentence level. Below “S,” the tree branches into “NP” (Noun Phrase) and “VP” (Verb Phrase). Further, the “VP” splits into “V” (Verb) and another “NP,” which is marked as containing a “Det” (Determiner) and “N” (Noun).
Alt-text: Syntactic tree diagram of a sentence [John hit the ball].
Text in Image:
S
NP
VP
V
NP
Det
N
Open SME Description
A syntax tree for the sentence (S) “John hit the ball.” The top S node branches into a noun phrase (NP) ‘John’ and a verb phrase (VP) ‘hit the ball’. The VP (corresponding to a traditional predicate) branches further into V ‘hit’ and NP ‘the ball.’ The last NP ‘the ball’ branches into a determiner (D) ‘the’ and noun (N) ‘ball.’
Non-Visual Representations
In an ideal world, everyone could make full use of graphics, but even in mainstream academia there are technology limits to the universal use of graphics. Fortunately, there are often text-based alternatives that can be used. Here’s a linearized representation of the tree above that could be used in some cases.
Linearized Tree
Sentence: John hit the ball.
[S [NP John] [VP hit [NP [D the] [N ball]]]]
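Bracket notation like this can also be generated mechanically from a nested data structure. Here is a Python sketch under my own convention (each node is a (label, children…) tuple); note that unlike the hand-written example, it brackets every node, including V:

```python
def linearize(tree):
    """Convert a nested (label, children...) tuple into labeled
    bracket notation, a text alternative to a syntax-tree image."""
    label, *children = tree
    if not children:
        return label  # a leaf: just the word itself
    return "[" + label + " " + " ".join(linearize(c) for c in children) + "]"

tree = ("S",
        ("NP", ("John",)),
        ("VP",
         ("V", ("hit",)),
         ("NP", ("D", ("the",)), ("N", ("ball",)))))

print(linearize(tree))  # [S [NP John] [VP [V hit] [NP [D the] [N ball]]]]
```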
Repeated Graphics
Editing descriptions is also very important for repeated graphics, where perhaps only one element in a particular graphic represents a change or point of focus. Once a person has become familiar with the format, it’s more efficient to focus on these changed elements first and skip what is already known.
Many people not directly involved in accessibility wonder why SME input is important at the beginning. Reviewing image descriptions in an academic context from both perspectives has made me realize that bridging the gap early makes accessibility more efficient for everyone.
Weaknesses
Facial Recognition
Do you recognize at least one woman in the photo below? It’s a photo of First Lady Barbara Bush and Princess Diana taken in the White House. Many people are able to recognize celebrities, but the AI tools I have used are not able to do so. Instead they describe women sitting on a sofa and their outfits.
Wikimedia Commons: First Lady Barbara Bush and Princess Diana have tea in the Yellow Oval Room and Mrs. Bush gives the Princess a copy of "Millie’s Book as Dictated to Barbara Bush." October 5, 1990 Photo Credit: George Bush Presidential Library and Museum. Public Domain.
AI Caption: Two women seated on a yellow sofa, one with short white hair in a blue outfit, and the other with short blonde hair in a pink suit, looking at a book.
Google Reverse Image Search
If you are in a situation where you cannot identify people or objects, I do recommend Google Reverse Image search or something similar to find other "matching" photos.
Math Equations
Although the ASU tool can theoretically generate MathML, its OCR engine has not produced optimal results. I would still recommend Equatio or MathPix for equation OCR.
Note: I want to add that this functionality is slightly beyond what is intended. Image description can mean many things depending on the context.
Original Equation (MathML)
According to
ASU LaTeX Raw (Spaces Removed)
\sqrt[4]{\sqrt[5]{\sqrt[3]{\sqrt[17]{240}}}}=
ASU MathML
Note: In my experience, more advanced math students, even those using screen readers, usually avoid "spoken" math. It’s easier to process math symbols, LaTeX or MathML/equation editor outputs.
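As a sanity check on the OCR output above: stacked radicals multiply their indices, so the expression is just 240 raised to the power 1/(4 · 5 · 3 · 17) = 1/1020. A quick Python verification:

```python
# Evaluate the nested radical innermost-first: the 17th root of 240,
# then the cube root, fifth root and fourth root of the result.
value = 240 ** (1 / 17)
for index in (3, 5, 4):
    value = value ** (1 / index)

# Stacked roots multiply their indices: 4 * 5 * 3 * 17 = 1020.
direct = 240 ** (1 / 1020)
print(abs(value - direct) < 1e-12)  # True
```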
Art Objects
This was an interesting area. For some fine arts, AI was able to make useful descriptions of color and composition, but often missed details that an art historian would probably comment on.
For example, in a piece of embroidery, AI might be able to identify that there was embroidery, but miss pretty key points like the presence of cross stitch elements or which animals and flowers were being depicted. Both of these are important for identifying artistic genres, periods or locations of art.
Closeups
Adding a closeup of an embroidered piece provides a different perspective for AI. In the closeup, AI did pick up the cross stitch, but had problems identifying the runes as runes – it did much better with the full picture of all the runes. Context does matter a lot.
Closeup of one band of a modern Runic band sampler.
AI: Image Description: The image shows a piece of white fabric with a cross-stitch design. The design features two rows of thick, intertwined red and blue lines at the top and bottom. In between these rows are two sets of letters. The upper set consists of large, blue letters spelled out in an unusual, varied typeface. Below these, there is a series of smaller, red letters in a simpler typeface. The fabric has a visible weave, typical of cross-stitch materials.
As wonderful as the AI tools are, you need to proof the output.
Hallucinations
AI can "hallucinate," meaning it may make an incorrect assumption, and using that assumption, make further wildly incorrect statements.
I asked it to describe a modern Runic embroidered sampler (below) and it identified one motif as a "Viking ship" and mentioned it had four rows of oars. It also said the piece was a bookmark (it’s really six inches wide).
Embroidered abstract animal design misidentified as a ship. Design and photo by Elizabeth J. Pyatt.
In reality, the Viking ship was a modified version of an abstract animal with a head similar to a ship’s head motif, but with actual legs and hind quarters. Not a bad miss in terms of shape, but very inaccurate conceptually.
As IBM notes in the link, hallucinations may be the result of incomplete exposure to training data in a given subject domain. Because academic research may be filling in current knowledge gaps, there may always be a slight lag in what AI can do with academic content.
Missed Details
A description could be very close to correct but miss a crucial detail. In one case of a PowerPoint screenshot with a utility window open, the main screen was described, but not the window on the right. As with other OCR tools, you can upload just a cropped section and redo the scan to receive a transcript. Or you can tweak the prompt.
The AI tool described the slide content, but not the elements on the right. Reloading just the missing element will usually result in a usable description.
AI Description
Image Description:
The image presents a question on whether the flag of Pennsylvania should be redesigned, showing three different flag designs.
To the left is the "Official (1799)" flag, featuring a dark blue background with a detailed crest in the center that includes a bald eagle, horses, and a shield with a sailing ship, plow, and three sheaves of wheat.
In the middle is a modern alternative design titled "Keystone + State," which displays a simplified solid blue field with a central orange keystone shape containing three white stars above and four below.
To the right is another alternative titled "Keystone Stripes," depicting a horizontal tri-color flag with blue, white, and gold stripes, and a centered keystone emblem.
The image source credits are at the bottom with URLs and the names "Jeffcook2756" and "Marmoceet". Images from vexillology.fandom.com (Pennsylvania) Licensed by Creative Commons.
Summary
Image description is one case where AI is coming into its own. But as with automatic speech recognition and other applications, AI output needs some auditing to be 100% accurate.
This session reviews different uses of map images and different strategies to make them accessible including tables, ALT text, long description and tactile/3D graphics.
This post documents the experience of getting a print from the Penn State Maker Commons service.
The good thing about Maker Commons is that you don’t need direct access to a 3D printer, nor do you need to learn how to load plastic media or make sure the nozzle is properly cleaned and aligned – the Maker Commons staff is there to do that for you. However, you will still need to interact with some media software, most of it free, and you will need to invest some time, especially for your first print. And there are some restrictions to keep in mind.
Build a Model in Tinkercad or Wherever
If you’re interested in building a basic model, you may want to get a free Tinkercad account. It’s a cloud service which lets you build a model from scratch or import and edit a model/vector graphic file. Maker Commons has some Tinkercad tutorials on their tutorials page.
Another non-cloud option is SketchUp Make (formerly Google SketchUp). Maker Commons also has tutorials for this service. Or you can find and download a 3D model. Or you can use a graphics program which allows you to export a 3D model.
Edit in MakerBot
No matter how you create your model, you will need to import it into the free MakerBot Desktop editor. This software is designed for the MakerBot printer and will let you resize your model, set print quality and export it for print.
It’s important to follow the guidelines from Maker Commons or your print could be rejected or ruined. An important one is a relatively new 100 gram plastic restriction. Another is a size restriction of 9.9 inches L x 7.8 inches W x 5.9 inches H. Finally, your Quality setting needs to be “Low” or “Standard.”
You also need to make sure the model is centered and lies flat on the print bed. The guidelines show you exactly how to do this.
Submit the Print
Here you will need to submit your model file through the MakerBot Innovation Center with your Penn State email address. This will allow you to choose the model and even select the color of plastic you want. Again, it’s important to follow all the instructions.
Ready to submit? Then click the Submit button on the Maker Commons site. You will receive a message when the print file is submitted to the queue, when the job begins to print and when the print is complete.
Pick up the Print
Once you receive an email that the print is ready to be picked up, head to either the Pattee/Paterno Library at University Park or a location at another campus. You will need to bring your Penn State ID as verification.
If all has gone well, you will receive your print in a clean brown paper envelope. Take off the “raft” and enjoy.
As a leader, I will rely on the expertise of my team and colleagues but provide vision and direction to move us forward. I will strive to maintain calm in the face of chaos and communicate clearly in a sea of noise. Above all, I will be honest with my team and “customers” and treat them with integrity and respect. I will always seek to learn more about leadership needs, especially from my mistakes.