3D Print Test Run

This week, I commissioned a 3D model of a trigonometric curve from the ETS 3D Print shop and was able to do a quick usability test with some volunteers from the TLT accessibility team on what kinds of information could be gleaned from it. As most people would expect, the test was reasonably successful, but I thought I would document the process to give people a sense of what was involved in making the model.

The Curve

For a 3D printer, you need a file (e.g. an STL file) which specifies the dimensions of the object. For this example, I went online and looked for mathematical models and found this 3DPlot function utility for OpenSCAD by dnewman on MakerBot Thingiverse, one of the various 3D model download sites online now.

In this case, the curve I chose was (in MathML):

z(x, y) = cos(x² + y²)
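If you don't have OpenSCAD handy, the same kind of surface-to-STL conversion can be sketched in plain Python: sample z = cos(x² + y²) on a grid and write the triangulated surface out as an ASCII STL by hand. This is a minimal sketch of the general technique, not the 3DPlot utility itself; the function name, grid range and resolution are my own choices, and the result is an open surface (a real print would also need a base and walls, which tools like OpenSCAD can add).

```python
import math

def surface_stl(f, xmin=-3.0, xmax=3.0, ymin=-3.0, ymax=3.0, n=40, name="curve"):
    """Sample z = f(x, y) on an (n+1)x(n+1) grid and return an ASCII STL
    string describing the surface (two triangles per grid cell)."""
    def p(i, j):
        x = xmin + (xmax - xmin) * i / n
        y = ymin + (ymax - ymin) * j / n
        return (x, y, f(x, y))

    def facet(a, b, c):
        # Facet normal from the cross product of two edge vectors.
        ux, uy, uz = (b[k] - a[k] for k in range(3))
        vx, vy, vz = (c[k] - a[k] for k in range(3))
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
        lines = ["  facet normal %f %f %f" % (nx / length, ny / length, nz / length),
                 "    outer loop"]
        for v in (a, b, c):
            lines.append("      vertex %f %f %f" % v)
        lines += ["    endloop", "  endfacet"]
        return "\n".join(lines)

    body = []
    for i in range(n):
        for j in range(n):
            # Split each quad cell into two triangles.
            a, b, c, d = p(i, j), p(i + 1, j), p(i + 1, j + 1), p(i, j + 1)
            body.append(facet(a, b, c))
            body.append(facet(a, c, d))
    return "solid %s\n%s\nendsolid %s\n" % (name, "\n".join(body), name)

# The curve from this post: z(x, y) = cos(x^2 + y^2)
stl_text = surface_stl(lambda x, y: math.cos(x * x + y * y))
```

Writing `stl_text` to a `.stl` file gives something most slicer programs can open, though as noted it would need to be closed into a solid before printing.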

Mathematica and MATLAB

If an instructor needs a curve and has access to either Mathematica or MATLAB, the more recent versions of these programs should be able to export a generated curve to an STL file.

Other Modeling Programs

To create other types of models (e.g. a gear or pencil jar), a 3D modeling program which exports to an STL or other 3D-printer-compatible file is needed. The Lynda.com at Penn State service includes some 3D modeling courses.

Making The Print

I sent the STL file I found to our 3D printing gurus in ETS, and after they verified the file, they were able to print a 2 inch x 2 inch version of the model. The resolution was quite good, but there were some caveats.

  1. Beware of any model with a sharp point. If a blind person is handling it, you don’t want any punctured fingers. In this case, I first covered the point with tape, but discovered you could “sand” down the point by pressing it on a desktop. This removed some of the extruded plastic and rounded out the tip without too much distortion.
  2. Depending on your printer, there may be ridges created by the extruded plastic. Your model needs to account for any resolution issues. Expanding the scale could be one option although it will require a longer print time. Another is to select a finer resolution/nozzle – which will also result in a longer printing time.


A question about 3D printing is whether the expenditure is justified by a blind (or sighted) person being able to handle a 3D model. In this case, I would say yes. The model was usable enough that the testers were able to quickly determine the shape (circular ridges with a peak in the center). In fact, one person immediately proclaimed “It’s a bullseye!” For the record, the original model was developed precisely to explain math curves to a blind student.

That’s not to say that you might not need some tweaks. One tester asked if there should be labeled axes (i.e. x and y axes). This could be done easily with either a notch or some Braille labels (embossed in the model or added separately). If a math problem were based on the graph, it would be necessary to provide the same key data (such as a “y-intercept” point or values at key coordinates) that a sighted student would have.

You could also begin thinking about how many models would be needed. Would one basic model of a curve be sufficient to describe variations that would occur due to other parameters? At which point would a new curve be needed? And how does 2D tactile printing fit in to the discussion?

It’s an interesting topic and a great justification for experimenting with 3D printing.

Posted in Accessibility

Why Can’t Academic Publishing Be More like iTunes?

Kindle works fairly well for novels and light non-fiction, but I have always wondered why academic publishing can’t be more like iTunes or even like journals in the Libraries. Why can’t we allow users to purchase chapters? And why do prices of eBooks remain high if we’re not killing trees and are saving on printing costs?

Holy High Prices!

The Kindle/iBooks price for an academic book is still ridiculously high. For example, the Kindle edition of the Oxford Handbook of Chinese Linguistics is a whopping $139.99. In Amazon’s defense, that IS a discount from the hardcover price of £115 (ca. $175), which happens to translate to $174.99 in the iBooks store.

BUT…do I really want to pay $175 for essentially a PDF (or ePub file) that just lives in a tablet device? Not really. For $175 (or more like $65), I actually want a real book that I can stick on a shelf.

But why $175 in the first place? No linguistics book I have ever seen has had color graphics. The main “cost” would be the special fonts, but costs have dropped for those. I realize that the expectation is that the market is limited to Libraries and Chinese linguistics instructors, so prices traditionally were high to cover production costs. But does it have to be that way? I think people interested in linguistics or people of Chinese heritage would be a potential market, if prices were more in line with a book like The Early Chinese Empires: Qin and Han (History of Imperial China) ($9.99 on Kindle). These days, that’s the price of a paperback.

Note: If we compare the book market to the music market, we can consider album prices. No one would think of buying an indie or world music album for over $100 when best-selling albums run between $8 and $15 on iTunes. Economics can be strange.

Why Can’t we Buy Chapters?

If we can’t lower academic book prices due to mysterious economic forces, can we at least buy chapters? Unlike novels, many academic books can be easily broken into chunks. For instance, the $175 Handbook of Chinese Linguistics is actually composed of fifty-five (55) chapters on topics including the Sinitic language family, topic prominence in Chinese syntax, Middle Chinese phonology (with a separate chapter on Old Chinese), Chinese-Japanese language contact, tone perception and even Taiwan Sign Language.

Buying the whole book would be fantastic, but there’s a chance a researcher really only needs 1-2 chapters to complete a research project. Wouldn’t it be great if we could purchase JUST THAT CHAPTER? Dividing $175 by 55 chapters comes to about $3.19 per chapter. What a deal! It would actually be cheaper and less time consuming to purchase a $3.19 chapter than to (ahem) photocopy the chapter in the library…assuming your library has the book.
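To make the arithmetic concrete, here is the per-chapter figure as a quick sketch. The round-up-to-the-next-cent step is my own assumption about how a retail price would be set; the $175 and 55 chapters come from the post.

```python
import math

book_price = 175.00  # approximate full price of the handbook
chapters = 55

# Split the price evenly across chapters, rounding UP to the next cent
# the way retail prices usually are.
per_chapter = math.ceil(book_price / chapters * 100) / 100
print(per_chapter)  # → 3.19
```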

Ironically, this is how journal subscriptions in the libraries work. Anyone in the state of Pennsylvania can access many journals through the Penn State Libraries website, and once an article is found, it can be downloaded. However, chapters in “handbooks” can cover material in a way articles don’t. I would love to give students a good reading on the history of Chinese languages in digital format, but it’s not really available through the Libraries. Right now, the best bet is Wikipedia.

Can we Print What We Buy?

In iTunes, we can play songs we buy on multiple devices, including burning audio CDs for the car or playing them on CD players at home. But you can’t easily print a Kindle book unless you have a wireless printer and a Kindle device (forget it if you only have Kindle for iPad). Reading material solely on a device is OK for novels, but hard for academic works where you need to highlight text, add notes, tab pages with Post-its or just quickly page through to look up data.

Even working with recipes on a Kindle can be a challenge (do I really want my iPad in the kitchen while I cook?). The Kindle has notes tools, but they are not as efficient as pen and paper. I also want to be able to quickly access a library and not worry about shuffling memory on a device (which is still smaller than a desktop’s or whatever you can get on Dropbox or an academic server). It’s for these reasons that students still buy print textbooks, even if the electronic version might be a little cheaper.

If we can download music with DRM information but play it on multiple devices, why can’t we do the same thing with PDF/ePub files? Are publishers really that worried about black-market PDFs on Chinese tones? If they are, I think offering separate chapters for $3.19 would help, and it would generate more income for the publishers instead of for copy shops.

A Democracy Issue

Academics are often concerned about the lack of basic knowledge in their individual areas, and traditionally the K-12 system has been called to account. But I think the pricing of good academic books has also been an issue. Adults, more than children and teens, can begin to understand the importance of self-study, but sometimes the resources are lacking.

Suppose I decide I AM interested in Chinese tones. If I’m an academic, I could get some basic information from the library bookshelves, but not all libraries have these resources. The last time I checked at the local library, the pickings on the phonology of tonal languages were quite slim.

It would be great if I could order a book on Chinese tones or even a chapter at a reasonable price, or if the library could provide digital access for me (since print book shelf space is scarce).

Digital Books in the Library

The idea of the library housing digital books is starting to happen. I was looking into William Labov’s Atlas of North American English: Phonetics, Phonology and Sound Change. It’s not really in stock at Amazon, but if you want it used, you can expect to pay between $700-$1100. Fortunately, a digital version is available at the Penn State University Libraries. You can even select and print individual chapters for review. What a deal!

Posted in Library Services, Teaching Notes

Why Don’t Students Like School? (Book Review)

About the Book

Willingham’s goal for this book is to show how findings from cognitive science can explain how learning works. Topics include memory, a review of learning style theory (partly to debunk it), and the importance of understanding expert vs. novice thinking.

ISBN and Chapter Info

Willingham, Daniel T. (2010) Why Don’t Students Like School: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom. San Francisco: Jossey-Bass
ISBN 10: 047059196X

The table of contents of the book is:

  • Chapter 1 Why Don’t Students Like School?
  • Chapter 2 How Can I Teach Students the Skills They Need When Standardized Tests Require Only Facts?
  • Chapter 3 Why Do Students Remember Everything That’s on Television and Forget Everything I Say?
  • Chapter 4 Why Is It So Hard for Students to Understand Abstract Ideas?
  • Chapter 5 Is Drilling Worth It?
  • Chapter 6 What’s the Secret to Getting Students to Think Like Real Scientists, Mathematicians, and Historians?
  • Chapter 7 How Should I Adjust My Teaching for Different Types of Learners?
  • Chapter 8 How Can I Help Slow Learners?
  • Chapter 9 What About My Mind?

My Takeaways

In many ways Willingham’s outlook on learning is very traditional. For instance, he actually recommends a return to some memorization and drills for novice learners, as well as a focus on content alongside skills. He also definitively focuses on the individual mind rather than a mind in a social or cultural setting. I have to confess that this point of view agrees with both my sensibilities as a linguist and a lot of what I observe in the classroom. But if I were a strict constructivist, I might be a little frustrated. And yet, I believe that some insights about learning within a social context can be incorporated into Willingham’s framework.

Memory and Learning

A major focus of this book is the role of memory in learning. At the most basic level, learning represents a change in a person’s long term memory. But how does this change come to pass? According to Willingham, one key aspect is that some sort of perceptual signal (written or spoken text, a visual, a motion, smell or non-text audio signal) must be analyzed within working memory and transformed into a part of long term memory. Part of this transformation usually involves establishing a connection between previously existing memories and often between elements in the new memory cluster.

Willingham notes that researchers have long known that there are limitations in short-term memory (hence we speak of seven-minute video attention spans or the 7±2 memory limit). However, there are ways to maximize short-term memory, including chunking and sufficient practice to automate enough processes to free up additional memory. I would also argue that connecting elements into a narrative is another strategy that instructors use.

I find the discussion of memory particularly valuable because Willingham provides a concrete definition of what learning entails in the brain, and therefore of the cognitive task that must be accomplished in instruction. I have always found it frustrating that most pedagogical articles do not define learning at this level, instead treating the mechanics of learning as some sort of "low-level" almost magical process. Although not every learning theory need focus on memory, I do think it is important to remember that it is a factor of learning that should be investigated.

Expert vs. Novice Minds

A point that I felt was very important was that novice minds differ greatly from expert minds. In particular, experts are better able to see abstract patterns because they have more experience processing a certain type of data. In addition, experts have in many cases automated lower level tasks, leaving more cognitive capacity open to higher order cognitive tasks.

One approach to helping students achieve expert mastery has been to introduce expert methods of learning new content. However, Willingham argues that novice minds may not actually be ready for expert methods in the beginning and actually need alternate forms of presentation.

A particularly interesting example can be seen in an analysis of reading. Expert readers familiar with a large vocabulary can quickly scan a page of text and process entire words. This has led to the insight that expert reading proficiency requires whole word recognition (vs. sounding words out, aka "phonics"). However, novice readers may actually NEED to go through a "sounding out" phase before they gain enough proficiency to read by the whole word method. An expert English reader may re-encounter this when learning a new language and becoming a novice reader again. Phonics becomes important again, particularly if a new script is being learned.

I think Willingham’s point is that scaffolding is more important than modern instructional theory may realize. While everyone agrees that using higher order analytic skills is important, Willingham argues that they cannot be mastered without mastering the lower-level skills first. This pattern can be found in many other areas such as motor development and language acquisition. When children learn their first language, they typically go through a series of stages in sequence before becoming fully fluent native speakers. Similarly, traditional mathematical instruction is sequenced so that students learn addition and subtraction before multiplication and division, algebra before calculus, and so forth.

Practice and Motivation to Practice

One tenet that Willingham emphasizes repeatedly is the importance of practice in mastering a skill, with memorization being an important component. For instance, some argue that a child may not need to memorize a multiplication table, only understand how to apply it. Willingham would argue that multiplication is so integral to other mathematical operations that NOT memorizing the multiplication table slows down a student attempting a more complex operation involving multiplication (e.g. many algebraic equations).

The effort used to look up or manually calculate a multiplication could be used for additional cognitive processing IF the times table has already been memorized sufficiently. Memorizing the multiplication table also allows an individual to spot a pattern in numeric data that might not otherwise be obvious.

On the other hand, Willingham points out that "curiosity is fragile" and practice can be boring. The eternal challenge for instructors has been to persuade reluctant learners to practice a skill they may not see a need for at the moment. Game mechanics or "real world" problems may provide some answers. However, not even Willingham has all the answers here.

Getting from Novice to Expert?

A question that has intrigued me and others is how we help learners transition from novice mental models to expert mental models. I do concur with Willingham that introducing beginning students to expert techniques is often counter-productive because they don’t have the background to utilize them. I also concur that practice (or a bit of "drill and kill") plus required memorization can help students free up the additional cognitive capacity needed for higher order analysis. But beyond that, Willingham does not connect the dots well.

At a simplistic level, it is possible to teach learners a series of rules used to guide best practice. For instance, educated English speakers learn about standardized spelling, punctuation and academic vocabulary. Unfortunately, learning these rules does not guarantee that a student will become an effective writer, much to the dismay of those grading written assignments. Writers also need to learn about audience, switching between genres and even "voice."

Practice and Acquisition

One insight is that learning from experience probably does matter, but not necessarily identical experiences. The book I previously reviewed, InsideOut: Strategies for Teaching Writing, discusses precisely the need for practice, but also the need to give varied writing assignments. The authors also emphasize the need for writers to critique other authors (including their peers). This is related to the idea of using case studies to show how different principles interact in real-life situations. Critiques also give learners a chance to troubleshoot, which is another important skill in higher order thinking.

Willingham also notes that it’s important to provide opportunities to practice old skills after they are first taught. For instance, a math class might teach fractions and provide many exercises in which fraction skills are practiced. However, it is important to remind students that fraction skills might be needed in relation with more advanced problems and perhaps provide additional practice in those contexts.

I will say that in some cases, some skills can "emerge" or be "acquired" just from exposing learners to data. For instance, vocabulary can be learned with minimal instruction just by observing how it’s used or with only a short definition. This is how most slang or colloquialisms are learned, but experienced readers of academic language can learn new academic terms this way as well. I’ve also seen students learn the distinction between different levels of phonetic description just by guiding them through exercises involving dialectal differences in English. And I have learned to distinguish between different comic book artists just by reading lots of comics! This type of learning requires the least intervention, but probably the most cognitive architecture to accomplish.


Another technique that Willingham does not discuss much, but I think is important, is the narrative, that is, packaging an insight in the middle of a story. An example from childhood is Aesop’s fables, in which a story (e.g. a hare starts out quickly in a race but soon wears himself out and loses to the tortoise, who maintains a slow but steady pace) is used to demonstrate a concept ("pace yourself"). But narratives can also be used to demonstrate abstract concepts in action or to tie seemingly disconnected facts together. Narratives can provide a novice learner with concrete examples of an abstract concept needed for expert mastery.

For instance, memorizing the names of Henry VIII’s six wives is boring. Mentioning that the British Protestant reformation happened in his era adds confusion. Watching him woo and discard successive spouses in a televised drama (or reading his love letters) is much more memorable. We also get to SEE Henry VIII decide to reject the Pope and embrace Protestant theology in order to facilitate one of his divorces. We also see how the divorces affect the religious leanings of his children, the future rulers of England, which in turn affects the political life of the country. The narrative truly does bring the past to life.

This lesson does not only apply to history, but to any discipline. When I was working on a thermodynamics course, much of the content was learning and applying a series of equations. However, the instructor had some experience working in power plants, and she explained that steam above a certain temperature becomes an invisible gas and that a leak in the pipes could kill plant workers. Thanks to that story, I now know the difference between visible low-temperature steam as a vapor and invisible high-temperature steam as a gas.

In addition to traditional stories, the right graphic or music can grab students’ attention in much the same way. As Willingham notes, this is why television is as popular as it is. Instructors may not be able to make all instruction television or even game friendly, but it’s something to always consider.

Learning Style Debunked?

Among the more important chapters to me are the ones evaluating "multiple intelligences" and "learning styles". Both concepts relate to the idea that students come with individual talents and "quirks". The question has been how to cater to these individual differences, and Willingham argues that the differences, particularly for learning styles, may not be as profound as first thought. Although he argues that there are differences, he feels that cultural factors may override "genetic" factors. For instance, he proposes that tall people may become better basketball players because they are encouraged to play the sport, not because they are tall per se.

I would agree that while it is impractical and ultimately counterproductive to create entire lessons geared to different styles, it is worthwhile to consider how to assist students in consuming content. An extreme example is students with cognitive issues such as dyslexia, for whom reading text may be difficult. These students benefit immensely from consuming content in alternative formats (e.g. in audio format from a screen reader). Similarly, a "poet" in a biology course may learn more by creating a rhyme of key topics, while a visual person in a literature course could benefit from diagramming key events in a novel.

I also wonder if Willingham underestimates "genetics" to some extent. An interesting case I always think of is the assumption that people with a low academic vocabulary are automatically deficient in verbal ability. In fact, a person who can compose either hip hop or country music lyrics is extremely proficient in using words…just not words that are found on the SAT. While I agree that formal education, including recognizing SAT-type words, benefits everyone, it is important to understand how a talent may manifest itself in different sociological environments.

Where’s the Social Connection?

One aspect of modern pedagogical theory that Willingham mostly ignores is the "social aspect" of learning. To be honest, I do believe that considering brains individually is important to understanding learning and that placing students in collaborative settings will not always improve learning. After all, "teams don’t learn, individuals do" (Kozlowski and DeShon 2001, Developing Adaptive Teams).

Still one can consider how social interaction with peers can help a student learn within this framework.

  • A person who is not an expert, but a little ahead of someone else may have insights on overcoming novice preconceptions
  • Teaching itself facilitates learning since the instructor is forced to create a "narrative"
  • Larger scale projects may provide opportunities for learning not available in smaller projects
  • If nothing else, team projects help students practice team skills

To Conclude

I admit to appreciating this book for providing evidence that some traditional practices such as memorization, drilling and learning in context can be shown to be relevant in a modern pedagogical framework. But I think the more important lesson is understanding how profoundly different a novice mental model is from an expert mental model. Helping instructors understand their novice learners is a start to improving instruction.

Posted in Cognition/Linguistics, Teaching

InsideOut: Strategies for Teaching Writing (Book Review)

ISBN and Chapter Info

Kirby, Dawn Latta and David Crovitz (2013) Inside Out, Fourth Edition: Strategies for Teaching Writing. Portsmouth, NH: Heinemann
ISBN-10: 0325041954


The table of contents of the book is as follows:

  • Chapter 1 Finding a Footing for Teaching Writing Well
  • Chapter 2 Fluency and the Individual Writer
  • Chapter 3 Establishing the Classroom Environment and Building Community
  • Chapter 4 Writing Pathways
  • Chapter 5 Identifying Good Writing and Emphasizing Voice
  • Chapter 6 Authentic Writing
  • Chapter 7 Crafting Essays
  • Chapter 8 Responding to Student’s Writing
  • Chapter 9 Revising Writing
  • Chapter 10 Publishing Writing
  • Chapter 11 Grading, Evaluating, and Testing Writing
  • Chapter 12 Writing About Literature and Other Texts
  • Chapter 13 Engaging with Nontraditional Texts
  • Chapter 14 Mediating Literate Lives: An Argument for Authentic Education
  • Chapter 15 Conversations with Teachers

About the Book

Inside Out: Strategies for Teaching Writing is a resource book for writing instructors, particularly those in the K-12 grades. This book is a fourth edition of Inside Out by Dan Kirby and Tom Liner first published in 1981, but this edition has been substantially revised to include modern issues such as the diverse classroom, discussions of standardized writing tests and new social media genres. The current authors, Latta & Crovitz, are former high school writing instructors and their expertise is clear from the anecdotes and samples they share.

Note: For some reason, the Amazon.com description indicates that it is for 10-17 year olds (i.e. students), but this is clearly a help guide for K-12 school writing instructors.

Why This Book

I chose this book because of my role as a Turnitin service administrator and instructional design consultant. Turnitin is geared for assessing writing assignments, so I thought a book about issues in the writing classroom would be beneficial.


The concept of the book is to help teachers create an environment in which students feel comfortable and are encouraged to develop an authentic voice as a writer. In fact, I would skip ahead to chapter five, in which Latta & Crovitz define good writing as “full of detail, strong verbs, flare [sic], and an identifiable voice in multiple contexts” (p 97)[1], as this summarizes the aspirations for their students.

[1] I would guess “full of flare” (196,000 on Google) would be “full of flair” (1,430,000 results on Google) in Standard English. Just saying….

The first parts of the book focus on the classroom environment, including suggestions for background music and for rearranging space for peer editing/collaboration. Later chapters suggest different types of journaling and writing exercises which give students practice in different genres but are more appealing than the average writing assignment. A typical freewriting assignment is one in which students pick four adjectives describing how they feel at the moment, then write a paragraph to explain them.

The remaining parts of the book explore different writing genres and practices and provide examples of exercises and practical wisdom on issues to consider. For example, the book suggests starting a school literary journal, but only if students are interested in contributing.

What Works

First, the authors practice what they preach in that they have developed a well-written, conversational tone which passes on their insights in an efficient yet elegant fashion. They also have some sage advice on how to assess students in a way that is helpful yet not overly discouraging. For instance, they recommend NOT correcting journal writing but rather focusing on questions a reader might have (e.g. “Why did that happen?”) or on structure. They also emphasize the need to point out when a student is writing well and why.

Latta & Crovitz also touch on some modern issues such as new writing genres and especially their concerns about modern standardized testing of writing skills. They lament that many K-12 writing assessment tests are so focused on the “five point essay” that writing instructors do not feel free to let students explore other writing genres such as fiction or poetry. While the book does include tips for teaching to this kind of essay, it also encourages students to find topics that attract them so that they feel more emotional investment in their writing. The authors also argue that focusing on fluency across multiple genres will allow students to enjoy the process more and gain fluency in different contexts.


One issue of particular interest to users of Turnitin is combatting plagiarism in the classroom. A quote that exemplifies their philosophy is “When writing is authentic, there’s little, if anything, to plagiarize” (p. 132). In other words, developing assignments in which students feel a connection with the topic or content will motivate many students to focus on the assignment. The book also discusses the need to teach citation and paraphrasing skills, comparing it to sampling in hip-hop. Other recommendations include helping students with drafts and timelines and adjusting assignments to be less generic, thus requiring efforts no paper mill can easily duplicate.

Suggested Improvements

Genres and Assessing the Testing Issue

A major theme of this book is the challenges that standardized testing of writing brings to the modern writing instructor. I do agree that focusing too much on one format (their “five point essay” (FPE)) will stifle any creative writer. In fact, I recall our yearbook faculty advisor noting that many students are so trained in this genre that they cannot reproduce the breezy conversational style needed for yearbook copy. Latta & Crovitz also ironically note that the testing may not be effective in producing the skills the “working place” needs.

I will assume that testing is flawed and do agree that their focus on voice and fluency is important. What’s ironic, though, is that one reason teaching the five point essay fails is that it is a genre not actually found in real life.

Based on my own experience with workplace writing, these are the genres that are often seen:

  • Instructions or Documentation
  • Research and Progress Reports
  • Marketing or Journalism
  • Academic Articles (rare)

Indeed these are often formulaic because they are often replicating a “corporate” voice or need to be in a format somewhat familiar to users. For instance, straying too far from a standard instruction format could be disorienting. As the authors indicate, even these dry genres can be given life when applied to topics of interest to students. It is worth considering if a multi-genre test or (ahem) portfolio would be a more appropriate assessment. Badges related to different genres are another interesting possibility. Based on my experiences with editing documentation, I think this is a genre that could use more instructional focus.

Linguistically Diverse Classroom

When reading this book and other books on writing, I regret that there is not a better bridge between writing instructors and sociolinguists. Some of the issues concerning teaching students of different backgrounds might be clarified through a sociolinguistic lens.

For sociolinguists, a linguistically diverse classroom would not only include non-native English speakers but also speakers of regional and working class dialects such as Appalachian English or African American Vernacular English. Both audiences need to learn more features of Standard English in order to be considered articulate writers by other educated readers.[2]

[2] In fact, most middle-class English speakers speak with “colloquial” forms that diverge from written English.

An interesting side point though is that a person can be a good or even an excellent writer in their dialect or language but less skilled in Standard English. Examples would include AAVE speakers who can compose hip hop lyrics or speakers who can employ “down-home” rhetoric. Several examples that Latta & Crovitz cite show that part of having an authentic voice is conveying your cultural linguistic conventions, including non-standard grammar (p. 172) and even swear words (p. 175)!

Linguists have been advocating that teachers consider forms diverging from Standard English to be different, but not necessarily inferior or grammatically illogical. A writing instructor would argue that a good writer can master several forms, and I concur. What’s sometimes missing is an appreciation of non-standard English forms as being artistically valid. Many African American politicians find it crucial to master BOTH Standard English and AAVE forms. A politician who only knows Standard English may not be considered “authentically” black. This is a tension that few middle-class Anglo students or instructors can fully appreciate.

Another sociolinguistic consideration for writing is that some conventions of writing exist because the audience is not participating in a face-to-face conversation. Many linguists assume that human language is optimized for face-to-face speech and that participants can rely on gestures, facial expressions and vague pronominal references to quickly convey meaning. A listener also has opportunities to quickly correct misunderstandings. There are fewer options for that in a written text, so conventions such as clarifying references or using clear punctuation become necessary. This is a somewhat theoretical point, but it could help instructors and students place their writing in the context of a communicative goal.


If you are a writing instructor, this book will give you some good ideas about injecting some “life” into a classroom and making writing instruction enjoyable for both students and teachers. Its true strength is in helping students become authentic writers of fiction, poetry or literary essays. Having said that, there are some interesting writing assignment ideas that could be introduced into non-fiction writing, communications and even some social science courses.

However, there are some issues I wish were explored more, such as the nature of academic and corporate writing, how standardized “testing” (assessment) of student writing could be rethought, and how to understand a classroom where not all students are white and middle class.

Posted in Uncategorized

Descriptive vs Prescriptive Standards

Descriptive and Prescriptive Grammars

A useful distinction linguists make in describing grammars is “prescriptive” vs. “descriptive” grammar. Simply put, prescriptive grammar is what your teacher tells you to do, while descriptive grammar is what actually happens. Sometimes they overlap, but sometimes they differ, and generally speaking linguists will say that the descriptive grammar is the one that’s “correct”.

A classic example of a prescriptive rule is “do not split an infinitive,” meaning that the classic Star Trek phrase “to boldly go” is considered “ungrammatical” by some teachers. The reality is that it’s descriptively fine, which is why it made it to the airwaves. Indeed, I would argue that the alternative “to go boldly” sounds clunky, and “boldly to go” sounds even worse.

As it turns out, “to boldly go” is perfectly consistent with spoken English grammar; split infinitives have been around since Middle English. It was only later grammarians who declared the construction “wrong,” even though it has remained in active use in spoken English.

Descriptive and Prescriptive Standards

Lately, I’ve been finding the same thing happening in some standards discussions, and I think the distinction is useful there as well. Most Web and accessibility standards are important to maintain because there are real-world consequences for violating them, but there are a few I would say are more “prescriptive.” They were invented with the best of intentions, but for whatever reason they are too confusing for developers or are not actually filling the need we thought existed. In addition, the real-world consequences for using one or the other may be minimal.


One of these, in my opinion, is the infamous STRONG vs B debate (paralleled by the EM vs I debate). The theory is that B/I are visual formatting instructions only, while STRONG/EM are levels of emphasis which could be indicated by bolding/italics OR by a change in pronunciation in a screen reader.

The reality, though, is that almost all real-world implementations treat B/STRONG and I/EM as tag synonyms. The most egregious cases are WYSIWYG editors which do “correctly” use STRONG and EM tags, but whose icons are actually B and I. In other words, the non-savvy editor is just adding visual formatting according to whatever editorial rules are in place but never really distinguishing use cases. That editor is not really thinking semantically at all… because it’s a bit confusing to most editors. (BTW – this is not my original observation, but I really can’t find the person who pointed this out on a blog. It’s an excellent point though.)

In another twist, screen readers also treat B/STRONG and I/EM as synonyms. In almost all modern screen readers, these tags are ignored by default – that is, the words are pronounced with no distinction from regular text. There are modes in which formatting changes may be identified… but then either tag is picked up.

From a descriptive standpoint, there is no distinction between the two tag sets, and I am not sure one is needed by either the sighted or unsighted community. Bolding/italics is a benefit for the sighted, but may be a distraction on a screen reader (just like color coding). In any case there is no distinction, and there are pluses and minuses for each tag set. STRONG/EM is a good choice if you are translating texts between languages where formatting conventions can vary, but you have to admit that B/I is a lot leaner code-wise.
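As a quick illustration (a hypothetical snippet, not from any page mentioned in this post), the two pairs below are rendered identically by mainstream browsers and are announced identically by most screen readers in their default modes:

```html
<!-- Both lines render as bold text; screen readers treat them alike by default -->
<p><b>Deadline:</b> Friday at 5pm.</p>
<p><strong>Deadline:</strong> Friday at 5pm.</p>

<!-- Both lines render as italic text -->
<p>The word <i>ibid</i> is an abbreviation.</p>
<p>The word <em>ibid</em> is an abbreviation.</p>
```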

The truth is that I don’t have a recommendation, not even consistency, because I am in a system which doesn’t really enforce it. I would, however, like this issue to be tabled until an actual real-world scenario is found.

Headings – Descriptively Important

I would contrast the above situation with the “heading” (H tag) situation, in which standards experts note that section headings in a digital document should be specifically marked or tagged as headings (vs. merely changing font formatting). This IS important for screen reader users, who rely on the ability of their screen reader to jump between tagged headings. Screen readers recognize tags, but not formatting changes, as valid headings.
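To illustrate the difference (a hypothetical snippet, not from an actual course page): the first heading below is navigable by screen reader users; the second merely looks like a heading:

```html
<!-- A real tagged heading: screen readers list it and can jump to it -->
<h2>Week 3: Trigonometric Functions</h2>

<!-- A "fake" heading made with formatting only: invisible to heading navigation -->
<p style="font-size: 1.5em; font-weight: bold;">Week 3: Trigonometric Functions</p>
```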

Other Overly Prescriptive Standards?

I don’t have an exhaustive list, but I can detect the symptoms. If you are having a hard time explaining the importance of a standard or finding any real-world consequences, you may have a standard which is prescriptively valid but descriptively irrelevant, at least for everyday use.

One that comes to mind is ACRONYM (NASA) vs. ABBREVIATION (Dr.). Aren’t these synonyms? Well, yes and no. Both acronyms and abbreviations are shortened written forms of longer phrases or words, but if the shortened form can be pronounced as a new word (e.g. NASA as /næsa/, not “N-A-S-A,” for “National Aeronautics and Space Administration”), THEN it’s an acronym.

To be honest though, I’ve never heard phonologists (linguists who work with sound), much less everyday citizens, make a systematic distinction, and usually it’s not that important unless you are doing phonetics. While you do want to make sure you pronounce any abbreviated form correctly, knowing which is which is fairly trivial in most cases.

HTML gurus will know there are two tags – ABBR and ACRONYM. Unsurprisingly though, there are wide variations in how they are treated by different browsers, with Internet Explorer only supporting ACRONYM at one point. Ironically, only ABBR is in the HTML5 spec.
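Since only ABBR survives in HTML5, one safe approach is to use ABBR for both cases, with the expansion in the title attribute – a minimal sketch:

```html
<!-- ABBR works for both true acronyms and ordinary abbreviations in HTML5 -->
<abbr title="National Aeronautics and Space Administration">NASA</abbr>
<abbr title="Doctor">Dr.</abbr>
```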

Posted in Accessibility

Accessibility Workflow in Higher Ed vs Federal Agencies

We’ve been trying out a few products to accessify documents, particularly PDF documents. I have to admit the products work and the vendors are very supportive, but I have been struck by how vendors assume a particular model for an accessibility workflow – one that works well for governmental agencies, but probably NOT so well for higher education.

What Vendors Assume

Governmental agencies produce many PDFs, including the kind that are educational, and vendors assume the following steps in the production:

  1. Create initial document in Word/PowerPoint/Indesign
  2. Trained document specialist adds tags to PDF
  3. Trained accessibility expert adjusts and tests PDF depending on source of document (8+ hrs in training)
  4. Document posted online after an approval process

What Happens at Penn State

For what it’s worth, the process above can work well… but it is probably not what happens in a lot of Penn State units. I rather think it’s like one of the scenarios below:

  1. Instructor teaches class (or is taking a class if he or she is a TA)
  2. Instructor creates/revises class materials the night before class
  3. Instructor prints to PDF just before class, maybe with some tags

Or maybe this happens:

  1. Staff person receives report from manager
  2. Staff person told to post as a PDF by the end of the day
  3. Staff person prints to PDF, maybe with some tags

The pattern is that, in many cases, document preparation is very fast paced – much faster than the ideal workflow for accessible documents would allow. Documents have also been prepared and posted by individuals with wide variation in technical know-how and knowledge of standards.

We can move towards a more structured version of PDF document production, but that requires a major cultural shift – just ask anyone involved in developing an online course.

In the meantime, I would ask vendors to SERIOUSLY think about product usability. The day when most Penn State document authors will sit still for an 8-hour training session on the complexities of document accessibility is still in the very distant future.

We need solutions which produce accessible documents with a minimum number of extra steps, none of which are buried in the “Advanced” features. Dreamweaver, Microsoft and ANGEL have been taking that approach, and it makes training much faster (90-120 min usually) and implementation much easier.

Posted in Accessibility

Random Word Accessibility Tips

There are lots of standard strategies for making Word files more accessible, but even so I learned some new tricks this week.

Tip 1: Right Click to Modify Heading Styles

Using heading styles in Word such as Heading 1 (document title), Heading 2 (major section titles), Heading 3 (subsections) and so forth is a very important technique for making documents accessible. But it’s also rare that the default heading styles Word gives you are the ones you want.

The trick here is to change one piece of text to the format you like, then update the style definition to match. Here are some quick and dirty instructions that work in recent versions of Word for Mac and PC.

  1. Type sample text and put into a heading style (e.g. Heading 2)
  2. Use formatting tools to adjust appearance
  3. Move the cursor to the list of Styles in the Ribbon
  4. Right click on style name (e.g. Heading 2) in the ribbon.
  5. Select option for updating the style to match selection. Your formatting is changed.

If you are very ambitious, you can learn and/or assign a keyboard command for key styles, including headings. In my Mac Office 2011, Command+2 in Word is the Heading 2 style and Command+N is Normal, and I can change both by going to the Tools menu, selecting Customize Keyboard and saving to my Normal template. It’s saved many seconds for me.

Tip 2: Learn Advanced Formatting WITHOUT Text Boxes

Sadly, once you insert a text box into Word or PowerPoint, it places a protective force field which a screen reader (e.g. JAWS) cannot penetrate. This is a problem because text boxes have been a go-to tool for adding colored boxes, special 3D text effects and drop caps.

Fortunately, the newer versions of Word let you do more formatting without resorting to a text box. A lot of 3D effects are available in the Advanced character formatting options, not to mention text outlines, shadows, and gradients.

Similarly, the paragraph formatting tools allow you to add borders, background colors and padding above and below. If you use borders, you may need to adjust the margins for each paragraph to the appropriate width, but it can be done without sacrificing accessibility. You can learn how by searching Google, and Penn Staters can find Word tutorials in Lynda.com.

Tip 3: Don’t Merge Table Cells

Tables can be a chore to deal with, but are made infinitely worse when you begin merging cells. A screen reader assumes that cells should be read left to right, top to bottom, but when cells are merged, it can get lost. I had one case where a merge caused the cursor (and the screen reader) to mysteriously jump to the right column, then back to the left. Our tester was definitely confused.

Reformatting the table so that the cells were larger, but not merged made the reading more coherent.

Tip 4: Reveal Mystery Table Layout

Speaking of tables, I was having problems troubleshooting why the cursor was reading items in a particular order because the table had most of its borders hidden. To reveal the structure, I simply went to the table formatting menu and made sure borders were set to All, revealing borders around all cells. That’s when I found how cells had been merged.

Posted in Accessibility

Pack and Play Brownbag on Social Network Analysis

ETS is happy to announce a "Pack and Play" brownbag session on Social Network Analysis in which Elizabeth Pyatt will introduce general concepts of social network analysis.

About Pack and Play

"Pack and Play" brownbags are a new series of brownbag events designed to explore different topics and facilitate creativity/problem solving. They are currently being administered by Kate Miffitt (kem32@psu.edu).

About Social Network Analysis

Social network analysis (SNA) is the study of social connections between individuals and how these connections contribute to the overall community structure. This session will introduce concepts of social network analysis such as centrality, outliers and brokers, and applications of SNA in fields such as sociology, politics, linguistics and epidemiology. The session will include a brainstorming discussion of how SNA can be incorporated into educational technology, particularly analytics.


Posted in Cognition/Linguistics

Clearing the Clouds of Plagiarism 2013

Clearing Plagiarism Clouds SP 2013 – Now with better citation examples.

Posted in Uncategorized

Quick and Dirty Accessible Self-Check Quiz with Just CSS and Tabindex

Something instructional designers like to do is build self-check quizzes into online tutorials. It’s a quick way to help learners determine if they are “getting” it or not.

Unfortunately, interactive elements have been notorious for presenting problems to different audiences. The issue is generally making sure a blind user can access the answer once it’s revealed. Fortunately, advances in CSS and browser technology allow one to create better self-checks.

In fact, I’ve updated one self-check in a database tutorial so that the answer is disguised in a box where the text and background colors match – until a mouse or the Tab key hits it to reveal the answer.

Hiding and Revealing Answer

The answers are visually hidden and revealed by manipulating CSS colors. In the “hidden” state, the text and background are the same color; in the revealed state, the colors are different.

To enable hover states, the entire answer was placed in a link tag. The crucial parts of the style are display:block (to make the link act like a paragraph box), text-decoration:none, and matching values for the color and background-color properties. Add padding and margins as needed.

View the CSS

display: block;
background-color: #369;
color: #369;
text-decoration: none;
padding: 3px;
border: 1px solid #369;

Note that this doesn’t necessarily “hide” the text from blind users on a screen reader, but since text is presented in linear order, that is not as serious an issue as it could be, IMHO. I would, however, ensure that 1) the question is an H tag and 2) the answer text in the link begins with “Answer” so the screen reader user knows what follows.

To reveal the answer, it’s important to create appropriate CSS for BOTH the a:hover (mouseover) and a:focus (keyboard focus) states. Here is the CSS below.

background-color: #DDD;
color: #000;
text-decoration: none;
padding: 3px;
border: 1px solid #369;


Assuming the answer is in a link, most browsers will allow you to tab right to the answer and hit ENTER or the DOWN ARROW (Chrome) to reveal the answer. But what about the question?

You could embed the question in a link, but another option is to add a tabindex="0" attribute to whatever tag the question lives in (it could be an H tag or a P tag). The tabindex is a signal to “stop here” during keyboard navigation, and setting the value to “0” keeps the element in the natural tab order without changing it.

You can also add a tabindex to the answer, but since I buried it in an A tag to use its hover effects, that would be redundant. So let’s look at the HTML for this.

View the HTML

<h4 tabindex="0">Should all a:hover styles have an a:focus counterpart?</h4>

<p><a href="#" class="answer">Answer: You bet it should!</a></p>
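Putting the pieces together, here is a minimal, self-contained sketch of the whole technique (the class name answer matches the snippet above; colors are the ones used in this post):

```html
<style>
  /* Hidden state: text color matches the background, so the answer is invisible */
  a.answer {
    display: block;
    background-color: #369;
    color: #369;
    text-decoration: none;
    padding: 3px;
    border: 1px solid #369;
  }
  /* Revealed state: triggered by mouseover OR keyboard focus */
  a.answer:hover,
  a.answer:focus {
    background-color: #DDD;
    color: #000;
  }
</style>

<!-- tabindex="0" lets keyboard users stop on the question itself -->
<h4 tabindex="0">Should all a:hover styles have an a:focus counterpart?</h4>
<p><a href="#" class="answer">Answer: You bet it should!</a></p>
```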

Posted in Uncategorized