Over the last 8-10 months, a handful of folks from the Schreyer Institute, Teaching and Learning with Technology, and the Office of Institutional Planning and Assessment have been discussing and researching learning analytics. If you've never heard the term before, the easiest way to explain it is by looking at companies like Netflix and Amazon. These companies leverage your personal renting or buying habits, comparing you to hundreds (or thousands) of similar users to recommend what you should rent or purchase next. Learning analytics applies these same practices in support of education. Specifically, learning analytics is:
“the use of analytic techniques to help target instructional, curricular, and support resources to support the achievement of specific learning goals” (Barneveld et al., 2012).
Many of these early efforts, such as the Signals project at Purdue University, live within a university’s course management system (CMS). These tools generate a risk assessment for each student in a course by taking into account the student’s demographic and historical data (age, gender, past GPA, SAT scores, etc.) and combining it with CMS activity data (number of logins, grade book data, number of forum posts, etc.). A faculty member typically initiates the risk assessment and receives a ‘risk level’ for each student; at that point, the faculty member can intervene with those at high risk of not succeeding in the course.
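To make the idea concrete, here is a minimal sketch of how a Signals-style system might fold historical and activity data into a traffic-light risk level. The feature names, thresholds, and weights are all illustrative assumptions on my part, not Purdue's actual model.

```python
# Hypothetical sketch of a Signals-style risk flag. Features, thresholds,
# and weights are invented for illustration; they are not Purdue's model.

def risk_level(gpa, logins_per_week, forum_posts, current_grade_pct):
    """Combine historical and CMS activity signals into a simple risk label."""
    score = 0.0
    # Historical signal: lower past GPA raises risk.
    if gpa < 2.5:
        score += 2
    elif gpa < 3.0:
        score += 1
    # Activity signals: low CMS engagement raises risk.
    if logins_per_week < 2:
        score += 2
    if forum_posts == 0:
        score += 1
    # Performance signal: current grade book standing.
    if current_grade_pct < 70:
        score += 2
    # Map the accumulated score onto a traffic-light style risk level.
    if score >= 4:
        return "high"      # red
    if score >= 2:
        return "moderate"  # yellow
    return "low"           # green

print(risk_level(gpa=2.2, logins_per_week=1, forum_posts=0, current_grade_pct=65))  # high
print(risk_level(gpa=3.6, logins_per_week=5, forum_posts=4, current_grade_pct=88))  # low
```

A real system would of course learn these weights from data rather than hard-coding them, but the shape is the same: many weak signals rolled into one flag a faculty member can act on.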
Overall, I think this is a wonderful idea, and the folks at Purdue ran a few studies that illustrate how effective the system can be at keeping students at a “C” or above. A new system from Austin Peay State University just crossed my desk that exists outside of the CMS, at the course registration level. When students log in to register for a course, they are presented with two pieces of information:
- The highest-rated courses you should be taking (based on your major, semester course load, and other data)
- Your predicted grade for the course (generated by comparing your historical transcript data to 10 years’ worth of other students with similar characteristics)
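The prediction in that second bullet is essentially a nearest-neighbor idea: find past students whose records look like yours and average how they did in the course. Here is a minimal sketch under that assumption; the similarity measure, feature names, and data are all hypothetical, not Austin Peay's actual method.

```python
# Hypothetical sketch of a "predicted grade" via nearest neighbors.
# The features (GPA, credits completed), the distance function, and the
# sample data are invented for illustration.

def predict_grade(student, history, k=3):
    """Predict a grade point by averaging the k most similar past students."""
    def distance(a, b):
        # Weigh GPA difference against credits completed (scaled to ~2 semesters).
        return abs(a["gpa"] - b["gpa"]) + abs(a["credits"] - b["credits"]) / 30
    # Rank prior students by similarity, then average the top k outcomes.
    nearest = sorted(history, key=lambda past: distance(student, past))[:k]
    return sum(p["grade_in_course"] for p in nearest) / len(nearest)

# Toy transcript history: each record is a past student's profile plus the
# grade they ultimately earned in this course.
history = [
    {"gpa": 3.8, "credits": 60, "grade_in_course": 4.0},
    {"gpa": 3.1, "credits": 45, "grade_in_course": 3.0},
    {"gpa": 2.4, "credits": 30, "grade_in_course": 2.0},
    {"gpa": 3.7, "credits": 75, "grade_in_course": 3.7},
]
student = {"gpa": 3.6, "credits": 58}
print(round(predict_grade(student, history), 2))  # 3.57
```

Even in this toy form you can see the concern discussed below: the output looks authoritative (a single number), but it is only an average over whoever happened to resemble you in the past.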
This is a very interesting idea, but how will students interpret the data? Will they intentionally register for courses where the system predicts a “B” or above, even if they are not interested in the content? Will they register only for highly recommended courses and never pursue other interests? A colleague mentioned that this could quickly spiral into a self-fulfilling prophecy, where students take the prediction data as fact and never deviate from the recommendations. Scary stuff.
With any learning analytics system, a key challenge will be educating and training end users on how best to leverage the data. In many instances, that means being skeptical of it, and treating it as one of many factors that inform a decision in support of student success.