Self-harm monitoring software has not been proven to protect students, explain Sara Collins of Public Knowledge and her coauthors in a recent report released by the Future of Privacy Forum. They argue that, absent careful regulation, this software may have unintended negative consequences for student privacy, equity, and mental health.
“Absent other support, simply identifying students who may be at risk of self-harm—if the system does so correctly—will, at best, lead to no results,” and, at worst, could trigger a harmful response, warn Collins and her coauthors.
Collins and her coauthors analogize self-harm monitoring software to the reviewing, flagging, and alerting that credit card companies perform for suspicious transactions. Self-harm monitoring software used by schools works by scanning student activity across school-issued devices, including email, social media, documents, and other online communications. The software then either keeps a record of all activity for school administrators to search through or collects only flagged activities. When the software identifies a self-harm risk, it reports the activity to school employees and, in some cases, alerts third parties such as law enforcement.
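In essence, the report describes a scan, flag, and alert pipeline. The sketch below is a minimal illustration of that general pattern, assuming a simple keyword match; the Activity class, FLAG_TERMS list, and alert callbacks are hypothetical stand-ins and do not reflect any vendor's actual product, which may rely on proprietary models and far broader data collection.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Optional

# Hypothetical monitored terms; real products use proprietary
# classifiers and much larger term lists.
FLAG_TERMS = {"hurt myself", "end my life", "self-harm"}


@dataclass
class Activity:
    student_id: str
    channel: str  # e.g., "email", "document", "chat"
    text: str


def is_flagged(activity: Activity) -> bool:
    """Return True if the activity text matches any monitored term."""
    lowered = activity.text.lower()
    return any(term in lowered for term in FLAG_TERMS)


def monitor(
    activities: Iterable[Activity],
    keep_all: bool,
    alert_school: Callable[[Activity], None],
    alert_third_party: Optional[Callable[[Activity], None]] = None,
) -> List[Activity]:
    """Scan activities, retain either everything or only flagged items,
    and send alerts for flagged items (optionally to a third party)."""
    retained: List[Activity] = []
    for activity in activities:
        flagged = is_flagged(activity)
        if keep_all or flagged:
            retained.append(activity)  # record kept for administrators to search
        if flagged:
            alert_school(activity)  # report to school employees
            if alert_third_party is not None:
                alert_third_party(activity)  # e.g., law enforcement, per district policy
    return retained


# Example usage with placeholder alert handlers:
# monitor([Activity("s1", "email", "I want to hurt myself")],
#         keep_all=False, alert_school=print, alert_third_party=print)
```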