Posted on March 13, 2019
ALGORITHMS FIGHTING TERRORIST CONTENT – A THREAT TO OUR BASIC FREEDOMS?
The Islamic State efficiently weaponized social media with its hashtag #alleyesonISIS and the publication of thousands of YouTube videos.[1] The harrowing videos of beheadings and other ghastly executions spread across the Internet, to the inspiration of some and the abhorrence of most.[2] This social media propaganda helped recruit more than 40,000 foreign fighters from 110 countries.[3]
The Internet is a highly efficient propaganda machine because content uploaded to one webpage spreads like wildfire to other platforms. This is why heads of state from all over the world have called on the industry to do more, and faster, to strike down terrorist content with new technology.[4] However, current algorithms for detecting terrorist and extremist content do not have the same ability as humans to distinguish between legal and illegal content.[5] The question thus becomes: are we ready to trade our right to freely express and receive information in exchange for security?
In this blog post I want to address the tension between our efforts to win the online war against terrorism and the responsibility to respect and protect our right to freely express and receive information. I do not aim to answer the questions that arise but rather to highlight some of the challenges that must be addressed. I start by looking briefly at international and European human rights law. I then turn to a recent legislative proposal from the European Union that calls for the development and use of automatic detection tools to rid us of “terrorist content”. Next, I look at how this pressure from world leaders and legislators has shaped company conduct, using YouTube as an example. Finally, I offer some thoughts on the limitations of the current technology and how the rush to use it may seriously impact our fundamental freedoms.
1. The Law
Freedom of expression is a fundamental right enshrined in the constitutions of most democratic states. It also forms an integral part of international human rights treaties. According to article 19 of the Universal Declaration of Human Rights, ”[e]veryone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”[6] This right was further codified in article 19 of the International Covenant on Civil and Political Rights (ICCPR), which means that the Covenant’s 172[7] state parties are obligated to respect, protect and fulfil it.[8] As the quoted article emphasises, the right to freely seek and receive information is an integral part of the human right to free expression.
Freedom of expression also holds a central place in the European Convention on Human Rights (ECHR). Similarly to ICCPR article 19 nr. 3, ECHR article 10 provides that any restriction that removes information or access to it must be “prescribed by law and necessary in a democratic society”, as well as protect a legitimate interest listed in article 10 nr. 2, such as “national security” or “public safety”.[9]
As the prime interpreter of the ECHR, the European Court of Human Rights has repeatedly emphasised the importance of scrutinizing national decisions to censor the publication of information in whatever form. In Yildirim v. Turkey, the Court held that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court, (…) for news is a perishable commodity and to delay its publication, even for a short period, may well deprive it of all its value and interest.”[10] The Court further stated that prior restraints are “not necessarily incompatible with the Convention as a matter of principle”, but that any such restraint must be subject to a legal framework ensuring ”both tight control over the scope of bans and effective judicial review to prevent any abuse of power”.[11] This legal test is just as important to uphold on the Internet as it is for traditional media. The Court acknowledged the special role of the Internet in today’s information environment in Times Newspapers Ltd v. the United Kingdom, stating that “[i]n the light of its accessibility and its capacity to store and communicate vast amounts of information, the Internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information in general.”[12]
These human rights obligations demand that governments strike a fair balance between the protection of free speech and the need to curtail terrorist propaganda in order to prevent terrorist activities. Governments are not released from this responsibility when they demand that private companies effectively censor on their behalf.
2. The Rush to Crack Down on Offensive Content
For years, online platform providers have worked both separately and together to remove terrorist and extremist content.[13] The pressure to take action has come from all corners of the world. Back in 2016 the Obama administration made it quite clear that Silicon Valley should do more to combat terrorists’ use of its online platforms.[14] Since then, leaders from all over the world have joined in pressuring private companies to act faster and with more vigour to crack down on terrorist propaganda.[15]
In response to this pressure, the tech giants are taking steps to remove unwanted content such as terrorist propaganda. Google, YouTube’s parent company, reports that it removed 1,667,587 channels and a staggering 7,845,400 videos during a single three-month period in 2018.[16] The videos were removed because they breached the YouTube Community Guidelines, which prohibit content featuring incitement to violence, harassment, pornography or hate speech.[17] Of these removals, 6,387,658 videos were flagged automatically, and 74.5% of those were removed before receiving any views, effectively preventing publication.[18] Although nowhere near the height of their power and popularity, ISIS supporters still managed to upload 1,348 YouTube videos that generated 163,391 views between March and June 2018.[19]
Google and Facebook have previously stated that human beings review whether to remove content or not.[20] The reality, however, is that a large share of removals are effected by automated or semi-automated decisions.[21] As I shall highlight below, the sophistication of this technology is decisive for whether any laws obliging the companies to continue this practice comply with the right to express and receive information.
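To make “automated flagging” concrete, here is a minimal sketch assuming a hash-matching approach of the kind platforms use against re-uploads of already-identified videos. Everything in it is illustrative: `KNOWN_TERRORIST_HASHES` and `review_upload` are invented names, and real systems rely on perceptual rather than cryptographic hashes so that re-encoded or slightly edited copies still match.

```python
import hashlib

# Hypothetical database of digests of previously identified terrorist videos,
# standing in for an industry hash-sharing database. The entry is a placeholder
# (it happens to be the SHA-256 digest of the empty byte string).
KNOWN_TERRORIST_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def review_upload(file_bytes: bytes) -> str:
    """Block exact re-uploads of known material before publication."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_TERRORIST_HASHES:
        return "blocked"    # removed with zero views, as in YouTube's statistics
    return "published"      # may still be flagged later by classifiers or users

print(review_upload(b""))   # "blocked": the empty file's digest is in the set
```

The design choice matters for the legal analysis: a pure re-upload filter only ever removes content a human once judged illegal, whereas the classifiers discussed below must make that judgment themselves.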
- EU Proposal: Legal Duty to Proactively Eradicate Terrorist Content
Concluding that voluntary measures by the private companies behind these platforms are insufficient to win the battle against terrorist propaganda, the European Commission launched a hard-hitting legislative proposal in September 2018 targeting “terrorist content” specifically.[22] According to the press release, this term refers to “material and information that incites, encourages or advocates terrorist offences, provides instructions on how to commit such crimes or promotes participation in activities of a terrorist group.”[23]
The private companies covered by the proposal are “all hosting service providers offering services in the EU”.[24] This definition is so broad that it is likely to affect all services hosting user content, no matter where they are based, as long as they offer services in the European Union.[25] The provisions aim to compel companies to take both proactive and reactive measures to reduce the amount of terrorist content online.
The reactive duty includes removing any content flagged by the relevant member state authority within one hour of notice.[26] A daunting fine of up to 4% of global turnover looms in the background if a company systematically fails to comply with these “removal orders” in time.[27] The one-hour limit might sound short, but if the goal is to take down terrorist content before it does too much damage, it may even be too long. A report from 2018 spells it out for us: every day more than 2.5 quintillion bytes of data are created, and every minute we make more than 3,877,140 Google searches, watch more than 4,333,560 YouTube videos, and send more than 473,400 tweets.[28] Contemplating these numbers, one can only imagine how much impact a video or tweet from ISIS could have in 60 minutes.
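To put the one-hour window in perspective, here is a quick back-of-the-envelope calculation scaling the per-minute figures from the DOMO report cited above; the arithmetic is only meant to illustrate orders of magnitude.

```python
# Per-minute figures from the DOMO "Data Never Sleeps 6.0" report,
# scaled to the regulation's one-hour removal window.
PER_MINUTE = {
    "Google searches": 3_877_140,
    "YouTube videos watched": 4_333_560,
    "tweets sent": 473_400,
}

for activity, rate in PER_MINUTE.items():
    print(f"{activity}: {rate * 60:,} per hour")
# Google searches: 232,628,400 per hour
# YouTube videos watched: 260,013,600 per hour
# tweets sent: 28,404,000 per hour
```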
In recognition of these statistics, the legislative proposal includes a duty for companies to deploy “automated detection tools where appropriate and when they are exposed to the risk of hosting terrorist content.”[29] These automated detection tools are algorithms that can sift through an enormous amount of data in a very short time. The caveat is that automated tools may fail to differentiate between “terrorist content” and what is merely “the expression of radical, polemic or controversial views in the public debate on sensitive political questions”.[30] There is a real risk that an algorithm detects and flags perfectly legal content, as the sketch below illustrates.
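Real detection systems are statistical classifiers rather than keyword lists, but a deliberately naive toy example exposes the core problem they share: surface features do not reveal intent. The trigger terms, function name, and example sentences below are all invented for illustration.

```python
# A deliberately naive keyword filter: it flags any text containing a trigger
# term, regardless of who is speaking or why.
TRIGGER_TERMS = {"beheading", "join the caliphate"}

def naive_flag(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in TRIGGER_TERMS)

propaganda = "Brothers, join the caliphate!"
journalism = "The documentary condemns the beheading videos used for recruitment."

print(naive_flag(propaganda))   # True  -- correctly flagged
print(naive_flag(journalism))   # True  -- false positive: reporting, not incitement
```

A statistical model trained on labelled examples would do better than this on average, but it still classifies by correlation with surface features, which is exactly why counter-speech and news coverage of terrorism are at risk of removal.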
The Commission is aware that this automated process must comply with the human rights framework protecting the freedom to express and receive information. The proposal therefore includes several provisions intended to secure these rights, including a duty to ensure “oversight and human assessment” of detected content and to enforce “effective safeguards to ensure full respect of fundamental rights, such as freedom of expression and information.”[31]
On the face of it, this seems like a sound way to strike the balance between waging war on terrorist content and respecting fundamental human rights. The problem is that the legislation sets great store by the sophistication of automatic detection technology. It is hard to believe that the European Commission actually expects human oversight of all content flagged by an algorithm. The details of the proposal are not yet out, so it remains unclear whether this would be a strict obligation. In any case, using automated tools both to detect and to decide on removal is very tempting if the goal is to take down terrorist propaganda before it spreads.
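One plausible way for a platform to reconcile the one-hour deadline with “oversight and human assessment” is confidence-based triage, sketched below under assumed thresholds: only a middle band of uncertain cases ever reaches a human, while the most confident detections are removed automatically. It is precisely that automatic tier that raises the questions listed next.

```python
def triage(confidence: float) -> str:
    """Route flagged content by model confidence. The thresholds here are my
    own illustrative assumptions, not drawn from the proposal or any real
    deployed system."""
    if confidence >= 0.99:
        return "auto-remove"   # no human sees it before it disappears
    if confidence >= 0.60:
        return "human review"  # the only tier with genuine human oversight
    return "publish"
```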
Whether these automated detection and decision tools stay within the limits of the human rights legal framework depends, as I see it, on at least three questions:

1) whether the technology is sophisticated enough for an algorithm to differentiate between “terrorist content” and other content;

2) whether the algorithm can perform the human rights balancing test when it decides to remove content at the cost of the freedom to express and receive information; and

3) even if it can, whether automated decisions will make judicial review impossible because the algorithm may be unable to explain its legal analysis in terms humans can understand.
- Misconceptions on the Sophistication of the Technology
So how sophisticated is this automated detection and decision technology? Is it up to the job of replacing a human who can distinguish legal from illegal content and perform the requisite human rights analysis? A number of voices within the legal tech community think it is not. The Center for Democracy and Technology (CDT) published a report on the limitations of automated social media content analysis, emphasising that the technology is not sophisticated enough to comprehend “the nuanced meaning of human communication or to detect the intent or motivation of the speaker.”[32] It is therefore important that politicians and legislators understand these limitations before they make statements or enact legislation demanding what cannot be done without compromising our basic human rights.
Stakeholders all over the world have reacted to the European Commission’s press release on the new “terrorist content” legislation with messages of caution and warning. They include three United Nations Special Rapporteurs, the Council of Europe, private companies and NGOs.[33] One of these organisations, the Global Network Initiative (GNI), stated in a lengthy statement that the proposal as it stands “could unintentionally undermine [the shared objective of tackling dissemination of terrorist content online] …by putting too much emphasis on technical measures to remove content, while simultaneously making it more difficult to challenge terrorist rhetoric with counter-narratives.”[34] The GNI also expressed concern that the regulation would place significant pressure on the affected companies to ”monitor users’ activities and remove content in ways that pose risks for users’ freedom of expression and privacy.”[35] That so many stakeholders share this concern suggests they do not believe the European legislator understands the limitations of the technology underlying the proposed duty to put “proactive” measures in place.
- Blindly Trading Liberty for Security?
The possibilities for using machine learning to automate decision-making may turn out to be both a blessing and a curse. The need for our politicians and jurists to have in-depth knowledge of emerging technology is mounting, and the online battlefield against terrorism demonstrates the stakes. Striking the balance between liberty and security is difficult, but until now it has at least been an issue where we could tell which way the wind was blowing when reviewing new legislation. A duty to use automated detection and decision-making tools may shake this safeguard. We must therefore ask ourselves whether we are about to enter, or perhaps have already entered, an era in which we unintentionally and unknowingly trade our fundamental right to express and receive information in exchange for security.
[1] Singer, P.W. and Emerson T. Brooking. LikeWar, New York: Houghton Mifflin Harcourt, 2018, p.5.
Greenemeier, Larry. Social Media’s Stepped-Up Crackdown on Terrorists Still Falls Short. (2018), https://www.scientificamerican.com/article/social-medias-stepped-up-crackdown-on-terrorists-still-falls-short/ [Cited 02/14/2019]
[2] Koerner, Brendan I. Why ISIS is Winning the Social Media War. (2016), https://www.wired.com/2016/03/isis-winning-social-media-war-heres-beat/ [Cited 02/14/2019]
[3] BBC.com. IS foreign fighters: 5,600 have returned home – report. (2017) https://www.bbc.com/news/world-middle-east-41734069 [Cited 02/05/2019]
[4] Sengupta, Somini. World Leaders Urge Big Tech to Police Terrorist Content. (2017), https://www.nytimes.com/2017/09/21/world/internet-terrorism-un.html [Cited 02/05/2019]
[5] Center for Democracy and Technology. Mixed Messages? The Limits of Automated Social Media Content Analysis. (2017), https://cdt.org/insight/mixed-messages-the-limits-of-automated-social-media-content-analysis/ [Cited 02/05/2019]
[6] United Nations. Universal Declaration of Human Rights. (Date unknown), http://www.un.org/en/universal-declaration-human-rights/ [Cited 02/05/2019]
[7] United Nations Treaty Collection. International Covenant on Civil and Political Rights. (2019), https://treaties.un.org/Pages/ViewDetails.aspx?chapter=4&clang=_en&mtdsg_no=IV-4&src=IND [Cited 02/09/2019]
[8] United Nations Human Rights Office of the High Commissioner. International Law. (Date unknown), https://www.ohchr.org/en/professionalinterest/Pages/InternationalLaw.aspx [Cited 02/09/2019]
[9] Convention for the Protection of Human Rights and Fundamental Freedoms, Rome, 4.XI.1950, Article 10 nr. 2.
[10] Case of Yildirim v. Turkey, Application no. 3111/10, 12/18/2012, paragraph 47.
[11] Case of Yildirim v. Turkey, Application no. 3111/10, 12/18/2012, paragraph 64.
[12] Case of Times Newspapers Ltd (nos. 1 and 2) v. the United Kingdom, Applications nos. 3002/03 and 23676/03, 03/10/2009, paragraph 27.
[13] Greenemeier, Larry. Social Media’s Stepped-Up Crackdown on Terrorists Still Falls Short. (2018), https://www.scientificamerican.com/article/social-medias-stepped-up-crackdown-on-terrorists-still-falls-short/ [Cited 02/14/2019]
[14] Handeyside, Hugh. Social Media Companies Should Decline the Government’s Invitation to Join the National Security State. (2016), https://www.justsecurity.org/28755/social-media-companies-decline-governments-invitation-join-national-security-state/ [Cited 02/14/2019]
[15] Sengupta, Somini. World Leaders Urge Big Tech to Police Terrorist Content. (2017), https://www.nytimes.com/2017/09/21/world/internet-terrorism-un.html [Cited 02/05/2019]
[16] Google. Transparency Report: YouTube Community Guidelines enforcement. (2018), https://transparencyreport.google.com/youtube-policy/removals?hl=en&total_removed_videos=period:Y2018Q3;exclude_automated:&lu=total_removed_videos [Cited 02/05/2019]
[17] Ibid.
[18] Ibid.
[19] Greenemeier, Larry. Social Media’s Stepped-Up Crackdown on Terrorists Still Falls Short. (2018), https://www.scientificamerican.com/article/social-medias-stepped-up-crackdown-on-terrorists-still-falls-short/ [Cited 02/14/2019]
[20] Council of Europe. “Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques (in particular algorithms) and Possible Regulatory Implications.” Council of Europe Study DGI(2017)12, p. 18.
[21] Ibid.
[22] European Commission. State of the Union 2018: Commission proposes new rules to get terrorist content off the web. (2018), http://europa.eu/rapid/press-release_IP-18-5561_en.htm [Cited 02/06/2019]
[23] Ibid.
[24] European Commission. State of the Union 2018: Commission proposes new rules to get terrorist content off the web. (2018), http://europa.eu/rapid/press-release_IP-18-5561_en.htm [Cited 02/06/2019]
[25] Bennett, Owen. The EU Terrorist Content Regulation – a threat to the ecosystem and our users’ rights. (2018), https://blog.mozilla.org/netpolicy/2018/11/21/the-eu-terrorist-content-regulation-a-threat-to-the-ecosystem-and-our-users-rights/ [Cited 02/06/2019]
[26] European Commission. State of the Union 2018: Commission proposes new rules to get terrorist content off the web. (2018), http://europa.eu/rapid/press-release_IP-18-5561_en.htm [Cited 02/06/2019]
[27] Ibid.
[28] DOMO. Data Never Sleeps 6.0: How Much Data is Created Every Minute? (2018/2019), https://www.domo.com/learn/data-never-sleeps-6 [Cited 02/06/2019]
[29] European Commission. State of the Union 2018: Commission proposes new rules to get terrorist content off the web. (2018), http://europa.eu/rapid/press-release_IP-18-5561_en.htm [Cited 02/06/2019]
[30] Ibid.
[31] Ibid.
[32] Center for Democracy and Technology. Mixed Messages? The Limits of Automated Social Media Content Analysis. (2017), https://cdt.org/insight/mixed-messages-the-limits-of-automated-social-media-content-analysis/ [Cited 02/06/2019]
[33] See for example: 1) Open Letter from the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, the Special Rapporteur on the right to privacy, and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, 12/07/2018, https://spcommreports.ohchr.org/TMResultsBase/DownLoadPublicCommunicationFile?gId=24234; and 2) Council of Europe. Misuse of anti-terror legislation threatens freedom of expression. (2018), https://www.coe.int/en/web/commissioner/blog/-/asset_publisher/xZ32OPEoxOkq/content/misuse-of-anti-terror-legislation-threatens-freedom-of-expression; see also https://edri.org/terrorist-content-regulation-warnings-from-the-un-and-the-coe/
[34] Global Network Initiative. GNI Statement on Europe’s Proposed Regulation on Preventing the Dissemination of Terrorist Content Online. (2019), https://globalnetworkinitiative.org/gni-statement-draft-eu-regulation-terrorist-content/#_ftn15 [Cited 02/06/2019]
[35] Ibid.