Russia Blocks Telegram (and takes Google and Amazon with it, too)

Russia has never quite been a bastion of free speech. Its authoritarian government works to ensure that its citizens only hear and say things that it approves of, a task that has become increasingly difficult with the rise of technology and the internet. The Russian government has an agency responsible for overseeing and censoring electronic communication, called Roskomnadzor. Within the past few days, Roskomnadzor took aim at a messaging app called Telegram.

Telegram is a messaging app that made waves when it launched in 2013. The app uses a custom security protocol that allows for end-to-end encryption, meaning that a message is encrypted from the moment it leaves one device until the moment it arrives on another. Telegram is not owned by any larger parent company (à la WhatsApp or Facebook Messenger), and the messages and other data sent through its encrypted chats cannot be read by anyone except the sender and recipient, not even the creators of the app. Telegram’s secure and encrypted nature obviously puts it at odds with a government heavily invested in censoring its citizens, so a few days ago, Russia decided to block Telegram (or, at least, it tried…)
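To make the end-to-end idea concrete, here is a minimal sketch of public-key message encryption using the PyNaCl library. This is a generic illustration, not Telegram's actual MTProto protocol, and the message text is invented for the example:

```python
# Generic end-to-end encryption sketch (NOT Telegram's MTProto):
# only the recipient's private key can decrypt, so any server relaying
# the message sees nothing but ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place"
```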

Russia’s argument for blocking Telegram is that its focus on privacy makes it the perfect tool for terrorism. This is a legitimate concern: any app that offers full privacy will attract people with good reason to keep their activities hidden (that is, people doing something illegal). A similar problem arose in Indonesia last year, where the Indonesian government issued Telegram an ultimatum: block terrorist propaganda or the app would be banned. Telegram ended up complying. It formed a team that combed through Indonesia’s Telegram communities and removed terrorist-related content. Although Russia does have a legitimate rationale for wanting to block Telegram, one has to wonder whether there are ulterior motives. This also goes back to the age-old question of privacy vs. protection. Are people willing to give up their privacy so that the government can (ostensibly) provide more protection and stop problems before they start? In the United States, the answer is no, and I’d imagine that many Russians would agree. Nonetheless, the Russian government went forward with blocking Telegram.

Russia’s block on Telegram was carried out in perhaps the worst way possible. Telegram’s assets and services are hosted on Google’s and Amazon’s cloud platforms, so rather than identify specific Telegram servers, the Russian government issued blanket bans on IP address ranges associated with the two cloud providers. In total, the Russian government blocked 15.8 million IP addresses (each one potentially a different website or web service) in the name of blocking Telegram. Many sites completely unrelated to Telegram have gone down as a result, including the popular messaging app Viber, used by many Russians. This blanket ban is the equivalent of finding one weed in a flowerbed and deciding that the best course of action is to dig up the entire garden.
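As a rough illustration of why range-level blocking causes so much collateral damage, the sketch below checks a handful of hosts against blocked subnets. The addresses and hostnames are invented for the example; they are not the real blocked ranges:

```python
# Hypothetical sketch: blocking entire cloud-provider subnets instead of
# individual servers. Addresses and names below are invented for illustration.
import ipaddress

blocked_ranges = [
    ipaddress.ip_network("203.0.113.0/24"),   # pretend cloud subnet
    ipaddress.ip_network("198.51.100.0/24"),  # another pretend subnet
]

services = {
    "telegram-relay.example": "203.0.113.7",   # the intended target
    "small-shop.example": "203.0.113.42",      # unrelated site, same subnet
    "viber-like-app.example": "198.51.100.9",  # unrelated app, also caught
}

for name, addr in services.items():
    ip = ipaddress.ip_address(addr)
    blocked = any(ip in net for net in blocked_ranges)
    print(f"{name}: {'BLOCKED' if blocked else 'reachable'}")
```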

The Russian government’s clumsy ban shows just how far it is willing to go in its aim of censorship. It is willing to take out large sections of the internet with little thought just to (try to) stop one encrypted messaging app from functioning. There are, of course, ways around the ban, and many are willing to risk potentially getting caught so that they can continue to message in private. Only time will tell if the Telegram ban holds, and where the Russian government will focus its censorship efforts next. For now, the Telegram ban reveals that the Russian government is deathly afraid of giving its citizens a voice, lest it hear what they have to say.

 

Sources:

https://www.theverge.com/2018/4/17/17246150/telegram-russia-ban

https://www.msn.com/en-us/finance/technology/google-amazon-drawn-into-telegram-ban-as-russia-blocks-millions-of-ip-addresses/ar-AAvYOxj

http://piunikaweb.com/2018/04/17/telegram-is-still-alive-in-russia-despite-huge-collateral-damage/

https://www.theverge.com/2017/7/17/15980948/telegram-indonesia-isis-terrorism-moderation-ban

The Dark Side of Self-Driving Cars

At about 10 pm on Sunday, March 18th, a woman named Elaine Herzberg was hit by a car. The driver was uninjured, but Herzberg died hours later in the hospital from her injuries. As sad as that is, it’s not exactly an uncommon occurrence in today’s society, and hardly national news. There was, however, one aspect of the crash that was different: the person behind the wheel wasn’t actually driving, because the car was one of Uber’s self-driving vehicles.

The crash happened in Arizona, one of the places where Uber has been testing its self-driving car technology, along with San Francisco, Pittsburgh, and Toronto. In response, Uber has halted its testing of self-driving cars until the police investigation into the accident is resolved. Toyota, another company developing self-driving car technology, also decided to temporarily suspend its testing because “[they] feel the incident may have an emotional effect on [their] test drivers.”

The Uber test driver, a 44-year-old woman, claimed that Herzberg appeared suddenly out of the shadows, and that the crash happened before she realized there was any danger. It is also true that Herzberg did not cross the street at a crosswalk. However, one would hope that all of the fancy technology that a self-driving car is outfitted with would be able to detect a pedestrian crossing the street.

Because the event occurred only a few days ago, and because the investigation is still ongoing, there is not much information available to the public. However, we do know that this is the first incident of a pedestrian being killed by a self-driving car. Whatever results from the police investigation will set a precedent for future events, which unfortunately are almost certain to occur. After reviewing the footage captured by the car’s cameras, police will determine how much blame, if any, should be allotted to Uber and the safety driver.

Because self-driving car technology is so new, the application of current traffic laws to self-driving car incidents is difficult. Although the safety driver’s car hit the pedestrian, the safety driver wasn’t actually driving the car. Could this still be considered a case of vehicular manslaughter? Or is it simply a case of negligence?

Regardless of who may be at fault, Uber’s case will remain in the public eye for the next few weeks, or at least until it’s resolved. It is important to remember that self-driving cars account for very few of the total number of vehicle-related deaths. Of course, no one should have to die in a vehicle-related accident, but some context is important when considering the Uber situation. In 2016, there were 34,439 fatal crashes, up from 32,539 in 2015. Of those 2016 crashes, self-driving cars were involved in very few, most likely no more than 10. One publicized incident in 2016 was the death of a Tesla driver who had Autopilot activated and crashed into a trailer. Unlike the thousands of car-related deaths that never make national news and are mentioned only in passing on local stations, the Tesla incident was highly publicized.
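For a rough sense of scale, treating “no more than 10” as a generous upper bound (an assumption based on the figures above):

```python
# Back-of-the-envelope share of 2016 fatal crashes involving self-driving
# cars, using the figures cited above and an assumed upper bound of 10.
fatal_crashes_2016 = 34_439
self_driving_involved = 10  # assumed upper bound

share = self_driving_involved / fatal_crashes_2016
print(f"{share:.4%}")  # roughly 0.03% of fatal crashes
```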

Some are calling for self-driving car companies to stop testing their autonomous vehicles on public roads. However, self-driving cars will only improve with testing, and when human-driven cars are phased out, vehicle fatalities will plummet. Perhaps the Uber incident was human error on the part of the safety driver. Perhaps it was an unfortunate coincidence. Or perhaps it truly was a shortcoming of Uber’s technology that caused the death. Elaine Herzberg should not have been hit and killed on the night of Sunday the 18th, but hopefully this incident will give pause to the companies working on self-driving car tech, so they can reflect on the enormous responsibility they take on when they put their cars on the streets. And in a few decades, much-improved self-driving car technology will make our roads safer and more efficient than we can possibly imagine.

Sources:

https://www.theverge.com/2018/3/19/17139518/uber-self-driving-car-fatal-crash-tempe-arizona

https://www.theverge.com/2018/3/19/17140936/uber-self-driving-crash-death-homeless-arizona

https://www.theverge.com/2018/3/20/17142672/uber-deadly-self-driving-car-crash-fault-police

https://www.theverge.com/2018/3/20/17143838/toyota-av-testing-uber-self-driving-car-crash

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812451

https://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s

Deepfakes: The Morals of Swapping Faces

Machine learning and artificial intelligence have been the hottest topics of the tech world for quite a while now. With the power to emulate human thought, computers can now tackle a whole new range of problems. Computers can safely navigate cars through major cities, predict emergency room waiting times, and even play the ancient Chinese game of Go better than any human alive. But with any technology, there is always the chance, perhaps even the inevitability, that it will be used for less-than-favorable purposes. Such is the case with FakeApp.

The idea is simple: take a source video of a person and replace their face with someone else’s. This is done with a machine learning algorithm that analyzes the source video and a large collection of photos of the target face, and learns how to map the target face onto the source video. The resulting videos are called “deepfakes”. If you’re interested, FakeApp is freely available to download, but the goal of this post is to examine the implications of this technology rather than to explain how to use it.
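That said, it helps to have a picture of the mechanism. The approach commonly described for these face-swap tools is an autoencoder with one shared encoder and a separate decoder per identity; swapping means decoding person A's frames with person B's decoder. Below is a minimal PyTorch sketch of that idea. It is an assumption about the general technique, not FakeApp's actual code, and all layer sizes are arbitrary:

```python
# Minimal shared-encoder / per-face-decoder sketch (illustrative only,
# not FakeApp's implementation). Layer sizes are arbitrary.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a 64x64 face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# Training reconstructs each person's faces through their own decoder;
# swapping runs person A's frames through person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_from_video_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(frame_from_video_a))  # A's pose/expression, B's face
print(swapped.shape)                              # torch.Size([1, 3, 64, 64])
```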

First, we should discuss feasibility. FakeApp is free to download, and there are plenty of mirrors, so even if one file-sharing website removes it, it will be available somewhere else. However, to get convincing results, the user needs a large collection of photos of the target face from many different angles. They also need a powerful computer to train the model; otherwise the process would take too long to be feasible. Even then, training takes hours. In short, anyone with a good computer, a lot of pictures, and a lot of time could make deepfakes.

The technology is very new. It was introduced by Reddit user deepfakes (who has since been banned) on December 11th, 2017. This user posted a few videos to various subreddits (communities within Reddit) and piqued interest before eventually releasing his tool. Since its release, it has been used for some very funny projects, including a series that swaps Nic Cage’s face into a variety of famous scenes from different movies.

However, deepfakes are most commonly used for a more questionable purpose: fake pornography. Deepfake creators gather large photo sets of famous people (generally actresses) and map their faces onto pornographic videos featuring similar-looking performers. This raises concerns, both moral and legal.

The primary question deals with consent. Obviously, the famous person did not consent to having their face used in a pornographic video. But is consent needed? The famous person isn’t actually doing anything; they are just having their likeness used in an unflattering way. They could attempt to sue for defamation, but it is difficult for celebrities to win with that argument, because being in the public eye is the nature of their career.

Then there is the question of whether this should be allowed to happen at all. On one hand, the argument could be made that it’s not really hurting anyone; on the other hand, there is something fundamentally messed up about making it look like someone did something compromising that they didn’t really do.

And therein lies the major issue. While deepfakes thus far have mainly involved celebrities, there is no reason that someone with enough photos of a coworker or an ex couldn’t make a deepfake with their likeness. These deepfakes, if convincing enough, could be used for revenge or blackmail. This once again prompts the moral and legal questions, but the context changes when the target is a private citizen. They could sue for defamation, but the legal territory around deepfakes is murky: since they were never actually put in a compromising position, they would have to carefully build a case showing clear evidence of defamation or other personal harm. By that point, the person who created the deepfake may already have gotten what they wanted.

As of late, many platforms have banned deepfakes, and have worked to remove them from their websites. These platforms include Discord (a chat service targeted at gamers), YouTube, Twitter, Reddit (where it all began), and even Pornhub. However, deepfakes will never fully go away. Deepfake creators have their own secretive communities where they share videos, photo collections, and tricks, and it’s fairly straightforward for anyone to download the app and make a deepfake themselves. Perhaps in the near future we’ll see the first deepfake-related case go to court, or perhaps the deepfake creators will retreat to their secret communities and remain there without provoking attention. Regardless, this application of machine learning is remarkable in its relative simplicity and the depth of moral and ethical issues it manages to raise.

 

Sources:

https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn

https://www.theverge.com/2018/1/30/16945494/deepfakes-porn-face-swap-legal

https://www.theverge.com/2017/12/12/16766596/ai-fake-porn-celebrities-machine-learning