Civic Issues 8: Google, Fitbit, and the Messy World of Bioinformatics Privacy

Google is buying Fitbit. In this blog post, I’ll touch on why they are doing this, how the deal is working out so far, and why it hasn’t been closed yet. More importantly, I plan to dive into Google’s big-picture plan when it comes to bioinformatics, as well as the future of privacy policies for the data that is (literally) closest to your heart: health and fitness data.

In late 2019, Google announced it was acquiring Fitbit, a company that has been independently building fitness tracking equipment for twelve years. Google is following Apple, Samsung, and other tech firms in the wave of interest in fitness, health, and body tracking as the future of technology. Specifically, investing in wearables, like the trackers Fitbit makes, is a sensible move for Google, which currently does not offer many competitors to Apple’s and Samsung’s smartwatches. On the surface, this looks like a simple product buyout: now Google will have wearable fitness tech to compete in the burgeoning market. With the smartwatch industry projected to double by 2023, and with wearable technology increasingly becoming a part of daily life, it’s a business-savvy move to acquire Fitbit’s product line and team.

But there’s more to it than that. Google already has the talent and the hardware and software resources to begin creating its own wearables, and there’s plenty of room in the field for new intellectual property. In 2019, Google paid $40 million for workers and technology from the Fossil Group’s watchmaking research and development team. They already have plenty of innovative software and hardware teams that have worked on myriad Google products in the past. If Google really wanted to, they could create their own competitive wearables. Besides the Fitbit name and product line, what’s in it for them?

The health data of millions of Fitbit users is one possibility. Though Google representatives have stated they will not use this user health data for targeted advertising, there are plenty of other profitable (and, sometimes, ethically questionable) uses for this sort of data. This is especially true when the data is as sensitive as health and fitness information- just think of the legal tangle that is the healthcare privacy and insurance ecosystem in the United States.

When shared with employers, insurance companies, or government entities, health data could potentially decide whether or not employees get a health bonus of some kind, what rates they pay on insurance, what jobs they can hold, and what sorts of government benefits they have access to. Most of this is speculation, but it’s speculation based on a slippery slope we are already on when it comes to sharing health data. Employer health and wellness incentives are becoming more and more popular- which is great!- and they sometimes use easy tracking programs to encourage employees- which is worrying. A proposed bill could allow employers to force employees to choose between a mandatory DNA test (handing the results over, of course) and heavy fines. Many companies, such as Levi Strauss, are offering employees free DNA testing as a health and wellness perk.

Although medical professionals have warned that these tests are not particularly useful for predicting the variety of genetic diseases employees go looking for in their DNA, the tests continue to be popular. (A single gene mutation that happens to match a mutation found in someone who is high-risk for breast cancer, for example, does not usually mean that an otherwise low-risk person is suddenly high risk. For complex illnesses such as these, such diagnostic testing can do more harm than good, some doctors say, sowing seeds of unneeded anxiety.) Although I don’t have time to delve into the questionable privacy details of genetic testing and the future of the network of tested DNA that is being developed, the rise in popularity of such tests as a perk is just one sign that employers are getting more involved in employees’ health data management.

So, everyone wants as much of their own health data as they can get, and everyone else- corporations, employers, insurance companies- would like to get their hands on it as well. What could Google, specifically, be hoping for?

For one thing, healthcare systems all over the world are a complicated mess- especially in the United States. Any corporation that can make this complicated, expensive world easier for the everyday user to navigate could reap great rewards. Fitbit currently has direct relationships with governments, insurance providers, and employers all over the world, providing discounted trackers as part of special benefit packages. By buying Fitbit and its data, Google inherits these connections, landing its influence much closer to the individual than a colossal corporate conglomerate charging forth with a brand-new product could hope to get. It’s difficult to build those sorts of relationships, and having them gives Google a head start. For this reason, I speculate that Google is making this acquisition primarily to inherit the name and the established, trusted network of the Fitbit company.

This merger is taking a while to go through because there is a variety of issues to sort out (in fact, Google is facing several antitrust lawsuits already). More specifically, lawmakers are still concerned about letting the already-giant Alphabet conglomerate (Google’s parent company) swallow up Fitbit. The reason many of them are concerned? The data acquisition. Paired with Google’s recent interest in health artificial intelligence, it looks suspicious to policymakers worried about individual health privacy. Nonetheless, as of right now, it seems as if the deal will likely, eventually, go through.

Do I see anything directly morally questionable in the desire to obtain Fitbit’s name and network? Probably not. It’s a strategic business move. However, it signals that Google wants to get into the nitty-gritty of the healthcare system, communicating directly with employers, governments, and insurance providers. This means technology companies are going to be more and more inextricably linked with our health information, not just through the devices they manufacture and the services they maintain, but through the race to give healthcare a Silicon Valley makeover. When the companies creating the healthcare interfaces we use are no longer focused solely on health, it becomes more and more likely each day that the lines will blur between internet behavior pattern data and newly acquired medical data.

Google already uses search patterns to help track disease spread, and those models could probably be made much more effective with the (hopefully differentially private) reporting of average heart rate, step count, and so on in an area. The widening horizons of health artificial intelligence also suggest the data could be used to train algorithms. The road going forward is uncertain, and health privacy is precious. We must all remain vigilant and watch out for privacy pitfalls, remaining aware of who has the valuable information our bodies create.

Sources:

https://time.com/5717726/google-fitbit/

https://fortune.com/2018/04/19/levi-strauss-ceo-dna-testing/

https://www.forbes.com/sites/janetwburns/2017/03/14/gop-bill-could-force-employees-to-undergo-dna-testing-or-pay-thousands/#47171bb171fe

https://www.fool.com/investing/2020/02/10/google-buying-fitbit-is-not-a-done-deal-in-2020.aspx

https://www.forbes.com/sites/kateoflahertyuk/2020/02/21/googles-2-billion-fitbit-deal-time-to-quit-your-smartwatch/#633016ce3108

https://gizmodo.com/surprise-fitbits-first-new-product-under-google-is-a-f-1842564469

Civic Issues 7: What to Expect When You’re Connecting: Consumer Engagement Companies and You

Have you used an online service managed by PayPal, Venmo, OKCupid, or Grindr? What about HBO, Disney, Postmates, or Urban Outfitters? Microsoft? Citibank? KFC? Domino’s Pizza? NASCAR?

If so, an elusive company called Braze has a file of data on you, including your geolocation and your associations (the people you’ve interacted with).

If this reminds you of the authoritarian pandemic control developments I’ve talked about in previous blogs, you’re not alone, but this is a perfectly legal, common practice. Introducing: the vaguely named “consumer engagement company”. While corporations calling themselves this may have different functions, the label is often a euphemism for a company whose job is to collect key points of individual personal data, bundle them up, and use them to deliver targeted marketing campaigns. Braze is but one such corporation; it refers to the above retailers as its “customers” and has a website perfectly tailored to the needs and humanization of its corporate partners, faux-touching “read their stories” testimonials and all. Your data is the product. Dozens of brands are buying, and middle-man companies like this one are selling.

Consumer engagement companies assure the public that they keep data secure, but they generally provide no specific details on end-to-end encryption or whatever other safeguards are in place. Braze, in particular, says it only shares the data of willing individuals, but individuals who have reached out to the company to have their information deleted have not gotten any sort of response. Most users have never heard of Braze, or of the other companies in the same niche, and the entire field relies on this lack of user awareness. However, even when users do reach out, there seems to be nothing they can do. Users are the harmed party in this situation.

Why should you care? Firstly, as we’ve discussed, targeted advertising doesn’t just manifest out of thin air. I don’t have enough space to tackle every problem with the system in this blog post, but I’ll raise a few issues. Your data is a product, and corporations you have never heard of, which offer you no service and do not see you as a customer, are selling it. If nothing else, does this not inspire spite? Do you believe that, at least, there ought to be more transparency regarding who is selling your online patterns, locations, and contacts, and how to effectively keep them from doing so?

Secondly, the targeted advertising complex brings its own list of ethical quandaries. Sure, it could just be making regular advertising more efficient. Or it could be linking you directly to options that are worse than the ones outside your current advertising scope. Or, maybe, your advertisements on social and political issues are feeding into an echo chamber where you see only beliefs you already agree with, which can veer dangerously towards misinformation and real-world extremism.

Finally, the ad-revenue-based economy is reshaping the entire nature of the Internet. As attaining clicks- and thus more advertisement views and better search engine optimization spots- becomes the priority of every content creator on the Internet, large and small, advertisers included, the usefulness and quality of content often goes down. If a misleading, oversimplified, confusing, or inaccurate ad will get more clicks, it’s going to go out there. If, looking at your past patterns, a company like Braze thinks this sort of low-information, low-utility ad will get more clicks from you, they will serve it to you, regardless of its quality. And if you continue to engage with such content, the cycle continues.

The standard issues of targeted advertising, cookies, and online tracking aside, these companies frequently have your geolocation and your associations. Even if you still refuse to care about your internet usage patterns being tracked, I hope you can see why someone might be concerned about their location and their list of contacts being put in the hands of these companies and passed along to inform advertising decisions. I’m sure you want to protect information about your friends and family from being seen by dozens of pairs of corporate eyes. Maybe you see an ethical issue with getting ads that target you based on a profile of what your associations and people in your area seem to enjoy. For one thing, you might not even end up getting ads for things you like. Ever gotten an advertisement for something your friend adores, but you’ve never looked into? This is one of the ways that happens. Another issue: if your information was obtained from an app handling a great deal of sensitive information about you, such as a financial app like Venmo or a dating app like OKCupid, there are personal and financial ramifications to any leak that are greater than just the risk of targeted advertising.

And a leak doesn’t even have to occur for this sort of data sharing to hurt people. One recent study revealed that many dating apps share data that includes geolocation and sexual preference with advertising partners. For instance, closeted gay and bisexual men likely would feel less comfortable using Grindr if they knew information about their location and sexual orientation, which could potentially endanger them or expose them to prejudice in the offline world, was being used to profile them for targeted advertisements. They would feel even less secure if they knew how much of their data the app shared, through supposedly safe, but unspecified, channels with corporations for advertising purposes.

In fact, any sort of leak is an issue when it comes to lists of associations, geolocations, internet tracking data, and any other data that may or may not pass through companies like Braze on its way to advertisers. There are always going to be people out there who will be in a real-life dangerous situation if their location or someone’s contact list falls into the wrong hands. And, whether or not KFC knowing about your online behaviors and interests directly puts your life in danger, I’d argue it’s a matter of principle: you have a right to keep advertisers you have never met from knowing more about you than most of your work acquaintances do, for their own gain, if you don’t want them to do so.

So what can users do? Keep pestering corporations, spreading education and awareness, and shunning non-essential services that sell their information to such corporations. Above all, publicly discuss, petition, and vote in order to get new user-protecting privacy laws into place. When the Federal Communications Commission repealed net neutrality, it wasn’t just net neutrality that went, but an entire bundle of user-privacy-protecting rules and regulations. Such rules are frequently allowed to expire amid seemingly low civic engagement in their defense. The only solution right now seems to be civic persistence and education. I think the most important task right now is educating people on how the current targeted advertising system works, the lack of privacy regulations in place, and the ramifications of the way things are and might continue to be.

Here are some links to sources used to inform this blog post:

https://www.usatoday.com/story/tech/2020/02/24/why-okcupid-venmo-grindr-send-braze-personal-info/4845495002/

https://www.usatoday.com/story/tech/2020/01/15/tinder-grindr-okcupid-share-your-personal-information-study-says/4476084002/

https://www.braze.com/customers

Civic Issues 6: Children’s Privacy on the Internet

The other day, I was watching a bunch of solo jazz dance tutorial YouTube videos and adding them to a playlist for a student organization I’m an officer for (Swing Dance Club! Come check it out next semester!) when I was suddenly informed by a popup that I couldn’t add a certain video to a playlist because it had been marked as a video for children, and there are new rules about children’s content on YouTube. I guess one of the rules is “no playlists”? I was thrown off for several reasons. Firstly, the video wasn’t for children- I mean, there wasn’t anything explicitly adult about it, but it was a tutorial for a choreographed jazz routine first developed a hundred years ago. Is this the content toddlers want to see?

As I considered the playlist restriction, it started to make sense. For one, YouTube channels creating and curating borderline-nonsensical children’s content is a prime method of “ad-farming” (creating low-effort content for the sole purpose of earning advertising revenue) on the platform. For another, children’s content and adult or frightening content might be mixed in a playlist, whether intentionally to scare children or for other purposes. These are the reasons I could think of. I suppose the platform prefers that video order be selected carefully by the user, or by the trusted autoplay algorithm.

However, I decided to look into the law referenced in the notification, which is called the Children’s Online Privacy Protection Act, or COPPA for short. It was originally enacted in 1998, but recent changes are what prompted the new sorts of online developments I had run into. Broadly, the law requires that online tools, websites, and content directed at children under the age of 13 must have a “clear and comprehensive” privacy policy, keep children’s data secure, and provide notice and obtain parental consent (likely in the form of an “I agree to…” checkbox) before collecting information about children. Schools may also grant broad consent in place of parents on educational websites used solely for educational purposes. It was the first Internet-specific privacy law passed in the US, so it’s an important piece of technological legislative history.

I’m sure many of us have cherished childhood memories of checking a box that says something like “By checking this box, I certify that I am at least 13 years of age.” Those sorts of statements are probably there to ensure that the website is not subject to COPPA’s specific restrictions.

I looked to YouTube’s blog for clarification of its response to the recent changes. The viewers of videos deemed “children’s content” (by either the creator of the content or a machine-learning algorithm) are now treated as children regardless of the actual age of the user account watching the video. This designation disables some functions, such as comments, notifications, targeted advertising (because of decreased data collection), and, apparently, sharing to playlists.

There are people (primarily content creators, on both sides of the ad-farming divide) who are unhappy with these changes. One flaw of COPPA many of them point out is the monetary danger for individuals: civil penalties can go up to $42,530 per violation.

So what exactly are we protecting children from? Primarily, data collection. What sort of data collection? The present Internet might look different from the one in 1998, but, currently, it is mainly advertisers who collect the sort of user data COPPA restricts. The law prevents some predatory targeted advertising practices that prey on children’s tastes and possible lack of understanding of online commerce. In a situation like YouTube content, it’s less clear. There’s no targeted advertising, sure, but blocking actions like commenting seems to suggest a desire to keep children from interacting with content deemed age-inappropriate.

Does it work? Well, it’s easy to get around by simply making sure your content isn’t labelled as children’s content. We saw this in the age of “I certify that I am at least 13 years of age,” and we are going to continue to see it as all sorts of online content creators try to avoid COPPA restrictions. On the user’s side, it’s fairly easy for a child to mindlessly check a box and escape into the full adult world of non-privacy-protection. As for keeping “inappropriate” content away from children, again, it’s pretty easy to click a box or simply click away to a non-COPPA-protected video or website. The measures seem slightly performative, considering how easy it is to get around them if desired, but I suppose every step towards keeping uncomfortable or unsafe content from appearing unbidden in front of a child is a good one. What, however, are the specifics of the sorts of content children under the age of thirteen should not be allowed to see? Is it morally right to restrict created content this way, and would it be right if platforms were to take further steps to keep children from ever seeing content they deemed “inappropriate”?

These are the censorship questions fueling many content creators’ rage about COPPA. The Federal Trade Commission, which enforces COPPA (not to be confused with the FCC), maintains that the law is essential in its current state. (The FCC is the agency that came to widespread attention a few years ago when it repealed US net neutrality rules. Recently, it released a tone-deaf memo implying that the repeal of these rules was essential in order to help broadcasters provide aid in the midst of the coronavirus epidemic. I don’t get it either.)

Hopefully this has been an informative primer on the basics of children’s internet privacy in the US, and the ethical questions that continue to be debated, from the floor of Congress to your local Internet forum. The rights and protection of vulnerable groups have always been a matter of civic importance, and keeping an eye on how these issues are transformed on the Internet is an imperative of the modern civically engaged denizen.

Sources used in this blog post:

https://www.commonsense.org/education/articles/what-is-coppa

https://youtube.googleblog.com/2019/09/an-update-on-kids.html

Civic Issues 5: Let’s Talk About Russia

Listen, I know that, with the pandemic happening, talk of Russian election-meddling has fallen off the front page of the news, but it remains a pertinent civic issue all over the world. In this blog post, I’ll be talking more generally about the Russian approach to authoritarian technology and the right to privacy, how this relates to the country’s coronavirus response (I know, I said I was done talking about the pandemic, but it is such a quintessential example of pandemic response ushering in the end of privacy), and also the famous election meddling we’ve all heard about, which is, beneath all the coronavirus coverage, coming back with the US elections this year. Though the meddling is not usually an issue of privacy, it is an essential civic issue that threatens the very foundation of society.

Like China, Russia is developing an increasingly high-tech society with low expectations of individual privacy. Russia’s coronavirus containment response has been one of the more effective ones out there (though one always has to question whether case counts accurately reflect reality, due to a lack of prevalent and timely testing, the lag between testing and reporting, and the desire for a country and its strongman figurehead to appear superior and capable). The country started early and severely with quarantine measures, and has seized this moment to expand citizen surveillance in the name of pandemic protection. Indeed, the novel coronavirus will be a testing ground for many technological advances new to Russian citizen surveillance. Just as in many other countries, it is unlikely these changes will all be rolled back after the crisis passes, particularly because Russia’s crisis response has focused so intensely on the introduction and integration of new technology (such as thousands of security cameras) that would go to waste if dismantled after the outbreak.

What, specifically, are they putting in place? A map built on phone and credit card information, tracking the locations of infected and reported cases, similar to the systems being built in other nations, including the United States. The Russian map goes a step further, however, aiming to alert the friends and family of those who have been quarantined to stay away, and also aiming to automatically quarantine anyone who has been within twenty meters of a quarantined person for at least ten minutes. The ripple effect of this system within a network demonstrates the vast amount of information about people’s interpersonal webs the government is drawing on. Indeed, analyzing the social networks of the infected is a key point of the Russian containment program. For example, a Chinese woman who flew to Moscow from Beijing was promptly tested in February. The test came back negative, but not before the government had notified and collected data on all six hundred inhabitants of the building she lived in, as well as her friends and the taxi driver who had taken her home.
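To make that last rule concrete, here is a minimal sketch, in Python, of how a proximity-and-duration check over location pings might work. This is only my own illustration of the twenty-meter, ten-minute threshold described above, with made-up data formats and function names- it is not Russia’s actual system, whose internal details have not been published.

    from math import radians, sin, cos, asin, sqrt

    # Hypothetical ping format: (unix_timestamp_seconds, latitude, longitude).
    # Assumes both people's locations are sampled at the same moments (say, once a minute).

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters (haversine formula)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6_371_000 * asin(sqrt(a))

    def should_quarantine(q_pings, o_pings, radius_m=20, min_seconds=600):
        """Flag the other person if they spent at least min_seconds within
        radius_m of the quarantined person, accumulated across the whole track."""
        close_time = 0.0
        aligned = list(zip(q_pings, o_pings))
        for (q_prev, o_prev), (q_cur, o_cur) in zip(aligned, aligned[1:]):
            prev_close = distance_m(q_prev[1], q_prev[2], o_prev[1], o_prev[2]) <= radius_m
            cur_close = distance_m(q_cur[1], q_cur[2], o_cur[1], o_cur[2]) <= radius_m
            if prev_close and cur_close:
                close_time += q_cur[0] - q_prev[0]  # seconds between samples
        return close_time >= min_seconds

Even a toy version like this makes the privacy cost obvious: to run it at all, someone has to hold continuous, identified location histories for both people.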

Other new developments include a massive network of security cameras and software programs with facial recognition capabilities. Moscow’s current 170,000-camera system has allegedly already caught over two hundred people violating self-isolation. In some cases, individuals had been outside for less than half a minute before being reported. Moscow’s police chief is currently installing 9,000 more cameras, and hopes to add as many more as necessary until “there is no dark corner or side street left”. The growing omnipresence of constant closed-circuit camera surveillance is an issue that has been on the rise for a while all over the world, but this pandemic is giving it the public-relations boost it needs to be seen as a necessary measure for societal health. Even after the current crisis ends, facial recognition, universal cameras, and location-tracking developments in Russia will only expand.

On a largely unrelated note, Russia has continued to meddle in elections around the world, after having been found responsible for meddling in many previous national affairs, most notably the 2016 US election and possibly the 2016 Brexit referendum. Just last month, a classified briefing was revealed warning the White House that Russia is currently meddling in the 2020 elections in favor of the re-election of Donald Trump. While most of the 2016 campaign was centered in Russia itself, much of the 2020 influence campaign has been outsourced to Ghana and Nigeria, aiming to promote social unrest and, primarily, racial division among Americans. Facebook recently discovered and closed a network of Facebook and Instagram accounts and pages created for this very purpose, most of which purported to be operated from within the United States. Altogether, these pages created content regularly seen by hundreds of thousands of users- perhaps millions. Though this particular network has been shut down, this is only the beginning, as the world prepares for the 2020 elections. Meddling is more than direct tampering with voting equipment- it also includes mass propaganda, in favor of unrest or of any particular government, spread on behalf of a foreign government. Staying educated about this is essential for the American voter, as well as anyone consuming online content with any social, civic, or political bent.

Here are the sources used for this blog post:

https://www.cnn.com/2020/03/29/europe/russia-coronavirus-authoritarian-tech-intl/index.html

https://www.cnn.com/2020/03/12/world/russia-ghana-troll-farms-2020-ward/index.html

Civic Issues 4: Coronavirus Updates Around the World

The last time I wrote a blog post about the coronavirus, the outbreak was a far-away, theoretical thing overseas. Sure, watching the outbreak was horrible, but we didn’t really get it.

With school out for the rest of the semester, more US deaths than 9/11, and the return to normal, non-quarantined society months away, we all get it now.

That being said, I know absolutely everyone is sick of talking about this pandemic, so this will probably be my last blog topic on the subject. As I mentioned in my last blog post about the coronavirus, an epidemic is a colossal opportunity for governments and corporations to overstep privacy boundaries forever. Often, this sort of situation raises the question of the merits of technological privacy when compared to saving lives and preventing spread of disease.

Of course, consumer data is often used in day-to-day life to prevent disease spread. For example, Google provides information about who is searching for symptoms of many different diseases, so healthcare professionals can use swells in search data to predict and prepare for swells in reported cases. However, the key detail here is something called differential privacy: datasets may be shared so that people can find and use patterns in the data, as long as individual users are not identifiable. The lack of regard for differential privacy is what makes many coronavirus measures different from the day-to-day data use we are used to. A national emergency is not a time when privacy policy tends to make progress in government, but it’s still important for the general public to be aware of these changes. In this blog post, I’ll touch on what the US is doing about coronavirus that may be questionable when it comes to technological ethics, as well as specific corporate examples (Zoom, Google), and how other parts of the world are incorporating worrying new strategies.
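To give a concrete sense of what differential privacy means in practice, here is a minimal sketch, in Python, of the standard Laplace mechanism for releasing a count. The symptom-search example and the epsilon value are my own illustration- this is not Google’s actual pipeline, just the textbook technique: random noise is added before a count is published, so nobody can confidently infer whether any one person is in the data.

    import random

    def dp_count(true_count, epsilon=0.5):
        """Release a count with Laplace noise calibrated to a sensitivity of 1
        (one person can change the count by at most 1). A smaller epsilon means
        more noise and stronger privacy, at the cost of accuracy."""
        # The difference of two exponential draws with rate epsilon is Laplace(0, 1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return max(0, round(true_count + noise))

    # For example: publish roughly how many people in a county searched for "fever"
    # today, without letting anyone work out whether a specific person did.
    print(dp_count(1250))

The published number is still useful for spotting a swell in searches, but it no longer pins down any individual.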

In the US, the coronavirus response has centered around many different technological developments. Silicon Valley has taken this moment to praise the United States’ relaxed position on technological regulation, arguing that this has allowed for swift innovation and development of solutions. However, the corporations swiftly building these solutions are full of ethical blunders (more on that below), and even the government’s response is, at times, morally challenging. The US government is sourcing cell phone location data to build maps of where cell phone users are in each state. The CDC could use this information to track compliance with stay-at-home orders, meaning that the data are likely not handled in alignment with differential privacy. The data are taken from the mobile advertising industry, not from cell service providers themselves. The European Union has a similar strategy, and has shared information from its maps with several nations. Israel has also taken similar steps, while South Korea has made its patient map, compiled from geolocation, surveillance footage, and credit card transactions, public. In Taiwan, an “electronic fence” system automatically alerts authorities when a quarantined person strays too far from home, while, in China, authorities get regular information on the location of most cell phone users in quarantined areas. The US government is supposedly in talks with tech companies such as Google and Facebook in order to source more location data.

While this data may be helpful for tracking areas of infection and violations of stay-at-home orders, is it ethical? Historical cases have established that using cell phone location data in criminal investigations without a warrant may be lawful, because the information one gets from a cell phone location is similar to the information one might obtain simply by physically following someone around a public space. However, this is a non-criminal scenario, at a time when most people are supposed to be hidden away in their private homes. In my mind, the analogy to the physical world looks dicey here, amounting to something that looks more like stalking than simply following someone around a public space. Again, because privacy policy development is not currently a priority, and it is unlikely that governments will relinquish all control of this data after the crisis is over, the ramifications of this wave of cell phone tracking are ethically questionable.

In Europe, where a culture of more stringent privacy policies prevails, technologically invasive steps are facing some resistance. In Poland, for instance, people returning from abroad are required to use an apparently faulty app called Home Quarantine, which takes their personal details, requests selfies, and tracks their location. The country’s data protection authorities had not been informed about the application.

In corporate news, everyone’s using Zoom now. Which was fine until it wasn’t. Zoom’s privacy policy was not nearly as stringent as it ought to have been, and its mass usage suddenly put it in the spotlight. The company updated the policy this month, but there are still concerns. Lack of clarity about Zoom’s “attention tracking” feature, and about what exactly screen-sharing hosts can see of meeting members’ activity, has made students and employees alike nervous. Though Zoom claims to be end-to-end encrypted, Zoom video calls can still be seen by the Zoom corporation, meaning they are not truly end-to-end encrypted. Just today, a class-action lawsuit surfaced because Zoom was sharing the data of users of its iOS app with Facebook without revealing this in its privacy policy. And, finally, “Zoombombing” was briefly in the spotlight, as pranksters across the globe simultaneously realized how easy it was to guess active meeting numbers at random and join meetings uninvited to disrupt them.

And yes, though the president did garble the message originally, Google really is making a coronavirus testing website, which asks a number of diagnostic questions and then directs a lucky few to a location where they can obtain the actual medical test. The catch? You have to have a Google account. This might not seem like a big deal, but tying this vital health information to a Google account is Google’s latest move in amassing health and bioinformatics data. I’ll touch on this issue in more detail in a future blog post. For now, know that Google is recording vital testing information while getting users to sign up for its services and begin contributing a lifetime of health-related and other personal data.

Well, I’m far over wordcount and I’ve hit a variety of topics. Hopefully, I’ve clarified what’s happening for someone out there. Any reader is entitled to their opinion regarding the ethics of the current move away from privacy in the midst of a pandemic. What’s essential is that everyone is informed with accurate, clear information, and is able to make those decisions and voice those opinions meaningfully. I’ve done my best to educate first and voice my opinions second.

Here are the sources used in informing this blog post:

https://www.vox.com/recode/2020/3/31/21201019/zoom-coronavirus-privacy-hacks

https://foreignpolicy.com/2020/03/30/google-personal-health-data-coronavirus-test-privacy-surveillance-silicon-valley/

https://www.cnn.com/2020/03/29/europe/russia-coronavirus-authoritarian-tech-intl/index.html

https://www.reuters.com/article/us-health-coronavirus-europe-tech-poland/in-europe-tech-battle-against-coronavirus-clashes-with-privacy-culture-idUSKBN21D1CC

https://www.theverge.com/2020/3/29/21198158/us-government-mobile-ad-location-data-coronavirus

Civic Issues 3: Data Privacy and the Coronavirus

As the burgeoning technological and manufacturing sectors in China fuel an economic boom, the growing middle class within the nation is rapidly adopting a variety of technologies. However, even as Chinese technologies race ahead of American progress, their developers and distributors are guided by the hand of an authoritarian government. Historically, individual privacy protections for technology in China have been practically nonexistent, going beyond the inadequacy and technological illiteracy that hamper US policymakers and into explicitly politically motivated restrictions on technological use and implementation.

One well-known example is China’s infamous “Great Firewall”, which restricts internet access across the nation. During times of conflict and public dissent, when social media usage by protesters cannot be contained, access to the entire internet within a region may be blocked off. Less dramatic than this is the constant filtering of Chinese internet and social media functions. Topics that the Chinese authoritarian government does not want people talking about or searching for yield no search results and cannot be posted on social media. These topics range from the obviously political (for example, the famous photographs of the tragedy in Tiananmen Square) to the seemingly benign (content relating to Winnie the Pooh has been blocked after people started pointing out that Chinese president Xi Jinping supposedly looks like the cartoon bear). Disobeying these protocols in China has real-world consequences, as the nation arrests thousands of people each year for “internet crimes” and puts them in prisons and detention camps that the outside world knows very little about. During the first 10 months of 2019, for example, the Chinese government arrested 60,000 suspects on charges of internet crimes as part of a renewed campaign to clean up its internet.

Another newly developing arena of Chinese government use of the internet to control the behavior of the masses is the social credit system it has been experimenting with and slowly implementing across regions of the country. Such systems assign scores reflecting an individual’s trustworthiness and respectability by measuring a variety of social and legal infractions, such as traffic violations.

All this demonstrates the way China’s government is unafraid to explicitly shape and direct the internet. So when the question of big data and individual privacy is raised, it’s no surprise that the Chinese government handles vast quantities of personal data directly. Just as there is no cultural expectation of a free internet, there is also no cultural expectation of individual privacy.

The use of big data to track diseases is a promising arena. In fact, it’s already implemented all over the world in several ways; American health authorities, for example, watch Google Trends to see who is searching for symptoms of a particular disease in order to map its spread. However, the government ideally does not have the individual, identified information of each person in such a situation. Instead, the goal is differential privacy: looking at a corpus of data for patterns without connecting them to individuals. Though it might seem like an alarming epidemic would make this sort of privacy less of a priority, there are benefits to maintaining it even in such situations. For example, the United States frequently conceals the identities of victims of high-profile diseases, revealing only the general metropolitan area they live in. There are many reasons for this, one being the simple truth that making individuals identifiable under media scrutiny creates an environment in which other victims may hesitate to report their illness and be subjected to similar scrutiny.
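As a rough illustration of that kind of coarsening- a minimal sketch with made-up case records, not any health agency’s real reporting pipeline- individual cases can be published only as per-metro-area counts, and areas with very few cases can be suppressed entirely so that “one case in a small town” doesn’t effectively identify someone.

    from collections import Counter

    def metro_level_counts(cases, min_count=5):
        """cases: list of dicts like {"name": ..., "metro_area": ...}.
        Return per-metro-area case counts, dropping names entirely and
        suppressing any area with fewer than min_count cases."""
        counts = Counter(case["metro_area"] for case in cases)
        return {metro: n for metro, n in counts.items() if n >= min_count}

    cases = ([{"name": "redacted", "metro_area": "Greater Pittsburgh"}] * 7
             + [{"name": "redacted", "metro_area": "State College"}] * 2)
    print(metro_level_counts(cases))  # {'Greater Pittsburgh': 7}; State College is suppressed

Aggregation like this is cruder than formal differential privacy, but it captures the same spirit: publish the pattern, not the person.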

With all this context, it likely comes as no surprise that China has not made differential privacy its first priority in using data to try to model the spread of the current coronavirus outbreak, COVID-19. In some parts of China, citizens buying fever and cough medications currently have to register with their real names so that authorities can follow up with them about their illness. In other parts, full names are required to board the subway. Verifying one’s identity by scanning a QR code is now required for buses, trains, and taxis across much of the country. These sorts of developments make it much easier to pinpoint who is spreading the disease, where hotspots are developing, and who may be attempting to disobey quarantine orders. However, it’s jarring to consider that, in such a system, the government knows one’s approximate whereabouts at all times, especially given the slim likelihood that the system would be dismantled entirely after the outbreak. The epidemic seems to be an excuse, according to some, for the government to collect even more data from citizens, barring them from transport if they refuse to comply. This data, combined with the information the government already has on all citizens, creates an unsettling corpus.

How would people respond to a similar outbreak in the US? There might be more resistance due to a cultural expectation of freedom. However, US data collection, though far behind China’s transgressions, is a topic of fierce legislative debate, weighed down by much misunderstanding. How might the US handle a situation like this? What are China’s motives with this development?

Sources cited in this post:

https://www.scmp.com/tech/apps-social/article/3052232/coronavirus-accelerates-chinas-big-data-collection-privacy

https://www.chinadaily.com.cn/a/201911/15/WS5dcdf2ada310cf3e3557777c.html

 

Civic Issues 2: Keeping US Tech out of China

For years, the current president and his administration have harped on the dangers of China eclipsing the United States in numerous forms of industry. Perhaps the greatest of these supposed threats is China’s burgeoning technological sector. The tone of panic and the us-versus-them mentality of many lawmakers discussing technological globalism raise some questions. In particular, how much of this protectionism is justified? More importantly, is it an effective step towards profit and progress, or is clueless emotional sentiment sending us backwards?

These questions have been on the minds of many readers this week, as the Commerce Department considers a new set of rules that would allow the United States government to bar transactions between American and Chinese technology firms. These rules won’t just affect US tech companies attempting to procure parts from China; they also impact American firms seeking to export products abroad. More specifically, the department is considering measures such as limiting the number of export licenses that may be held by a company that sells products to, or shares intellectual property with, China. A variety of US technology firms are responding with increased panic to the notion that the government might cut them off from suppliers of parts at any point, without notice or reason beyond vague protectionism. Though Trump’s administration has been idly discussing similar proposals for months, the newly proposed rules have regulatory force behind them that previous discussions of the issue lacked.

In turn, foreign firms are starting to shun American firms and their products, fearing that these rules, if put into place, would prevent their American suppliers from creating their technological products.

In particular, this month, Trump has considered restricting sales of aircraft parts to China. This discussion and potential policy is part of a larger effort on behalf of the Trump administration to keep “sensitive information” out of China. This concern about vaguely defined trade secrets is the reason high-tech industries in particular have borne the brunt of this scrutiny. General Electric, for example, might no longer be able to sell jet engines to Chinese aircraft manufacturers, sales it currently makes as part of a larger multinational aerospace supply chain.

Though these rules might have been drafted in order to keep business within the United States, ironically, the threat of them is causing businesses to move more and more of their research and development facilities to locations outside the United States, in order to guarantee timely, uninterrupted access to trade with China in an uncertain future. Investment and planning for new development is increasingly being directed to locations outside the United States, beyond the reach of these rules.

In fact, the very jobs Trump’s administration may be trying to protect, in the microchip industry and other similar manufacturing sectors of the tech industry, are most threatened by these rules. Because the market for their products is being threatened, US companies are accelerating the speed at which they transfer manufacturing jobs to facilities overseas. In turn, this lost US manufacturing revenue no longer funds research and development within the United States. Individual talented scientists and researchers gravitate towards company centers in other nations with less stringent trade regulations. Though this brain drain might seem like a small issue at first, when combined with the redirected flow of investment away from the United States, it is nothing to sneeze at.

The most threatening aspect of the rules, to the industry, is how broad and far-reaching they are. According to the New York Times, “The proposed rule would allow the commerce secretary to block transactions involving technology that was tied to a ‘foreign adversary’ and that posed a significant risk to the United States.” This is a rule so vague that companies have every right to be worried about how it might be applied. It follows a history of vague and threatening trade restrictions involving China, such as last year’s action against the Chinese telecommunications company Huawei, which allowed the US government to block the purchase of technology designed by a “foreign adversary”. With rules this vague already in place, it is possible that technological protectionism could take effect in trade with any number of countries.

Why is it that, despite the outcry from major corporations, policies and rules promoting blind protectionism continue to be developed? What makes people so passionately drawn to economic protectionism and fear around technology? What sorts of misunderstandings drive this fear, and how could our society and government clear them up in order to take economically productive steps in the future? These are all questions to ponder in relation to this issue.

 

Sources for this post:

Civic Issues 1: Technological Illiteracy and the Iowa Caucus

This semester, I plan to write about issues of technology, and particularly privacy in politics and public policy. Individual privacy is an issue that is increasingly prevalent in this technical age, with more and more data created and stored about everyone at all times. There is so much misinformation and ambiguity clouding this issue, both among policymakers and among ordinary civilians. I hope to increase my literacy about these issues and spread an interest in remaining up-to-date on these topics.

Though it’s not every day that a major privacy failure headlines the news, at this point it’s probably at least a monthly occurrence. This week, I’m going to reflect on the Iowa Caucus disaster from the perspective of privacy risks and the technological illiteracy rife in politics today.

For those who don’t know: the results of the caucus have been delayed and muddled by the use of an app, developed by a consulting group called Shadow for the Democratic National Committee, as part of the DNC’s attempt to project technological prowess and simplify the caucus process. But oh, if only it were so simple. The app ended up malfunctioning, spitting unhelpful error messages at thousands of users. The DNC had attempted to use what they called “security by obscurity”, not releasing any details about the app, the group that built it, or the security vetting it had gone through before it was used for the caucuses.

Here are some facts we know now about the app and its developers. The app was developed in the two months before the caucuses, and the consulting group was paid about $60,000, much less than expected for the development of any decently functional app at such a scale. The app was so rushed that it was distributed through two different beta-testing platforms, rather than normal distribution platforms, which made the process much more complicated for users. In fact, the consulting group did not even pay for the full plans of the beta-testing software, relying on the limited free versions for the rollout of the app. The instructions for the app and its improperly explained two-factor authentication system left many users confused, with some of them sharing pictures on social media containing sensitive voter information. Shadow’s past ventures have been rejected by various political groups, including the campaign of Joe Biden, which said the group “did not pass our security checklist”. Finally, when volunteers saw the disaster coming ahead of the caucuses, the Democratic National Committee did not adjust its plans. All this suggests not only incompetence on behalf of the developers, but also negligence, ignorance, and a willingness to compromise security for a tech-savvy image on behalf of the DNC.

How does something like this happen, and how is there not enough accountability in government to condemn those responsible? The answer is the general public’s lack of understanding of how technology works, and of when it should and shouldn’t be trusted with something as essential as elections. Though this news event is not directly on the topic of my blog, I believe it is a good introduction to why technological literacy is an essential part of political discussions of civic issues. Though there were paper backups for the caucus information, the entire fiasco caused public unrest and distrust of the political system’s handling of technology, and this is certainly not the first time that has happened. With such a precarious setup, there was potential for so much more to go wrong. Even with no actual malicious actors, this lack of literacy is a threat to information security. The surrounding conspiracy theories, confusion, and individual leaks of information will continue to make the process more complicated than it needs to be, and the delay in caucus results only raises political tensions. In the coming months, I hope to discuss some of these relevant issues, in order to make myself a more responsible citizen.

Source for facts on the Iowa Caucus app: https://www.theverge.com/2020/2/5/21123337/iowas-caucus-fracas-tech-literacy