Category Archives: Civic Issues

Privacy: Does It Exist Online?

Privacy concerns have been synonymous with online activity ever since the internet was invented. In recent years, the general public has become aware of the constant tracking that goes hand in hand with a digital footprint. Still, most users do not realize the extent of this tracking.

“Every time you interact with the company, you should expect that the company is recording that information and connecting it to you.” – Elea Feit, Wharton Customer Analytics [1]

Any interaction–swipe, click, text entry, etc.–with a website can be collected as data. What a company chooses to do with that data is often unregulated. Read on to learn about common data privacy concerns.

Audio Recordings

Many people are afraid that certain devices are always “listening.” This isn’t entirely false. Virtual assistants and smart home devices such as Alexa and Google Assistant can collect audio recordings. Users can opt out of this in most cases, and Apple announced in 2019 that Siri would stop retaining these recordings by default. Still, the capability to listen and save exists.

Cookies & Personalized Advertising

Cookies are pieces of information that a website saves in your web browser. Companies use them to track your visits and interactions with a site in order to target personalized ads. There are two types of cookies: single-session and persistent (multi-session) [2]. Single-session cookies are deleted once you leave the website, while multi-session cookies stay in your browser and on your hard drive between visits to a site. Sites can also use web beacons and pixel tags, which track users’ emails and content access. By law, sites are required to inform you that cookies are being used, but it is often more difficult to opt out of cookies than to accept them. While cookies are not inherently dangerous, they can be if the company has malicious intent regarding data management. If you find yourself on a sketchy site or one you do not visit often, it is best to disable cookies.
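To make the single-session vs. persistent distinction concrete, here is a minimal sketch using Python’s standard http.cookies module (the cookie names and values are made up for illustration). The only mechanical difference is the Max-Age (or Expires) attribute: without it, the browser throws the cookie away when it closes; with it, the cookie survives on disk between visits.

```python
from http.cookies import SimpleCookie

# A session cookie: no Expires or Max-Age, so the browser discards it on exit.
session = SimpleCookie()
session["visit_id"] = "abc123"

# A persistent (multi-session) cookie: Max-Age keeps it on disk between visits.
persistent = SimpleCookie()
persistent["tracker"] = "user-42"
persistent["tracker"]["max-age"] = 60 * 60 * 24 * 30  # 30 days, in seconds
persistent["tracker"]["path"] = "/"

print(session.output())     # a Set-Cookie header with no expiry attribute
print(persistent.output())  # a Set-Cookie header with Max-Age and Path set
```

A site that wants to recognize you weeks later just sets that one extra attribute, which is why persistent cookies are the workhorse of personalized advertising.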

Selling Data

Companies can sell data collected by their websites to third-party data brokers that want to get a sense of certain customer bases. While this can keep the website or service that the company offers free, many users do not realize that this exchange is happening. Google and Facebook, for example, are known to “share” (sell) data with outside advertisers. It can be unsettling to think that a third party could obtain extensive data on one person by buying it from various sources. Even if data is not sold to a third party, it can be used internally for customer analysis purposes.

Current Laws

In October 2019, California set a precedent for online privacy. Amendments were added to the California Consumer Privacy Act (CCPA) to regulate the collection, management, and sale of data by entities interacting with California residents. Since many companies doing business in California also do business nationally, this law increased protection for residents of all states. However, this law is not as strong as it could be. Compared to the European Union’s General Data Protection Regulation (GDPR), the CCPA falls short in scope and active enforcement. The GDPR requires the appointment of a data protection officer and imposes fines on violating companies, while the CCPA simply gives citizens the power to sue, which is less effective since many people are not well-versed enough in data privacy to spot a violation.

Ways To Protect Your Privacy Online

  1. Switch from Google Chrome to a privacy-focused browser such as Brave.
  2. Start using a virtual private network (VPN) to encrypt your internet connection so that personal data is more difficult to intercept. (ProtonVPN is a good free option that, unlike many other free VPNs, does not sell customer data.)
  3. Opt out of allowing websites to share your data.
  4. Opt out of cookies when you can.

With all of these concerns, it is important to become digitally literate and learn about data privacy in order to protect ourselves. Now, with concerns about AI on the rise, it is more necessary than ever to take power back as individuals and learn about the technology we have made. The tips above are a good place to start.

Sources

[1] https://knowledge.wharton.upenn.edu/article/data-shared-sold-whats-done/

[2] https://www.ftc.gov/policy-notices/privacy-policy/internet-cookies


Open-Source Intelligence: A Breach of Privacy?

Have you or your friends ever joked that you would make an excellent “stalker” because of how much information you’ve collected about a person simply by scrolling through their social media pages? If so, you’ve engaged in OSINT practices. Open-source intelligence, or OSINT, is a method of gathering intelligence (meaning “information” in this context) from publicly available sources. It is essentially a fancy term for research aimed at collecting information about a subject, usually a person. Some basic examples of OSINT include determining where a picture was taken by searching online for pictures of similar places, or googling someone’s name to see where they went to high school or whether they made the honor roll. While these examples don’t seem too concerning, tools and software are available that have increased the depth of OSINT capabilities. With this software come many concerns over the ethics of gathering such information and the moral implications of searching for certain intelligence.

Mark M. Lowenthal, former Assistant Director of Central Intelligence for Analysis and Production for the CIA, defines OSINT as “any and all information that can be derived from overt collection: all types of media, government reports and other documents, scientific research and reports, commercial vendors of information, the Internet, and so on. The main qualifiers to open-source information are that it does not require any type of clandestine collection techniques to obtain it.” [1]

Since OSINT is all about the clever use of public sources, it is designed not to violate any laws: it accesses only public information that can be found without a warrant and without “shady” practices. However, because of the increasing amount of personal information available on the internet, OSINT raises concerns about access to information that people may not want accessed.

OSINT tools today are utilized by the government to support counter-terrorism efforts. By tracking propaganda and mobilization on social media, investigators gain awareness about potential threats to national security. OSINT tools are also used by the government to detect cybersecurity attacks, organized crime, and misinformation. [2]

Many OSINT tools are available for personal use, as programmers often publish them as public GitHub repositories. This raises concerns of its own, since anyone with malicious intent can simply find a repository and access information that, while not “private,” is not intended for public knowledge.

Examples of OSINT Tools
  1. One tool often used by employers to find information on job candidates is called MOSINT. MOSINT was made by user “alpkeskin” on GitHub, and it is simple to set up and use. The program prompts the user to type in an email address and then quickly prints information related to it, such as a list of online accounts connected to the address. Employers can use this list to judge a candidate’s digital footprint.
  2. A similar tool is Maltego. Maltego searches the web to find connections between names and email addresses, aliases, companies, accounts, documents, and more, and then presents this information in a digestible format.
  3. A tool with less propensity for malicious use is BuiltWith. BuiltWith shows the user how a given application or website was made, such as the programming language and development environment used. It can also detect the libraries that the developer utilized.
  4. Similar to BuiltWith, Intelligence X finds deep information about a website. Instead of showing what a site is made with, Intelligence X stores past versions of websites and preserves past data sets, even information that was purposely deleted. It has been used to collect data from email servers of political figures, such as Hillary Clinton and Donald Trump. [3]
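For a sense of how email-centric tools like MOSINT work, here is a small sketch that uses only public information: Gravatar serves avatars at a URL derived from the MD5 hash of the normalized email address, and requesting that URL with d=404 returns HTTP 404 when no account exists. (This is one illustrative technique, not how MOSINT itself is implemented, and the email address below is made up.)

```python
import hashlib

def gravatar_url(email: str) -> str:
    """Build the public Gravatar avatar URL for an email address.

    Gravatar derives the URL from the MD5 hash of the lowercased,
    trimmed email. Fetching it with ?d=404 yields HTTP 404 if no
    account exists -- a tiny OSINT check revealing whether an email
    is tied to a public Gravatar profile.
    """
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?d=404"

# Normalization means capitalization and stray spaces don't change the result.
url = gravatar_url("  Someone@Example.com ")
print(url)
```

A full tool simply repeats this kind of lookup across dozens of services and aggregates the hits into a profile.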


Should OSINT be regulated?

OSINT is not regulated, since it implies legal access to public information. According to the ACLU, surveillance in the United States is mostly governed by the USA/Patriot Act, a post-9/11 law that increased the government’s authority to use surveillance powers in the name of national security. [4] Should citizens have the right to privacy even at a risk to national security? Should ordinary, non-government-affiliated people have access to OSINT tools?

Sources:

  1. OSINT Techniques | Legal & Ethical of Open Source Intelligence (mediasonar.com)
  2. How Government Agencies Utilize OSINT (skopenow.com)
  3. What is OSINT? 15 top open source intelligence tools | CSO Online
  4. Surveillance Under the USA/PATRIOT Act | American Civil Liberties Union (aclu.org)

ChatGPT: Computer or Human?

Over the past couple of months, the exponential growth in ChatGPT’s popularity has been matched by skyrocketing concerns over potentially dubious uses of this advanced technology.

The GPT in ChatGPT stands for “Generative Pre-trained Transformer.” Developing an artificial intelligence model involves training it on many examples of (in this case) writing, hence the “pre-trained.” ChatGPT takes typed prompts such as “write 100 words about cats” or “give me a dialogue between two people trapped in a car” and, in a matter of seconds, produces writing that is eerily similar to that of a human. The model’s use of varied sentence structure and impressive knowledge of many subjects shows just how expansive the pre-training was. ChatGPT was created by OpenAI, an artificial intelligence research and deployment company. This past November, OpenAI released a public first version of ChatGPT to gauge user opinion. Overall, the user opinion has been that ChatGPT is fascinating, useful, and terrifying. Since the model is easily accessed from a web browser and simple to use, it has amassed 100 million monthly users–an unprecedented number for an application’s first two months.
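The “pre-training” idea can be illustrated in miniature: learn statistics from example text, then generate by picking likely continuations. Real GPT models learn billions of parameters over subword tokens, but a toy bigram model (with a made-up one-line corpus) shows the shape of the idea:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- real models train on billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# "Pre-training": count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    # "Generation": pick the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" twice, "mat" only once
```

ChatGPT differs in scale and architecture (a transformer attending over long contexts rather than a one-word lookup), but the core move is the same: predict the next token from patterns in the training text.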

People have feared the capabilities of artificial intelligence for decades. Stanley Kubrick’s 1968 film 2001: A Space Odyssey, for example, is a popular sci-fi movie about a sentient form of AI that ultimately kills crew members during space travel. Alan Turing, a pioneer in the field of artificial intelligence and modern computer science, even developed a test (known as the Turing Test) in 1950 that he believed would be able to determine if a source of intelligence is artificial or human.

Does ChatGPT pass the Turing test? The test places a human judge in conversation with both a machine and a human; if the judge cannot reliably tell which is which, the machine passes. Not too complicated, right?

This is the response generated by ChatGPT when asked to “pretend to be a human”:

“I’m sorry, but as an AI language model developed by OpenAI, I am not capable of pretending to be a human. I can only provide information and respond to queries to the best of my abilities.”

The model could easily pretend to be a human given its extensive training on human writing, but it is clear that OpenAI disabled this behavior for the public trial. While it is a relief that the company at the forefront of this technology has a sense of ethics about the public use of such a powerful model, there are still plenty of unethical uses currently being explored and strengthened. According to Business Insider, people have used ChatGPT to take exams such as the US medical licensing exam and a UPenn Wharton MBA exam, and it passed in the B-/B range on both occasions. On top of this, it can write a convincing essay on just about anything.

This leads us to the big question: Will there be any jobs left for humans in the not-too-far future?

“It is likely that some jobs will be automated in the future, but it is also expected that new jobs will be created in fields such as technology and services. The exact nature of these jobs is uncertain, but it is likely that many of them will require a combination of technical and interpersonal skills. It is important for individuals to continually develop and adapt their skills to stay relevant in the changing job market.”

This is the paragraph generated by ChatGPT when asked the question above. As a computer science major, this technology both excites and scares me for the reasons given. For one, ChatGPT can write and find bugs in code, which could potentially eliminate half my career options. However, the field of artificial intelligence will likely continue to expand, which opens new doors related to computing. ChatGPT makes a good point about continuing to develop interpersonal skills. This is already a facet of modern life that has diminished due to the widespread acceptance of new technologies and decline in necessary human interactions.

Recently, OpenAI released an updated premium version of ChatGPT available through a paid subscription. The implications of a paywall are fascinating. While a paywall might prevent desperate students from turning to ChatGPT in times of need, it might also increase educational inequity by granting a writing aid only to those with the money to afford it. It almost turns into a philosophical query: can we really trust human morals under stressful circumstances?

Should there be more regulations for this technology?

OpenAI is working on an AI classifier that can tell the difference between human- and AI-written text. This seems like it could be the perfect solution to teachers’ fears about plagiarized essays. However, the feature is still in a testing phase and has low accuracy as of right now.
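OpenAI has not published its classifier’s internals, but one intuition many detectors rely on is that human writing is “burstier” (more variable) than model output. A toy heuristic, purely illustrative and nowhere near a real trained classifier, scores text by the variance of its sentence lengths:

```python
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: variance of sentence lengths, in words.

    Human prose tends to mix short and long sentences, while model
    output is often more uniform. This heuristic only illustrates the
    idea; real detectors are trained models, not one-line formulas.
    """
    # Crude sentence splitting: treat ! and ? like periods.
    cleaned = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in cleaned.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # variance needs at least two sentences
    return statistics.variance(lengths)

human = "It was late. The rain hammered the roof all night long, and nobody in the house could sleep a wink."
uniform = "The cat is small. The dog is big. The sun is warm."
print(burstiness(human) > burstiness(uniform))  # True for these samples
```

A detector built on signals like this is easy to fool, which is part of why accuracy remains low, as the post notes.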

Another concern expressed by users of ChatGPT is that the model is essentially stealing the work of others by learning from previously written passages online. Even though the model is learning the style and information from millions of online sources instead of simply copying a few, it is likely that certain snippets of someone else’s work might show up in outputs. The lines of integrity and plagiarism will continue to blur and must be redefined if this model becomes mainstream.

OpenAI also admits that responses to some requests might be problematic. The model, of course, does not have the critical thinking capabilities to consider bias or logically inaccurate information at such a specific level yet. In the future, though, the model could be trained to develop these skills, making it even more humanlike. Could ChatGPT develop its own personality and unprompted thoughts? Should this be something to fear or look forward to?

Sources:

Javascript HTTP Request Methods (openai.com)

Artificial intelligence – The Turing test | Britannica

ChatGPT is growing faster than TikTok (msn.com)

List: Here Are the Exams ChatGPT Has Passed so Far (businessinsider.com)

New AI classifier for indicating AI-written text (openai.com)