How Realistic is Drone Delivery?

In 2013, Amazon announced that it would begin working toward a new type of delivery service: delivery by drone (Austin, 2021). Almost a decade later, however, there is still no sign of this high-tech option being available to the public. Was Amazon's founder, Jeff Bezos, in over his head? How realistic is drone delivery, and how achievable is it? Between potential job losses within the industry, overly ambitious expectations, and a host of unsolved problems and missing resources, drone delivery services have been built on an array of miscalculations.

One of my personal doubts about the technology is the immediate risk of damage from incorrect calculations and misdirection. What would happen if there were physical obstacles between the drone and its intended destination? How easy would it be for a bystander to interfere with a delivery made by a vehicle that involves no human interaction? What safeguards would stand between a package and theft, destruction, natural disaster-related obstacles, or technical malfunctions? Even though drone delivery could be a huge step for the industry and very convenient, it remains a highly flawed concept with a very high risk of complications.

Austin, P. L. (2021, November 2). Whatever happened to Amazon’s Drone Delivery Service? Time. Retrieved February 13, 2022, from https://time.com/6093371/amazon-drone-delivery-service/

Google v. Oracle and the Copyrightability of APIs

   Last spring, Google and Oracle concluded a decade-long battle in the Supreme Court over whether Oracle owned a truly copyrightable piece of software. The ruling may have gone unnoticed amid the pandemonium of the pandemic, yet it held very large implications for other tech companies and arguably even the public. Google and Oracle are two tech giants that dominate the software industry, and both offer products that deal with something called an API (Application Programming Interface), the piece of software at the center of the case. Sun Microsystems, which Oracle later acquired, created Java, one of the leading programming languages, and Google recognized how useful this existing language would be in creating its Android operating system. Google intended to build an easy-to-use, open-source system and sought to purchase a license to use the Java API. Google had radically different views on how the API should be used: it wanted the platform to be interoperable, meaning code could be "written once and used anywhere." Oracle did not agree with Google's open-source vision and broke off negotiations, so Google decided to pursue its own solution. In doing so, it copied roughly 11,000 lines of code (out of millions), which Oracle took offense to, claiming that Google had infringed its copyright. Oracle promptly filed a lawsuit, and after the lower appellate court ruled in Oracle's favor, the case went up to the Supreme Court.
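
To make the distinction at the heart of the case concrete: the copied lines were "declaring code," the method names, parameters, and organization that tell programmers how to call into the Java API, as opposed to the "implementing code" that actually does the work, which Google wrote itself. The sketch below is only an illustration in that spirit, loosely modeled on a method like java.lang.Math.max; it is not the actual code from the lawsuit.

```java
public final class MathLikeApi {

    // "Declaring code": the signature programmers already know and call.
    // This is the kind of structure Google reproduced (roughly 11,000 lines of it)
    // so existing Java developers could write Android apps without relearning names.
    public static int max(int a, int b) {
        // "Implementing code": the logic underneath the declaration.
        // Google wrote its own implementations rather than copying Oracle's.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        // A caller depends only on the declaration, which is why keeping
        // the same declarations preserves interoperability.
        System.out.println(MathLikeApi.max(3, 7)); // prints 7
    }
}
```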

   The Supreme Court ended up ruling in Google's favor in a 6-2 majority decision. You may be wondering what all of this court drama has to do with the software world. The implications are larger than they immediately appear. The Court's conclusion that reusing this kind of functional interface code was permissible meant that the ENTIRE tech world could continue to create freely and creatively without fear of litigation, which might otherwise dissuade software companies from building anything at all because of high legal fees. Had the case been decided in Oracle's favor, the entire tech world would have been at risk of having to change the way APIs are used and licensed, a very big issue given how essential and prominent these API systems are today. Long story short, Google's victory allows the public to innovate freely and create software without worrying about being forced to build systems that are innately incompatible with other common systems. This was a win not only for the tech world, but for the idea of capitalistic innovation in one of the fastest-growing industries.


https://www.bbc.com/news/technology-56639088


Is the “Google effect” helpful or harmful?

The increasing accessibility we have to the Internet has been very beneficial for research and learning purposes, especially for students. But has it consequently impacted our cognitive abilities? Research shows that "the average number of Google searches per day has grown from 9,800 in 1998 to over 4.7 trillion today" (Google Annual Search Statistics, 30 Apr. 2013). This statistic reflects the increasing convenience of the Internet, as well as a possible decrease in cognitive activity. If we have ready answers to roughly any question we might ask, why would we ever challenge ourselves to find our own answers through experience? Could this mindlessness also be affecting our memory? I have personally relied heavily on the Internet, specifically Google, to answer my questions, both personal and academic, and have noticed that I lack the skills to answer questions without its help. Similarly, retaining information seems almost obsolete when we have access to such a vast variety of it; as the article notes, "If we rely on Google to store our knowledge, we may be losing an important part of our identity" (Sparrow, Liu, & Wegner, 2011, via Academic Earth).

https://academicearth.org/electives/internet-changing-your-brain/ 

How Does Our Misuse of AI Affect Its Success?

In March of 2016, Microsoft released an Artificial Intelligence (AI) experiment on Twitter: a bot account under the handle @TayandYou. It was capable of generating captions in "meme" format, responding to other Twitter users, and more. Tay was one of the first attempts of its kind at applying "conversational understanding," mimicking the language style of a teenage girl from our generation. Tay was programmed merely to absorb the language patterns of those who interacted with it and apply them in its responses and content.

An article stated that, "given that this is the Internet, one of the first things online users taught Tay was how to be racist, and how to spout back ill-informed or inflammatory political opinions" (Perez, March 2016). Shortly after its release, Tay was more than equipped with various offensive "opinions," and although Tay was not intentionally making these hurtful and controversial statements, users were being harmed by the language, and the bot was consequently shut down.

Similar to a parrot, Tay was simply using the content it was provided with in order to communicate with its users. Tay was not aware that these statements carried any backhanded, aggressive, or even threatening intent, as the system did not have the capability of recognizing that level of emotion. Although this type of AI might be able to communicate safely with language filters and further adjustments to avoid instances like these, Microsoft has since kept Tay offline and has not made any further version available to the public.
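
To see why that design is so easy to derail, here is a minimal toy sketch of a "parrot" bot, assuming the simplest possible learning rule: store whatever users say and replay it later. This is my own hypothetical illustration, not Microsoft's actual system, which was far more sophisticated; the point is only that a model which echoes unfiltered input will echo abuse just as readily as jokes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A toy "parrot" bot: no understanding, it only replays phrases it has absorbed.
// Hypothetical sketch of the general failure mode, not Tay's real code.
public class ParrotBot {
    private final List<String> learnedPhrases = new ArrayList<>();
    private final Random random = new Random();

    // Every incoming message becomes potential future output, with no filter.
    public void absorb(String userMessage) {
        learnedPhrases.add(userMessage);
    }

    // Replies are just recycled user input, so toxic input yields toxic output.
    public String reply() {
        if (learnedPhrases.isEmpty()) {
            return "hello!";
        }
        return learnedPhrases.get(random.nextInt(learnedPhrases.size()));
    }

    public static void main(String[] args) {
        ParrotBot bot = new ParrotBot();
        bot.absorb("memes are great");
        bot.absorb("an inflammatory political opinion");
        System.out.println(bot.reply()); // may echo either phrase, good or bad
    }
}
```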

Perez, S. (2016, March). Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism [Updated]. TechCrunch.

https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

Where Did Google Flu Trends Go Wrong?

In recent years, Big Data has become increasingly relevant, especially in terms of the pandemic and contact tracing. Even at our University, we have been using a system that tracks our contact with other students and stores data based on our COVID-19 exposure and testing. Google, however, first released this type of concept back in 2008, in a paper in the journal Nature. The idea relied on Americans consistently using Google's search engine to answer flu-related questions, on the assumption that such searches were confirmation that the user was experiencing flu-like symptoms.

Google believed that, "Rather than relying on disease surveillance used by the US Centers for Disease Control and Prevention (CDC) – such as visits to doctors and lab tests – the authors suggested it would be possible to predict epidemics through Google searches" (Royan, Oct. 2013). However, though this may seem like a foolproof concept in theory, it suffered from many flaws that led to inaccurate data. Google had "successfully" been able to predict flu trends before the CDC released its figures, which led people to believe that the tool was far more accurate and convenient.
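
The statistical idea underneath can be sketched very simply: treat the volume of flu-related searches as a proxy signal, fit it against historical CDC flu rates, and then estimate current flu activity from new search counts before official reports arrive. The example below is a minimal, assumed version of that approach, a one-variable least-squares fit on invented numbers; Google's real model combined many search terms and was far more elaborate.

```java
// Minimal sketch of the Flu Trends idea: fit past CDC flu rates to
// flu-related search volume, then "nowcast" this week's flu level from search data.
// All numbers are invented for illustration; the real model used millions of
// candidate queries, not a single series.
public class FluNowcast {

    // Ordinary least squares for y = a + b * x.
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }
        double b = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double a = (sumY - b * sumX) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Weekly flu-related search volume (proxy) and CDC-reported flu rate (truth).
        double[] searchVolume = { 120, 150, 200, 260, 310 };
        double[] cdcFluRate   = { 1.1, 1.4, 1.9, 2.5, 3.0 };

        double[] model = fit(searchVolume, cdcFluRate);

        // Estimate this week's flu level from search data alone,
        // before any official CDC report is published.
        double thisWeekSearches = 280;
        double estimate = model[0] + model[1] * thisWeekSearches;
        System.out.printf("Estimated flu rate: %.2f%n", estimate);
    }
}
```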

The Google team had assessed the accuracy of their model prior to release by comparing its output to disease reports from the 2007 epidemic. "The predictions appeared to be pretty close to real-life disease levels. Because Flu Trends would [be] able to predict an increase in cases before the CDC, it was trumpeted as the arrival of the big data age" (Royan, Oct. 2013). The results of this study were shared with the general public, fueling their trust in the system.

Following the 2009 pandemic, it became evident that the Google Flu Trends tracker had "overestimated the size of the influenza epidemic in New York State" (Royan, Oct. 2013). People were starting to question whether the tracker was actually accurate at predicting future trends, or was simply extrapolating from previous information in the hope of finding a pattern. Algorithms can be very difficult to manage and maintain, and they should not be used in place of true research and data.

https://theconversation.com/googles-flu-fail-shows-the-problem-with-big-data-19363