AI’s Grasp on the Art of Impersonation

Recently, I have been seeing many types of misinformation swirling around the internet, and there is a new culprit behind this societal phenomenon: artificial intelligence. The technology has been in use for years, especially in Hollywood, but access to it has only recently been opened to the general public. The UK-based company ElevenLabs has been making headlines for a tool that allows nearly complete voice imitation from very little reference audio. This new development in artificial intelligence has gotten off to a rocky start, raising ethical and moral questions because of the concerning things people have already made with it.

Author John Hendrickson recently wrote an article in The Atlantic in which he interviewed Zach Silberberg, a video creator whose AI-generated Biden clips went viral, about this controversial technology. ElevenLabs has said its AI audio technology is intended for "storytelling," and Silberberg describes his own clips as obvious comedic bits. He acknowledges, though, that the line between storytelling and disinformation or propaganda can be unfortunately thin, especially in the political realm. In recent weeks, these AI-generated, deepfake-style videos have centered on President Joe Biden. They show him claiming he has not visited East Palestine after that area's train derailment because he got lost on the island from Lost, and rambling about the 2011 movie We Bought a Zoo. While these political videos may have comedic intent and can be read as simple jokes, there is a real worry that the technology could produce dangerous impersonations that are taken at face value. The past few elections were filled with misinformation, culture wars, and propaganda that greatly influenced Americans and gave many outside observers a poor view of this country. That problem would likely only be exacerbated by ElevenLabs' new programs.

When Hendrickson brought this concern to Silberberg, he emphasized that the key to convincing AI deepfakes is that they capture the mannerisms of the person being misrepresented, such as the President's well-documented stutter. But even Silberberg acknowledges that this technology, and AI in general, is headed in a bad direction:

“My opinion is that, blanket statement, the use of AI technology is pretty bleak. The way that it is headed is scary. And it is already replacing artists, and is already creating really f*****-up, gross scenarios.”

Sure, a joke is fine, but the danger of this technology lies in the intentions behind it. We like to think that people are rational, ethical, and well-intentioned, but that naive assumption puts us between a rock and a hard place. Technology's evolution is integral to the advancement of society, yet that same evolution may be putting society at risk. As AI technology progresses and its rough edges are smoothed out, these videos and audio recordings may become indistinguishable from reality, leaving us all questioning one another.

These AI deepfake videos and audio clips go beyond politics, and some of the results of ElevenLabs' new technology are scary. Kyle Barr of Gizmodo writes that the technology is already being abused to target popular Hollywood figures. Users of 4chan, the popular anonymous English-language imageboard, recently posted various deepfakes of well-known stars. Widely shared examples include Emma Watson reading Mein Kampf and Rick and Morty co-creator Justin Roiland assaulting his wife (made after his recent criminal charge). Users also posted manipulated, fake clips of popular animated characters saying offensive and off-putting things.

Justin Carter of Gizmodo reports that many of the voice actors behind the characters being deepfaked have become outspoken about the damage this technology does to their likeness, image, and years of work. As I mentioned earlier, this manipulation of voices has been happening for years in Hollywood, with voice acting a particular target. Actors' contracts often contain clauses that essentially require them to sign away the rights to their voices, so that a limited (and lower-paid) amount of recording work can be turned by production companies into much larger amounts of media. This dynamic, a small amount of reference turned into a large amount of output, is exactly why ElevenLabs' new AI technology is so potentially dangerous.

ElevenLabs has created technology so advanced that a 15-30 second clip from TikTok, Instagram, or other media is all the reference needed to create longer audio fakes and, potentially, video fakes. Scam calls have been a part of our society for decades, but this new tech has only accelerated the threat. According to Jakob Aylesbury of eTeknix, scammers are already using ElevenLabs' capabilities to mimic the voices of elderly people's loved ones, making it easy to manipulate and confuse this vulnerable population into handing over money and resources, which is disgusting. While phone and service companies can manage these calls like any other scam call, the threat is greatly enhanced and should not be overlooked.
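To make that "small reference, large output" dynamic concrete, here is a minimal Python sketch of the kind of workflow described above. The library, class, and method names are hypothetical stand-ins (this is not ElevenLabs' actual SDK); the point is simply how little input such systems reportedly need.

from hypothetical_voice_ai import VoiceCloner  # hypothetical library, for illustration only

cloner = VoiceCloner(api_key="YOUR_KEY")

# A single 15-30 second public clip pulled from social media is the only reference required.
cloned_voice = cloner.clone_from_audio("short_clip_from_social_media.mp3")

# The cloned voice can then read arbitrary text of any length.
audio = cloned_voice.speak("Any script, of any length, in the cloned speaker's voice.")
audio.save("generated_audio.mp3")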

Perhaps the most terrifying thing about ElevenLabs' capabilities, and similar technology, is how little has been done to contain the dangers it brings to society. ElevenLabs is looking to implement account verification or manual checking of these fake videos and audio recordings, but no other action has been considered. I also think social media companies really need to start examining the effect this type of misinformation has on their platforms. Like much other propaganda, these videos have spread like wildfire on Twitter, Instagram, and TikTok; and while they can be handled like other misinformation for now, they may become harder and harder to tell apart from the real thing. As the technology is hammered out and loses its obvious tells, meaningful oversight may become impossible simply by its nature.

As I mentioned in my prior two blogs on artificial intelligence, no matter what direction this technology takes, it will have a real and life-changing effect on our society. How we approach it, and the ethics we apply along the way, may determine the survival of the human race.

*I have not included any of the videos mentioned due to their inappropriate content and intent*

References

  • Aylesbury, Jakob. "Scammers Using AI Voice Generation to Mimic the Voices of Loved Ones." ETeknix, 7 Mar. 2023, https://www.eteknix.com/scammers-using-ai-voice-generation-to-mimic-the-voices-of-loved-ones/.
  • Barr, Kyle. "AI Voice Simulator Easily Abused to Deepfake Celebrities Spouting Racism and Homophobia." Gizmodo, 30 Jan. 2023, https://gizmodo.com/ai-joe-rogan-4chan-deepfake-elevenlabs-1850050482.
  • Carter, Justin. "Voice Actors Are Having Their Voices Stolen by AI." Gizmodo, 12 Feb. 2023, https://gizmodo.com/voice-actors-ai-voices-controversy-1850105561.
  • Hendrickson, John. "The Next Big Political Scandal Could Be Faked." The Atlantic, 3 Mar. 2023, https://www.theatlantic.com/politics/archive/2023/03/politicians-ai-generated-voice-fake-clips/673270/.
  • Lajka, Arijeta, and The Associated Press. "Artificial Intelligence Makes Voice Cloning Easy and 'the Monster Is Already on the Loose'." Fortune, 11 Feb. 2023, https://fortune.com/2023/02/11/artificial-intelligence-makes-voice-cloning-easy-and-the-monster-is-already-on-the-loose/.


Will Robots Be Taking Over Our Jobs? It's Complicated

Since I was young, there has been a looming fear that technology would take over jobs. I even remember the subject being alluded to in TV shows: the title characters of the Nickelodeon show Sam & Cat often went to a restaurant with a robot serving staff. With the emergence of electronic ordering stations at fast food eateries and of ChatGPT, the question of technology taking our jobs has come up again. Of course, the recent pandemic added to this by expediting many companies' technological investments as places started to open back up. But how big a threat do robots pose to our jobs? Well, it's complicated.

The jobs most at risk of being taken over by AI are mostly white-collar, mid-to-high-level positions rather than skill-based blue-collar work. Certain tech jobs are on the chopping block simply because of their nature, and marketing and advertising positions are also threatened as algorithms overtake traditional targeted-ad methods. Interestingly enough, according to Business Insider's Mok and Zinkula, legal jobs are already being affected by AI tools that greatly shorten the research and information-gathering that goes into briefs. In addition, many jobs in the financial sector, like accountants, stock traders, and financial analysts, are being replaced by technological systems as those systems become more advanced and accurate. And as I mentioned in the last article, customer service jobs are quickly being replaced by AI systems akin to ChatGPT. There is no denying that these job losses are real concerns, but perhaps there is a big silver lining in the midst of the AI threat: the jobs this new technology has already created and will continue to create.

Business Insider's Paris Marx stresses that many of the concerns about robots, AI, and technology taking over jobs come from missing context. A 2014 analysis projected that AI could wipe out 47% of jobs by 2034, but nearly a decade later those cuts are not widely seen, at least not in the ways predicted. A more recent study from 2020 found that while these new technologies will cut about 85 million jobs by 2025, they will also create 97 million new ones. The important context is that behind the curtain of all this technological change, there are tons of people making it work. The electric self-driving car industry, a big space right now, threatened to take drivers' jobs, yet it created millions of jobs in the tech sector to monitor the new technology. Marx also notes that many of businesses' technological investments are actually meant to enhance and complement people's jobs, not replace them.

AI is often credited with making businesses more efficient, and while that may be true, these systems also often act as a necessary middleman. With record-low unemployment emerging out of the pandemic, many public-facing businesses were understaffed, and AI technology was used to fill the gap. From a customer standpoint, for those who are tech-savvy, tools like electronic ordering systems offered welcome convenience along with the ability to limit contact with others during the height of the pandemic. Marx stresses that AI is meant to help employees and customers, as technology has done since the beginning of time. This new advancement is aimed at empowering and liberating employees and customers, saving us time and increasing productivity.

This technology also undoubtedly has revolutionary capabilities. ReadWrite author Daniel Williams writes that as members of the Baby Boomer generation retire earlier, companies need to figure out how best to use this new AI technology to replace skills that have left the labor force. In certain instances, work like drywall installation, plumbing, painting, and electrical jobs can be done faster, at a larger scale, and more precisely than with traditional labor alone. As mentioned before, this technology is not meant to replace these positions but simply to enhance them. AI is becoming more creative and precise, which can fill the large gap being created as fewer people attend trade school. In Williams' opinion, AI is in no way meant to directly replace human labor.

But as this technology enhances jobs throughout the labor force, a new question has emerged: will it increase the wealth gap? As I mentioned earlier, AI and similar technology have created many new, high-paying jobs in the tech industry, with 97 million new jobs expected by 2025. The Guardian's Steven Greenhouse points out that while fears about AI taking over are likely overblown, there is a real threat that the technology could widen the wealth gap. McKinsey has estimated that a quarter of workers will see AI incorporated into some aspect of their job. Greenhouse mentions that 50-60% of companies have AI projects in the works that aim to take over many tasks now done by the upper middle class. While blue-collar jobs are not as threatened, white-collar jobs may see cuts simply because of these AI enhancements. That would leave a large gap between those in the high-paying tech sphere and above and those in lower-paying, skill-based positions. In addition, the jobs that remain are likely to become heavily commodified as AI augments them.

I think the main takeaway from this AI threat is the realization that it is simply technological progression. Since the world's beginning, people's jobs have changed, merged, and adapted as technology has progressed to make life easier and innovation possible. There is real excitement in this space, too: there is a growing understanding that AI could send us down a million different paths instead of the single inevitable, scary one we originally imagined. AI, robots, machinery, and the like have enormous potential to change society and our jobs as we know them. Will that happen? It's complicated.

References

Greenhouse, Steve. “US Experts Warn AI Likely to Kill off Jobs – and Widen Wealth Inequality.” The Guardian, Guardian News and Media, 8 Feb. 2023, https://www.theguardian.com/technology/2023/feb/08/ai-chatgpt-jobs-economy-inequality.

Marx, Paris. "Artificial Intelligence's Dirty Secret." Business Insider, 12 Feb. 2023, https://www.businessinsider.com/chatgpt-ai-will-not-take-jobs-create-future-work-opportunities-2023-2.

Mok, Aaron, and Jacob Zinkula. "ChatGPT May Be Coming for Our Jobs. Here Are the 10 Roles That AI Is Most Likely to Replace." Business Insider, 2 Feb. 2023, https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02.

Williams, Daniel. “The Impact of AI as Companies Address the Skilled Labor Shortage.” ReadWrite, 6 Feb. 2023, https://readwrite.com/the-impact-of-ai-as-companies-address-the-skilled-labor-shortage/.

The Good, the Bad, and the Ugly of ChatGPT

The internet went crazy recently when a new AI program named ChatGPT was launched. This revolutionary program uses AI to answer users' questions. While asking a machine questions has been possible since Google's creation, ChatGPT can answer them more accurately and in more detail. For example, a science teacher could ask the program, "Can you write a lesson plan on the process of photosynthesis?" and the chat would quickly reply with a detailed lesson plan. Sounds great, right? Sure. But in many important fields, this program has opened a Pandora's box of possible issues. From the good to the bad to the ugly, this technological advancement has thrown society into a conflicted spiral.
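For readers curious what that lesson-plan example looks like outside the chat window, here is a minimal Python sketch using OpenAI's API library as it existed in early 2023. The model name, prompt, and key handling are illustrative only, not a recommendation or an exact recipe.

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

# Ask the chat model for a lesson plan, much like typing the question into the ChatGPT website.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Can you write a lesson plan on the process of photosynthesis?"}],
)

print(response["choices"][0]["message"]["content"])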

The Good: 

Interestingly enough, when it came time to figure out my Civics Issue Blog, I came across a very interesting TikTok video by user "gibsonishere" about teachers using ChatGPT for good. Most people's first worry about this new technology would be academic integrity and cheating, but this teacher offered a new perspective. She considers it a tool to help students write better and more efficiently. To prove her point, she used ChatGPT to help write a letter to her school board asking for approval to use the program. She argues to her board that technology is ever-evolving and should be taken advantage of rather than shooed away in an attempt to avoid its inevitable progression. Her argument definitely brings a unique perspective to this highly contentious issue.

The argument that ChatGPT should be used as a tool is also being stressed in the business world. As businesses grow and their customer bases expand, the need for customer service grows with them. Weetechsolution.com argues that this system will greatly improve customer service technology and help companies' business recovery. Because of its intelligence, ChatGPT has the capability to be a far better tool for customer service representatives and for internal, online customer service systems, which would help both companies and their customers.

In a more general sense, ChatGPT also has the potential to increase productivity in companies and reduce expenses. Ideally, companies will be able to use this technology to provide quicker and better service to their clients while also saving on labor costs. That may not always be seen as a good thing, but the potential to improve service for so many customers cannot be overlooked. The system's capabilities, and its usefulness as a tool for so many people, show that this daunting AI technology can be used for good.

The Bad:

Of course, the first thing that must be mentioned on the bad side of ChatGPT is the set of questions surrounding plagiarism, cheating, and academic integrity. As soon as the program came to light, those in academia had genuine questions about its impact on kids' learning. Many wondered whether it would give students an incentive to cheat, be lazy, and put no genuine effort into their assignments and exams. At the collegiate level, there have already been numerous reported instances of ChatGPT being used to write papers or complete assignments (Agomuoh). This concern has been raised before as technology has progressed, but ChatGPT really is a whole new ball game. Luckily, plagiarism checkers used by teachers and professors have already started adding a "possibly AI-generated" red flag to their systems to tackle this new academic challenge.
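As an aside, one common (and imperfect) idea behind such "possibly AI-generated" flags is measuring how predictable a passage is to a language model, since unusually predictable text can be one signal of machine generation. Here is a minimal Python sketch of that idea using GPT-2 perplexity; the cutoff value is made up for illustration and is not how any particular checker actually works.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity means the model finds the text more predictable.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the tokens
    return float(torch.exp(loss))

submission = "Photosynthesis is the process by which plants convert sunlight into chemical energy."
if perplexity(submission) < 25:  # illustrative cutoff only
    print("Possibly AI-generated: flag for human review")
else:
    print("No flag raised")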

Another concern with ChatGPT is its bias. While implicit bias is not new in the technology sector, ChatGPT had one very public, concerning interaction with a user. Steven Piantadosi, a Twitter user, asked ChatGPT a series of questions dealing with race, gender, and intelligence. He got concerning results, with the program stating that white males are the best scientists and that Black men's lives should not be saved if needed, among other things. The interaction blew up and spread beyond Twitter, highlighting the program's issues. This racist and sexist bias was not hard to find and was quite apparent in simple questions. It really shows the danger of the program when used by adults, let alone children.


Akin to the bias issue, ChatGPT also has many problems with accuracy. The program is obviously very advanced and complicated, so this hiccup is expected, but it cannot be ignored. While the system holds a vast amount of knowledge, there are many subjects ChatGPT simply cannot answer correctly. Digital Trends author Fionna Agomuoh writes, "When I used the chatbot to explore my area of interest, tarot, and astrology, I was easily able to identify errors within responses and state that there was incorrect information." In addition, CNET has reportedly used this technology, and readers have found "glaring inaccuracies" in many of its articles. As I mentioned earlier, ChatGPT really excels as a tool rather than as a totally truthful answer machine.

The Ugly: 

Perhaps unsurprisingly, ChatGPT has had many capacity issues since its launch. It is not exactly an open-to-the-public service like Google; users have to create an account to use the program. In addition, users often encounter an error message stating that ChatGPT is at capacity, showing the limits of this very advanced program. While this is not uncommon with new technology, it perhaps hints that the technology is not as much of a threat as originally thought.

I also cannot finish this blog without stating the very obvious and ugly concern with ChatGPT: technology is overtaking our lives. It is striking to think how quickly we went from the creation of the computer to Google to ChatGPT and similar systems. In movies, media, and other outlets we have seen what a threat technology can be when used incorrectly, and this new AI system really emphasizes that point. Who knows where ChatGPT will take us, but one thing is sure: technological progression will be a civic issue for years to come.


References:

  • Agomuoh, Fionna. "The 6 Biggest Problems with ChatGPT Right Now." Digital Trends, 27 Jan. 2023, https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/.
  • Gibson [@gibsonishere]. "It's actually very cool…" TikTok, 14 Jan. 2023, https://www.tiktok.com/@gibsonishere/video/7188674636165598510?_r=1&_t=8ZUzsEpiLeN.
  • Solution, WeeTech. "What Is ChatGPT and the Benefits of Using ChatGPT." WeeTech Solution Pvt Ltd, https://www.weetechsolution.com/blog/what-is-chat-gpt-and-the-advantages-of-using-chat-gpt.
  • Piantadosi, Steven T. [@spiantado]. "Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked." Twitter, 4 Dec. 2022.