TikTok: Machine Learning Bias and Echo Chambers

TikTok is an up-and-coming social media platform that is especially popular among users 24 and under (mostly Gen Z). On TikTok, users share videos ranging from 8 to 60 seconds long. The app's whole goal is to keep users scrolling through content for as long as possible, so every user's feed is personally tailored to them using machine learning! This means no two users will ever have the exact same feed, since the app is constantly studying and learning what content its users prefer. If you are someone who prefers to be entertained rather than make content, this is perfect, since your feed is catered just for you!

TikTok is also presented to the younger generation as an even playing field where anyone can go viral. This is where machine learning bias comes in. Some AI researchers have already noticed this and stated that, while TikTok may not be doing it intentionally, there is a very clear bias on the platform. To sum up the most important part of the article I wanted to share: the TikTok algorithm studies what its users watch and catches them in a loop where it constantly recommends nearly identical content. To quote Marc Faddoul, the researcher from the article linked below: “If most popular influencers are, say, blond, it will be easier for a blond to get followers than for a member of an underrepresented minority. And the loop goes on…” This is a clear machine learning bias: it suppresses diverse creators' ability to succeed on the app and leaves TikTok an uneven playing field, because the algorithm has learned (by studying already-popular creators) that certain videos are more likely to go viral, when that isn't necessarily true. Virality should depend on what the content is, not on who is creating it, and this algorithmic bias could be holding back videos with just as much potential to go viral as any others.
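The "loop" Faddoul describes can be sketched in a few lines of Python. This is a toy rich-get-richer simulation of my own, not TikTok's actual algorithm: creators who already have followers get recommended more often, so an early head start compounds.

```python
import random

# Toy model of a popularity feedback loop (NOT TikTok's real algorithm).
# The "algorithm" recommends a creator with probability proportional to
# that creator's current follower count; each recommendation earns one
# new follower, so early popularity compounds.
creators = {"A": 100, "B": 10, "C": 10}

random.seed(42)
for _ in range(1000):
    names = list(creators)
    weights = [creators[n] for n in names]
    picked = random.choices(names, weights=weights)[0]  # rich get richer
    creators[picked] += 1

# Creator A started with ~83% of the followers and captures an even
# larger share of the 1,000 new ones.
print(creators)
```

Even without any explicit bias in the code, creator A's head start snowballs, which is exactly the dynamic the article worries about for underrepresented creators.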

Additionally, the article did not talk about echo chambers, but given TikTok's repetitive, tailored algorithm, I believe users can get caught in their own echo chambers, since they will mostly be shown other users who are very similar to themselves and who therefore share many of the same beliefs. In fact, there is a trend on TikTok right now where teens create videos stating “if this TikTok ends up on your feed, it is because you are _____”, which shows that the algorithm analyzes its users and content so precisely that videos get matched to the “right people”; and by “right people” I mean people very similar to you specifically. Therefore, I believe that compared to other social media platforms such as Facebook and Twitter, you are more likely to get caught in an echo chamber on TikTok, since the algorithm will constantly feed you content only from creators who share your political beliefs, worldviews, etc.

Source 1, Racial Bias: https://www.buzzfeednews.com/article/laurenstrapagiel/tiktok-algorithim-racial-bias

Source 2, Filter Bubbles: https://www.wired.co.uk/article/tiktok-filter-bubbles

Ever thought of our very own ‘McDonalds’ to acquire AI technology?

Everyone's favorite food chain, McDonald's, has recently been making a lot of tech deals to evolve the customer experience. McDonald's largest deal in the last two decades has been its acquisition of Dynamic Yield, which specializes in personalization and decision-logic technology. These days, even food chains are turning to automation and emerging technology to keep their companies competitive and their customers engaged.

McDonald's is trying to replace human servers with voice-based technology in its US drive-throughs to make the ordering process more efficient, cutting down on service time. The start-up Apprente is helping it implement this technology, which uses AI to understand drive-thru orders. It is also said that in the future this tech could be used for self-order kiosks and mobile apps. Using machine learning and artificial intelligence, the fast food chain has high hopes of predicting what customers want to eat before they even order. The chain has digital boards programmed to market food strategically, taking into account factors like time of day, weather, and traffic, enabling it to coax people into spending more. For example, the boards might automatically suggest a McFlurry on extremely hot days, or highlight the best-selling item of the day, thereby nudging customers to order more. Sometimes it's scary to think there could be a time when machines know more about us than we do ourselves.

McDonald's has further tested technology at some of its drive-throughs that can recognize cars' license-plate numbers, enabling the company to pull up a record of the customer's previous orders, as long as the person agrees to share that data.

Adopting newer technology helps a company stay ahead of its competition, just as this chain is doing, but it again raises the question: are automation and AI making human jobs obsolete?

Source: https://www.cnbc.com/2019/09/10/mcdonalds-acquires-ai-company-trying-to-automate-the-drive-thru.html

Microsoft Creates an AI to Beat Ms. Pac-Man

Pac-Man is one of the most popular and best-selling arcade games of all time and is simple to follow. The player is placed in a maze filled with food (depicted as pellets or dots) and needs to eat all of it to advance to the next level. This task is made difficult by four ghosts that pursue Pac-Man through the maze. If Pac-Man contacts any of the ghosts, the player loses a life; after all three lives are lost, the game is over. In making Ms. Pac-Man, the creators increased the difficulty by adding a few new mazes and by making the fruit, a high-point item, more of a challenge to collect by having it move across the field. Also, instead of having the ghosts switch between scatter and chase mode at set times, they completely randomized the switches.

The ghosts in Pac-Man are always in one of three possible modes: Chase, Scatter, or Frightened. In these modes, the ghosts respectively pursue Pac-Man, avoid Pac-Man, or turn blue and can be eaten by Pac-Man. To be successful at Pac-Man, you must understand ghost behavior, and the key to understanding ghost behavior is the concept of a target tile. Most of the time, each ghost has a specific tile that it is trying to reach, and its behavior revolves around trying to get to that tile from its current one. When approaching an intersection tile, a ghost chooses which direction to turn based on which tile adjoining the intersection will put it nearest to its target tile, measured in a straight line; the ghost takes the option with the shorter straight-line distance. However, this can result in a ghost selecting the “wrong” turn, when the initial choice places it closer but the overall path is longer. Note that target tiles are only used in chase and scatter mode; in frightened mode, the ghosts pseudorandomly decide which turn to make at every intersection.
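The target-tile rule can be sketched in a few lines of Python. This is a simplified illustration of the rule described above, not the arcade game's actual code (the real game also has extra tie-breaking rules and forbids ghosts from reversing direction):

```python
import math

def choose_direction(intersection, target, open_neighbors):
    """Pick the adjoining tile that puts the ghost nearest its target
    tile, measured as a straight line -- the greedy rule above."""
    def straight_line(tile):
        return math.hypot(tile[0] - target[0], tile[1] - target[1])
    return min(open_neighbors, key=straight_line)

# Ghost at intersection (2, 2) with target tile (0, 0): it turns toward
# (1, 2), even if the maze path through (1, 2) is longer overall.
best = choose_direction((2, 2), (0, 0), [(1, 2), (3, 2), (2, 3)])
print(best)  # (1, 2)
```

Because the choice is greedy on straight-line distance at each intersection, the “wrong turn” behavior in the paragraph above falls out naturally: the locally closest tile is not always on the shortest maze path.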

The ghosts’ AI is very simple, which makes the complex behavior of the ghosts even more impressive. Ghosts only ever plan one step into the future as they move about the maze. Whenever a ghost enters a new tile, it looks ahead to the next tile that it will reach and decides which direction it will turn when it gets there.

With the ghosts' behavior being more random in Ms. Pac-Man, Microsoft's AI player had to rely on assigning every object in the maze a priority, which then influenced the player's behavior as it moved through the maze. For example, the player knew to avoid a ghost because of the huge negative priority the ghost carried as it got closer. When a ghost turned to frightened mode and could be eaten, the priority flipped, and the player knew to chase after it instead. The AI combined these priorities with a heat map of the game.

Using this approach, the AI was able to navigate through every level of the game while avoiding all of the ghosts and eating all of the food in each maze. This made the AI the first player ever to beat Ms. Pac-Man with a perfect score of 999,990. However, because this wasn't an actual person, the human world record is still held by a player who scored 933,580 on their own.
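The priority idea can be sketched as a simple influence map. This is my own toy illustration of the concept, not Microsoft's actual system: every object contributes a score that grows as the agent gets closer (large and negative for a chasing ghost, positive for food), and the agent steps toward the neighboring tile with the best total score.

```python
# Toy influence map in the spirit of the priority scheme described above
# (not Microsoft's code). Each object's influence on a tile falls off
# with distance; the agent moves to the neighbor with the best total.

def influence(tile, objects):
    score = 0.0
    for (ox, oy), priority in objects:
        d = abs(tile[0] - ox) + abs(tile[1] - oy)  # maze (Manhattan) distance
        score += priority / (d + 1)                # nearer objects weigh more
    return score

def best_move(pos, neighbors, objects):
    return max(neighbors, key=lambda t: influence(t, objects))

objects = [((5, 5), -100.0),  # chasing ghost: large negative priority
           ((1, 3), 10.0)]    # pellet: small positive priority
move = best_move((3, 3), [(2, 3), (4, 3), (3, 2), (3, 4)], objects)
print(move)  # (2, 3): away from the ghost, toward the pellet
```

Flipping the ghost's priority to a positive value when it enters frightened mode is all it takes to make the same agent chase the ghost instead of fleeing it, which matches the behavior flip described above.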

https://www.gameinformer.com/b/features/archive/2015/11/03/inside-the-development-of-ms-pac_2d00_man.aspx

https://www.youtube.com/watch?v=TpB1B9Tr_ck

AI in the healthcare industry is certainly growing!

In 2017, General Electric Company (GE) announced a partnership with NVIDIA under which it will bring its 500,000 computed tomography (CT) devices onto NVIDIA's AI platform to improve the speed and precision of physician diagnostics.

GE further stated that bringing the 500,000 Revolution Frontier CT machines onto NVIDIA's AI platform will not only reduce the amount of radiation required but will also increase image-processing speed, greatly improving doctors' workflow and saving a lot of time.

The Revolution Frontier CT (computed tomography) system has been approved by the U.S. Food and Drug Administration (FDA). With accelerated image processing, clinical testing of liver and kidney lesions is expected to produce better results. Moreover, GE Healthcare is the first medical device vendor to adopt the NVIDIA GPU Cloud platform.

According to GE’s official website, GPU-accelerated deep learning technology can be used to create more complex neural networks. Future applications include improving 2D and 4D image display, providing visualization and quantitative data for patient blood flow, and improving medical status assessment. 

On average, a hospital generates 50 PB of data per year, and most of that data is never processed or analyzed. This is where NVIDIA comes to the rescue: NVIDIA's AI chips will speed up the processing of this enormous amount of data. In addition, NVIDIA and GE will deepen their cloud-services cooperation, with GE storing some data on NVIDIA's GPU cloud platform.

Aside from cooperating with GE, NVIDIA has also announced a partnership with Nuance Communications, which allows NVIDIA's deep learning platform to integrate with Nuance's imaging-diagnostics AI system. In the future, 70% of radiologists across the US will be able to share images and reports through the solution, greatly improving work efficiency. As we can see from these two partnerships, NVIDIA is extremely ambitious and confident in promoting AI in the medical industry.

Sources:

https://www.ge.com/

http://newsroom.gehealthcare.com/intelligent-machines-changing-healthcare-man-machine-friends/

https://nvidianews.nvidia.com/news/ge-and-nvidia-join-forces-to-accelerate-artificial-intelligence-adoption-in-healthcare

https://blogs.nvidia.com/blog/2017/11/26/ai-medical-imaging/

https://www.accessdata.fda.gov/scrIpts/cdrh/cfdocs/cfRL/rl.cfm?lid=525731&lpcd=JAK

https://www.beckershospitalreview.com/artificial-intelligence/ge-healthcare-expands-partnership-with-nvidia-to-advance-ai-for-medical-imaging.html

https://www.nuance.com/about-us/newsroom/press-releases/nuance-nvidia-advance-ai-radiology.html

Ada Health

Ada Health, an AI medical startup founded in Berlin, Germany, created a “medical-world version of Alexa” to help users better understand their physical problems and illnesses through chat.

In countries with sufficient medical resources and support, visiting the doctor's office is quite convenient. In countries where medical resources are scarce, however, visiting a doctor can be a difficult task for residents, let alone seeking advice from health professionals.

Ada Health has developed a mobile app that uses a large amount of medical information and AI algorithms to communicate with users, ask questions, and suggest possible causes of their symptoms, allowing people who are feeling unwell to learn more about their health through technology. At the end of the consultation, Ada provides the contact information of a professional doctor located near that particular patient, so the patient can obtain further assistance.

Through the Ada app, users can see their past medical records, their current health status, and other data. When they go to the doctor's office, they can provide these materials to help doctors better understand their condition.

The artificial intelligence-based mobile app Ada, like an Alexa for the medical world, hopes to play the role of a good doctor and provide assistance to users. Although Ada is currently available only in English and German, it is already used by 1.5 million people, and according to the company, more language versions are coming soon!

In 2017, Ada Health closed its first round of fundraising, raising $47 million. Investors include Access Industries, Google's June Fund, and William Tunstall-Pedoe, the artificial intelligence entrepreneur who created “Evi”, the predecessor of Amazon's voice assistant Alexa. The new financing will be used to improve the product, hire new employees, and establish a US branch office.

Ada Health was created to establish a future in which health care is built around the concept of being “patient-centered”: every patient can dive deeper into his or her health and condition information right at their fingertips, and doctors will work with AI to improve patients' care.

Sources:

https://ada.com/

https://dhis.net/step-aside-alexa-here-comes-ada-health/

https://techcrunch.com/2017/10/31/berlins-ada-health-raises-47m-to-become-the-alexa-of-healthcare/

https://www.accessindustries.com/news/ada-health-raises-e40m-47m-of-funding-to-improve-access-to-healthcare-globally/

https://www.mobihealthnews.com/content/ada-health-gets-47m-ai-powered-chatbot-telemedicine-app

https://www.cityam.com/ada-health-lands-eur40m-len-blavatnik-amazon-alexa-creator/

Who is Lil Miquela?

Lil Miquela, a 19-year-old girl from Los Angeles, is the new darling of fashion. She has accumulated more than 1.6 million followers worldwide since her Instagram debut in 2016, and Gucci, Chanel, and other international brands have all partnered with her. Fashion is not her only specialty: she also makes music, having released more than seven singles on Spotify, with nearly 250,000 monthly listeners. In 2018, Time magazine even named her one of the world's 25 most influential people on the internet. But hey, she sounds just like any other influencer, right? What makes her so special? The thing is, Lil Miquela is not real.

Meet Lil Miquela, the most famous intelligent robot influencer on Instagram. Her existence is pretty much a large social experiment. Brud, the AI company that created Lil Miquela, merely provides her character settings and does not disclose further information. For example, Lil Miquela often accepts “media interviews”, but the only way to contact her is via email. People believe that Lil Miquela's replies are her own AI computations, which would serve as evidence that she is an independent individual.

Lil Miquela's rise in popularity corresponds with the public's curiosity about whether robots can have “humanity”. Looking at her Instagram account, we can see that Lil Miquela is attempting to blur the boundary between the virtual and the real. She often takes photos with real-world celebrities and has also served as an interviewee in many videos. Upon closer examination, we can easily detect the unnaturalness of Lil Miquela, i.e., the AI computations. But still, people are confused about her and keep asking questions, such as whether her voice is a real human being's voice, or whether she has an AI brain and AI vocal cords.

The existence of virtual influencers has undoubtedly attracted opposing views. Advocates argue that having virtual influencers endorse products is a safe choice: just like real human celebrities, virtual influencers have fans, but they won't cause trouble such as alcohol and drug abuse that damages a company's image. Opponents argue that virtual influencers will eventually replace real human celebrities.

As AI robots gradually enter the real world, how humans get along with machines has become an important issue of the contemporary era. The appearance of Lil Miquela is an opportunity for us to examine this issue carefully. What are your thoughts on virtual influencers? I personally think they are creepy. Does this mean our days of following and stalking influencers everywhere are over?

Sources:

https://www.instagram.com/lilmiquela/

https://www.theverge.com/2019/1/30/18200509/ai-virtual-creators-lil-miquela-instagram-artificial-intelligence

https://www.thecut.com/2018/05/lil-miquela-digital-avatar-instagram-influencer.html

https://www.abc.net.au/news/2018-05-21/miquela-sousa-instagram-famous-influencer-cgi-ai/9767932

Let AI create new menus and new flavors for you!

AI is no longer limited to food-processing work; now it can create new menus and new flavors! Isn't this exciting?

Artificial intelligence is the most efficient worker in the factory: it can be used to identify sensitive pictures, and it can also make fake videos. More importantly, it has begun to create, from writing news to writing songs, from painting to telling stories; it has begun to create things we have never seen before. Now, AI is also closely related to your food. Previously, AI in the food industry could only handle basic processing and screen out qualified ingredients. But now it can create new menus that let you taste new flavors in less time. The world's largest condiment company, McCormick, and the packaged-food company ConAgra are using artificial intelligence to create new food flavors.

Human taste can be very complicated; it is related to genes, the five senses, and the proportions of the food mix. Only about 20% of flavor perception comes from taste and touch, while the remaining 80% comes from smell. Moreover, mixing foods creates even more possibilities; you never know which combination will impress you the most. So far, food companies have invested less in artificial intelligence than other industries have, in part because taste is so human and so complicated.

In the food sector, AI cannot completely substitute for humans. What it can do is accelerate the traditional trial-and-error process by proposing new taste combinations. Food companies still rely on human testers and their feedback when evaluating these combinations. In some ways AI can work faster than food scientists and testers, but it has its limitations in terms of taste and preference. What AI does is collect data, understand trends in people's future taste preferences, and suggest food-mix combinations.

In the collaboration between McCormick and IBM, IBM created a system that uses artificial intelligence to blend flavors, built on its research experience. The system uses advanced machine learning algorithms to screen hundreds of thousands of recipes and thousands of raw ingredients, helping chefs create eye-catching new flavor combinations.

What are your thoughts on this AI? I certainly agree that AI cannot replace humans when it comes to food preferences. Human minds are constantly changing, especially about food. For example, say that a minute ago I was craving a hamburger from Five Guys. Just as I was about to step into Five Guys, I was lured by the smell from Little Szechuan and decided I wanted Chinese food instead. Boom, my food preference changed instantly, just like that; so unpredictable. So we don't have to worry about AI taking over the food industry. At least for now, right?

Sources:

https://www.engadget.com/2019/02/05/mccormick-ibm-ai-next-big-spice/

https://www.youtube.com/watch?v=C8xELZDTGxM

https://www.youtube.com/watch?v=-M1hO_V04U8

https://cstoredecisions.com/2019/09/04/conagra-uses-ai-to-identify-trends-develop-new-products/

The era of human-robot collaboration is coming, are you ready?

Working with robots can significantly reduce costs for enterprises and improve the efficiency of teamwork. According to research from MIT, a human-robot collaboration (HRC) team is more efficient than either a purely robotic team or an all-human team! The human-robot collaboration model is bound to become mainstream in the future, bringing industrial innovation and providing more convenience. But what exactly is “human-robot collaboration”?

So-called “human-robot collaboration” refers to the continuous improvement of a shared workflow between people and machines through experience and communication. To be more precise, the machine performs operations according to the information and processes supplied by humans, and humans adjust based on the results the machine produces, which together form a collaborative model. This kind of collaboration can greatly shorten working time, improve accuracy, save enterprises labor costs, and ultimately produce more humanized product design and service. Take AI clinics of the past few years as an example: introducing AI software to assist doctors in reading medical imaging data not only greatly improved accuracy but also shortened the time doctors spend reading images from 10 minutes to 20 seconds. This way, doctors can focus more on the consultation, deepen the doctor-patient relationship, and find more suitable treatments for patients.

Furthermore, with the global wave of digitalization, “data” is undoubtedly the most important asset of enterprises, and the quality and application of data will reshape the economic value of entire industries. Therefore, to meet the needs of today's market, various industries have embraced the digital economy to create new business models. Wow, it sounds like human-robot collaboration can certainly bring a lot of competitive advantages. However, questions tend to emerge, such as: how is it possible that machines can do better than humans, and what if machines substitute for humans? Human-robot collaboration does not mean that human beings will be replaced by robots; rather, humans play the role of training machines to perform tasks. This mode of working alongside machines is driving industrial transformation step by step. When a mode of operation begins to bring convenience, behavior change is just around the corner, and it is inevitable.

In the future, human-robot collaboration will inevitably become a mainstream mode of work, accelerating industrial innovation and shaping a more convenient life. So, are you ready?

Sources: 

https://www.mobihealthnews.com/content/ai-goes-clinic-are-we-ready-yet

https://www.csail.mit.edu/research/human-robot-collaboration-shared-workspace

https://www.kuka.com/en-us/future-production/human-robot-collaboration

https://humanrobotinteraction.org/1-introduction/

https://www.csail.mit.edu/research/efficient-communication-human-machine-teams

AI vending/snack machines

What is your favorite activity during break time? Mine is to wander over to the snack machines and browse through the items inside. I've always wondered what it would be like if snack machines knew exactly what I wanted. Wouldn't it be nice to have something that knows what snacks or drinks you want before you even ask? How wonderful would that be? Thankfully, this idea is no longer just imagination! Snack machines with artificial intelligence have become prevalent and are here to satisfy those of us who are constantly debating which products to get.

According to Philip (2019), integrating artificial intelligence into snack machines brings convenience to consumers in several ways. First of all, facial recognition technology allows the machine to know who you are. When you stand in front of the snack machine, it instantly infers what you might want and need by scanning your face; while scanning, it searches for your data. For example, say that you usually stick to extremely healthy snacks but tend to have one or two cheat days in a given time period; this pattern is saved on the server, so the machine is aware of your eating habits and can customize products for you. However, to have an “only for you” AI snack machine, a first-time user must first “tell” the machine what products they like by entering their favorite snacks and drinks. This way, the snack machine has your basic information, learns your patterns, and can predict what to offer. Basically, once you've interacted with the machine, it remembers who you are, what you like, and what you would want. Moreover, an AI snack machine will list all of your favorite snacks, beverages, etc. on the screen, so you can see a list that is for you and only you (Philip, 2019). Do you feel special now?

I personally think the combination of vending machines and artificial intelligence is really convenient. For example, although I know I tend to get the exact same snacks, I still hesitate between options when I'm standing in front of a vending machine displaying a myriad of products. With a vending machine that keeps track of what I like, the machine could detect which snacks I've preferred recently and offer the most suitable ones to me, saving me the struggle of choosing.
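The "remembers what you like" idea boils down to keeping a purchase history per user and surfacing each user's most frequent picks first. Here is a minimal sketch of my own (not ThinkPalm's actual system; the class and method names are made up for illustration):

```python
from collections import Counter

# Toy sketch of a preference-tracking vending machine: count each user's
# past purchases and recommend their most frequent items first.
class SmartVendingMachine:
    def __init__(self):
        self.history = {}  # user id -> Counter of purchased items

    def record_purchase(self, user, item):
        self.history.setdefault(user, Counter())[item] += 1

    def recommend(self, user, n=3):
        # A first-time user has no history, so they get no list yet --
        # matching the "tell the machine what you like" onboarding step.
        counts = self.history.get(user, Counter())
        return [item for item, _ in counts.most_common(n)]

machine = SmartVendingMachine()
for item in ["granola bar", "water", "granola bar", "chips", "granola bar"]:
    machine.record_purchase("alice", item)

print(machine.recommend("alice"))  # "granola bar" comes first
```

A real deployment would key the history on a face-recognition match instead of a user ID string, but the recommendation logic is the same counting idea.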

Sources:

https://thinkpalm.com/blogs/facial-recognition-ai-driven-smart-vending-machines-enhancing-retail-industry/

https://thinkpalm.com/

Are You Certain That I’m Not AI? – The Advent of Duplex

At Google's 2018 I/O developer conference, Google Assistant's new “Duplex” technology became a major focus. Duplex not only can imitate human voices to make appointments at places such as restaurants and salons, it can also imitate the speech habits we use when we talk; even the person who picks up the phone won't realize that he or she is not speaking to a human. Certainly, this technology is admirable and exciting. At the same time, however, should humans be concerned about eventually being substituted by artificial intelligence?

Google Assistant debuted at the 2016 I/O developer conference, and it took only two years to enable the virtual assistant to hold natural, continuous dialogues. The voice tones sound so vivid and lifelike that even humans can't tell the AI apart from a human voice. Unlike Siri and Alexa, Duplex sounds far more human. For computers, sounding human is quite difficult, because computers are used to receiving precise instructions, while humans are often imprecise when they speak. For instance, when we speak, we pepper our conversations with filler and nonsense words; before ending a sentence, we may suddenly divert the content of the entire sentence; and we tend to drop words and pause whenever we feel like pausing.

“Yeah…”, “umm…”, “well…”, “like…”: these are the most common filler words we use when we talk, providing a smoothing effect between expressions. The following video shows the Duplex call demo for making a salon appointment. Listening to it, we can hear that Duplex adds “umm…” to its sentences naturally and cleverly.

The next video shows the Duplex call demo for making a restaurant reservation. In this second demonstration, Duplex initially asks to reserve a table but is then told that no reservation is needed, since the restaurant is not busy at that time. Duplex not only instantly understands the “no reservation needed” situation but also knows to ask “how long is the wait usually to be seated”. Listening to the two demos, we can tell that the people who picked up the phone did not notice they were not talking to a human.

Certainly, Duplex has amazingly changed the interaction between machines and humans. But some problems follow when such technology emerges. First of all, does Duplex have an obligation to inform humans that they are actually talking to artificial intelligence? This is a dilemma: imagine picking up the phone and hearing “hey, I am a robot!” That would definitely freak people out and make them hang up immediately. In addition, no matter how trivial a conversation is, it ultimately carries some social value. If we can't tell whether the other end of the phone is a real person or an AI, won't people become suspicious and lose trust in what they see and hear? Lastly, what if this technology becomes a hierarchical privilege, allowing those who possess it to have Duplex handle the dull, boring conversations they are sick of? Is the advent of AI going to create distance between humans? This is a question worth pondering.

Sources: 

https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html

https://www.youtube.com/watch?v=47xJkeG9BZI

https://www.youtube.com/watch?v=TjS_nXzwbN8