Microsoft Creates an AI to Beat Ms. Pac-Man

Pac-Man is one of the most popular and best-selling arcade games of all time, and it is simple to follow. The player is placed in a maze filled with food (depicted as pellets or dots) and must eat all of it to advance to the next level. This task is made difficult by four ghosts that pursue Pac-Man through the maze. If Pac-Man contacts any of the ghosts, the player loses a life; after all three lives are lost, the game is over. To increase the difficulty in Ms. Pac-Man, the creators added a few new mazes and made the fruit, a high-point item, more of a challenge to collect by having it move across the field. Also, instead of switching the ghosts between scatter and chase mode at set times, they completely randomized the timing.

The ghosts in Pac-Man are always in one of three possible modes: Chase, Scatter, or Frightened. In these modes the ghosts pursue Pac-Man, avoid Pac-Man, or turn blue and can be eaten by Pac-Man, respectively. To be successful in Pac-Man, you must understand ghost behavior, and the key to understanding ghost behavior is the concept of a target tile. Most of the time, each ghost has a specific tile that it is trying to reach, and its behavior revolves around getting to that tile from its current one. When approaching an “intersection” tile, a ghost chooses which direction to turn based on which tile adjoining the intersection will put it nearest to its target tile, measured in a straight line; it picks the tile with the shorter straight-line distance. However, this can result in a ghost selecting the “wrong” turn when the initial choice places it closer as the crow flies but the overall path through the maze is longer. Note that target tiles are only used in Chase and Scatter mode; in Frightened mode, the ghosts pseudorandomly decide which turn to make at every intersection.

The ghosts’ AI is very simple, which makes their complex behavior even more impressive. Ghosts only ever plan one step into the future as they move about the maze: whenever a ghost enters a new tile, it looks ahead to the next tile it will reach and decides which direction it will turn when it gets there.
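The one-step lookahead described above can be sketched in a few lines of Python. The function name, coordinates, and direction labels here are illustrative only, not from the game's actual code:

```python
import math

def choose_direction(target, open_neighbors):
    """Pick the turn whose tile is nearest the target, as the crow flies.

    target is an (x, y) tile; open_neighbors maps a direction name to
    the tile the ghost would enter by turning that way.
    """
    def straight_line(tile):
        return math.hypot(tile[0] - target[0], tile[1] - target[1])
    # This greedy, one-tile lookahead is exactly why a ghost can take a
    # turn that is closer "as the crow flies" but longer through the maze.
    return min(open_neighbors, key=lambda d: straight_line(open_neighbors[d]))

# Target sits to the west: turning left gets the ghost geometrically closer.
moves = {"up": (5, 4), "left": (4, 5)}
print(choose_direction((0, 5), moves))  # → left
```

Because the rule only compares straight-line distances one tile ahead, it never checks whether the chosen corridor actually leads toward the target, which is the "wrong turn" behavior mentioned above.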

With ghost behavior being more random in Ms. Pac-Man, Microsoft’s AI player had to rely on assigning every object in the maze a priority, which then influenced the player’s behavior as it moved through the maze. For example, the player knew to avoid a ghost because of the huge negative priority the ghost carried as it got closer; when a ghost turned to Frightened mode and could be eaten, the priority flipped and the player knew to chase after it. With the use of these priorities and a heat map of the game, the AI was able to navigate through every level while avoiding all of the ghosts and eating all of the food in each maze. This made the AI the first player ever to beat Ms. Pac-Man with a perfect score of 999,990. However, because this wasn’t an actual person, the human world record is still held by a player who obtained a score of 933,580 on their own.
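The priority idea can be illustrated with a heavily simplified sketch. Microsoft's actual system (built by its Maluuba team) decomposed the game into many learned sub-agents whose preferences were aggregated; the hand-coded weights and coordinates below are invented purely to show how attraction and repulsion can steer a player:

```python
def score_move(move_tile, objects):
    """Score a candidate tile against every scored object in the maze.

    objects: list of ((x, y), weight) pairs. Positive weight attracts
    (pellets, frightened ghosts); negative weight repels (chasing ghosts).
    Dividing by distance makes nearby objects dominate the decision.
    """
    total = 0.0
    for (ox, oy), weight in objects:
        dist = abs(move_tile[0] - ox) + abs(move_tile[1] - oy)  # Manhattan
        total += weight / (dist + 1)
    return total

objects = [((2, 0), 1.0),     # pellet: mild attraction
           ((0, 2), -50.0)]   # chasing ghost: strong repulsion
candidates = [(1, 0), (0, 1)]
best = max(candidates, key=lambda t: score_move(t, objects))
print(best)  # → (1, 0): moves toward the pellet, away from the ghost
```

Flipping a ghost's weight from negative to positive when it enters Frightened mode is all it takes to turn "flee" into "chase" in this scheme, which matches the priority flip described above.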

AI in the healthcare industry is certainly growing!

In 2017, General Electric Company (GE) announced its partnership with NVIDIA: GE will import its 500,000 computed tomography (CT) scans into NVIDIA’s AI platform to improve the speed and precision of physician diagnostics.

GE further stated that importing the 500,000 Revolution Frontier CT scans into NVIDIA’s AI platform will not only reduce radiation doses but also increase image-processing speed, greatly improving doctors’ workflow and saving a lot of time.

The Revolution Frontier CT (computed tomography) scanner has been approved by the U.S. Food and Drug Administration (FDA). With accelerated image processing, clinical testing of liver and kidney lesions is expected to produce better results. Moreover, GE Healthcare is the first medical device vendor to adopt the NVIDIA GPU Cloud platform.

According to GE’s official website, GPU-accelerated deep learning technology can be used to create more complex neural networks. Future applications include improving 2D and 4D image display, providing visualization and quantitative data for patient blood flow, and improving medical status assessment. 

On average, a hospital generates 50 PB of data per year, and most of that data is never processed or analyzed. This is where NVIDIA comes to the rescue: NVIDIA’s AI chips will speed up the processing of this large amount of data. In addition, NVIDIA and GE will deepen their cooperation on cloud services, with GE storing some data on NVIDIA’s GPU cloud platform.

Aside from cooperating with GE, NVIDIA has also announced a partnership with Nuance Communications, which integrates NVIDIA’s deep learning platform with Nuance’s AI imaging-diagnostics system. In the future, 70% of radiologists across the US will be able to share images and reports through it, greatly improving work efficiency. As we can see from these two partnerships, NVIDIA is extremely ambitious and confident in promoting AI in the medical industry.


Ada Health

Ada Health, an AI medical startup founded in Berlin, Germany, created a “medical-world version of Alexa” to help users better understand their physical problems and illnesses through chat.

In countries with sufficient medical resources and support, visiting the doctor’s office is quite convenient. In countries where medical resources are scarce, however, visiting the doctor’s office is a difficult task for residents, let alone seeking advice from health professionals.

Ada Health has developed a mobile app that uses a large amount of medical information and AI algorithms to communicate with users, ask questions, and suggest possible diagnoses, allowing people who are feeling unwell to learn more about their health through technology. At the end of the consultation, Ada provides the contact information of a professional doctor near the patient so that the patient can obtain further assistance.

Through the Ada app, users can see past medical records, their own health status, and other data. When they go to the doctor’s office, they can share these materials to help doctors better understand their condition.

The artificial-intelligence-based mobile app “Ada”, like an Alexa for the medical world, aims to play the role of a good doctor and provide assistance to users. Although Ada is currently only available in English and German, it is already used by 1.5 million people, and according to the company, more language versions are coming soon!

In 2017, Ada Health raised $47 million in its first round of fundraising. Investors include Access Industries, Google’s June Fund, and William Tunstall-Pedoe, the artificial intelligence entrepreneur who created “Evi”, the predecessor of Amazon’s voice assistant Alexa. The new financing will be used to improve the product, hire new employees, and establish a US branch office.

Ada Health was created to establish a future in which health care is built around a “patient-centered” concept. Every patient can dive deeper into his/her health and condition information right at their fingertips, and doctors will work with AI to improve patients’ care.


Who is Lil Miquela?

Lil Miquela, a 19-year-old girl from Los Angeles, is the new darling of fashion. She has accumulated more than 1.6 million followers worldwide since her debut on Instagram in 2016, and Gucci, Chanel, and other international brands have all partnered with her. Fashion is not her only specialty: she also makes music, having released more than seven singles on Spotify with nearly 250,000 monthly listeners. In 2018, Time magazine even named her one of the world’s 25 most influential online celebrities. But hey, she sounds just like any other influencer, right? What makes her so special? The thing is, Lil Miquela is not real.

Meet Lil Miquela, the most famous virtual robot influencer on Instagram. Her existence is pretty much a large social experiment. Brud, the AI company that created her, merely provides her character settings and does not disclose further information. For example, Lil Miquela often gives “media interviews”, but the only way to contact her is via email. People believe her replies are her own AI computations, which serves as evidence that she is an independent individual.

Lil Miquela’s rise in popularity corresponds with the public’s curiosity about whether robots have “humanity”. Looking at her Instagram account, we can see that Lil Miquela is attempting to blur the boundary between virtual and real: she often takes photos with real-world celebrities and has served as an interviewee in many videos. Upon closer examination, we can easily detect her unnaturalness, i.e., the AI computations behind her. Still, people are confused and keep asking questions such as whether her voice is a real human’s voice, or whether she has an AI brain and AI vocal cords.

The existence of virtual influencers has undoubtedly attracted opposing views. Advocates say that having virtual influencers endorse products is a safe choice: just like real human celebrities, virtual influencers have fans, but they won’t cause scandals such as alcohol or drug abuse that damage a company’s image. Opponents counter that virtual influencers will eventually replace real human celebrities.

As AI robots gradually enter the real world, how humans get along with machines has become an important issue in the contemporary era. The appearance of Lil Miquela is an opportunity for us to examine this issue carefully. What are your thoughts on virtual influencers? I personally think it is creepy. Does this mean our days of following and stalking influencers everywhere are over? 


Let AI create new menus and new flavors for you!

AI is no longer limited to food-processing work; now it can create new menus and new flavors! Isn’t this exciting?

Artificial intelligence is the most efficient worker in the factory; it can be used to identify sensitive pictures, and it can also make fake videos. More importantly, it has begun to create: from writing news to writing songs, from painting to telling stories, it has begun to create things we have never seen before. Now, AI is also closely related to your food. Previously, AI could only handle basic food processing and screen out qualified ingredients, but now it can create new menus that let you taste new flavors in less time. The world’s largest condiment company, McCormick, and the packaged-food company Conagra are both using artificial intelligence to create new food flavors.

Human taste can be very complicated; it is related to genes, the five senses, and the proportions of the food mix. Only about 20% of what you taste comes from taste and touch, whereas the remaining 80% comes from smell. Moreover, mixing foods creates even more possibilities: you never know which combination will impress you the most. So far, food companies have invested less in artificial intelligence than other industries have, in part because taste is so human-related and complicated.

In the food sector, AI cannot completely substitute for humans. AI can accelerate the traditional trial-and-error process by proposing new taste mixes, but food companies still rely on human testers and their feedback when evaluating these combinations. In some ways AI works faster than food scientists and testers, but it has its limitations in terms of taste and preference. What AI does is collect data, understand future trends in people’s taste preferences, and suggest food combinations.

In the collaboration between McCormick and IBM, IBM created a system based on its research experience in using artificial intelligence to blend flavors. The system uses machine learning algorithms to screen hundreds of thousands of recipes and thousands of raw materials, helping chefs create new, eye-catching flavor combinations.

What are your thoughts on this? I certainly agree that AI cannot replace humans when it comes to food preference. Human minds are constantly changing, especially when it comes to food. For example, say that a minute ago I was craving a hamburger from Five Guys. Just as I was about to step into Five Guys, I was lured by the smell from Little Szechuan and decided I wanted Chinese food instead. Boom, my food preference changed instantly, just like that; so unpredictable. So we don’t have to worry about AI taking over the food industry. At least for now, right?


The era of human-robot collaboration is coming. Are you ready?

Working with robots can significantly reduce costs for enterprises and improve the efficiency of teamwork. According to research from MIT, a “human-robot collaboration (HRC)” team is more efficient than either a purely AI-driven robot or an all-human team! The human-robot collaboration model is bound to become mainstream in the future, bringing industrial innovation and providing more convenience. But what exactly is “human-robot collaboration”?

So-called “human-robot collaboration” refers to the continuous improvement of the workflow between people and machines through experience and communication. To be more precise, the machine performs operations according to the information and processes provided by humans, and humans adjust based on the results the machine produces, forming a collaborative loop. This model can greatly shorten working time, improve accuracy, save labor costs, and ultimately produce more humanized product design and service. Take the AI clinics of the past few years as an example: introducing AI software to help doctors read medical images not only greatly improved accuracy but also shortened the time needed to read an image from 10 minutes to 20 seconds. This way, doctors can focus more on the consultation, deepen the doctor-patient relationship, and find more suitable treatments for patients.

Furthermore, with the global wave of digitalization, “data” is undoubtedly the most important asset of an enterprise, and the quality and application of data will reshape the economic value of entire industries. To meet the needs of today’s market, various industries have embraced the digital economy and created new business models. Wow, it sounds like human-robot collaboration can certainly bring a lot of competitive advantages. However, questions tend to emerge, such as: how is it possible that machines can do better than humans, and what if machines substitute for humans? Human-robot collaboration does not mean that human beings will be replaced by robots; rather, humans play the role of training machines to perform tasks. This mode of working together is driving industrial transformation step by step. When a certain mode of operation begins to bring convenience, behavioral change is just around the corner, and it is imperative.

In the future, human-robot collaboration will inevitably become a mainstream mode of work, accelerating industrial innovation and shaping a more convenient life. So, are you ready?


AI vending/snack machines

What is your favorite activity during break time? Mine is to wander around the snack machines and browse the items inside. I’ve always wondered what it would be like if snack machines knew exactly what I wanted. Wouldn’t it be nice to have something that knows what snacks or drinks you want before you even ask? How wonderful is that? Thankfully, this idea is no longer just imagination! Snack machines with artificial intelligence have become prevalent and are here to satisfy those of us who are constantly debating what to get.

According to Philip (2019), combining snack machines with artificial intelligence brings convenience to consumers in several ways. First of all, facial recognition technology allows the machine to know who you are: when you stand in front of the snack machine, it scans your face, looks up your data, and instantly infers what you might want and need. For example, say you usually stick to a diet of extremely healthy snacks but tend to have one or two cheat days in a given period; this pattern is saved on the server, so the machine is aware of your eating habits and can customize products for you. However, to get an “only for you” AI snack machine, a first-time user must first “tell” the machine what products they like by entering their favorite snacks and drinks. This way, the snack machine has your basic information, learns your patterns, and can predict what to offer. Basically, once you’ve interacted with the machine, it remembers who you are, what you like, and what you would want. Moreover, AI snack machines list all of your favorite snacks, beverages, etc. on the screen, so you see a list that is for you and only for you (Philip, 2019). Do you feel special now?
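The interaction loop described above (recognize the user, recall their history, rank their usual picks) can be sketched as a toy recommender. A plain user ID stands in for facial recognition here, and all the names are hypothetical:

```python
from collections import Counter

purchase_history = {}  # user_id -> Counter of past purchases

def record_purchase(user_id, item):
    """Remember one purchase; this plays the role of 'learning your patterns'."""
    purchase_history.setdefault(user_id, Counter())[item] += 1

def recommend(user_id, top_n=3):
    """Returning users see their usual picks first; first-time users get
    nothing personalized until they 'tell' the machine what they like."""
    history = purchase_history.get(user_id)
    if not history:
        return []
    return [item for item, _ in history.most_common(top_n)]

record_purchase("alice", "granola bar")
record_purchase("alice", "granola bar")
record_purchase("alice", "sparkling water")
print(recommend("alice"))  # most-bought items first
```

A real machine would layer prediction on top of this (e.g., spotting the "cheat day" pattern), but even a simple frequency count reproduces the "it remembers what you like" experience.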

I personally think that combining vending machines with artificial intelligence is really convenient. For example, although I know that I tend to get the exact same snacks, I still hesitate between options when standing in front of a vending machine displaying a myriad of products. With a vending machine that keeps track of what I like, the machine could detect which snacks I have preferred recently and offer the most suitable ones, saving me from the struggle of choosing.


Are You Certain That I’m Not AI? – The Advent of Duplex

At Google’s 2018 I/O developer conference, Google Assistant’s new “Duplex” technology became a major focus. Duplex can not only imitate our (human) voices to make appointments with places such as restaurants and salons, it can also imitate the speaking habits we use, so that even the person who picks up the phone won’t realize they are not speaking to a human. Certainly, this technology is impressive and exciting. But at the same time, should humans be concerned about eventually being substituted by artificial intelligence?

Google Assistant debuted at the 2016 I/O developer conference; it took only two years to enable virtual assistants to hold natural, continuous dialogues. The voices of these virtual assistants sound so vivid and fascinating that even humans can’t tell the difference between the AI and a human voice. Unlike Siri and Alexa, Duplex sounds far more human. For computers, sounding human is quite difficult, because computers are used to receiving precise instructions while humans are often imprecise when they speak. For instance, when we speak, we pepper our conversations with filler and nonsense words; before ending a sentence, we may suddenly divert the content of the entire sentence; and we drop words or pause whenever we feel like pausing.

“Yeah…”, “umm…”, “well…”, “like…”: these are the most common filler words we use when we talk, providing a soothing effect between expressions. The following video shows the Duplex call demo for making a salon appointment. Listening to this demo, we can hear that Duplex added “umm…” into its sentences naturally and cleverly.

The next video shows the Duplex call demo for making a restaurant reservation. In this second demonstration, Duplex initially asks to reserve a table but is told that no reservation is needed since the restaurant is not busy at that time. Duplex not only instantly understands the “no reservation needed” situation but also knows to ask “how long is the wait usually to be seated”. In both demos, the people who picked up the phone never noticed that they were not talking to a human.

Certainly, Duplex has amazingly changed the interaction between machines and humans, but some problems follow when such technology emerges. First of all, does Duplex have an obligation to inform humans that they are actually talking to artificial intelligence? This is a dilemma: imagine picking up the phone and hearing “Hey, I am a robot!”. That would definitely freak people out and make them hang up immediately. In addition, no matter how trivial a conversation is, it ultimately has some social value. If we can’t tell whether the other side of the call is a real person or an AI, won’t people become suspicious and lose trust in what they see and hear? Lastly, what if this technology becomes a hierarchical privilege, allowing those who possess it to let Duplex handle the dull and boring conversations they are sick of? Is the advent of AI going to create distance between humans? This is a question worth pondering.


Meet Norman Bates, the first AI psycho

Artificial intelligence (AI) can imitate human behavior by learning from large amounts of data. For example, at Google’s 2018 developer conference, we saw the introduction of Google Assistant’s new Duplex technology. Aside from making reservations for humans by calling places such as restaurants and salons, Duplex can even make decisions for humans in unexpected situations. This technology is really convenient for anyone who considers making decisions one of their worst nightmares! Despite all the awe, the emergence of this sort of technology has also led to many moral and ethical discussions.

We often say that the problem is not the algorithm; rather, the training material fed to the algorithm is the biggest issue. In a 2018 study by the MIT Media Lab, researchers fed an abundance of uncomfortable and disturbing material to an artificial intelligence called “Norman”, successfully training a prejudiced model and giving birth to the first AI psychopath.

The name “Norman” was derived from the main character of Alfred Hitchcock’s famous movie “Psycho”. Norman is a deep learning model that captions pictures: when Norman sees an image, it automatically generates a paragraph describing what it thinks it sees. The research team fed Norman tons of unsettling content, such as pictures of corpses and the concept of death. After all that feeding, a test called the Rorschach inkblot test was conducted to determine whether Norman had diverged from a normal AI.

The Rorschach inkblot test is a personality test consisting of 10 cards with ink stains: five are black ink on white, two are black and red, and the remaining three are multicolored inkblots. Subjects answer what they initially thought each card looked like and what they felt afterward, and a psychologist judges the subject’s personality based on the answers and statistics. After several Rorschach inkblot tests, the research showed that the team had trained the world’s first psychopath AI.

Let’s take a look at one of the pictures that the research team showed Norman. When I saw this picture, I thought it looked like two dwarfs from Snow White high-fiving with both their hands and feet, joyful and having fun. A standard AI said it saw a vase with flowers in it. Norman, however, saw a man who had been shot dead. Basically, Norman can distort any normal picture and describe it in various disturbing ways.

From the Norman experiment, we can see that feeding biased data easily trains a biased AI model. If manipulated, such a model could sway sensitive societal issues: disputes involving sensitive topics may be fueled by algorithmic bias that accelerates the spread of, or enlarges, the extreme positions or prejudices of certain groups. The idea of a prejudiced AI mirrors us as humans: we eventually become like the people we choose as friends, because we are easily influenced too.
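The "biased data in, biased model out" point can be shown with a toy word-count classifier. The corpus and labels below are invented; the skew deliberately planted in the training labels carries straight through to the predictions, just as Norman's disturbing training set shaped its captions:

```python
from collections import Counter

def train(examples):
    """Tally word counts per label; a deliberately crude 'model'."""
    counts = {"neg": Counter(), "pos": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training words best match the input."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Biased corpus: every sentence containing "vase" was labelled negative.
biased = [("a vase with flowers", "neg"),
          ("a broken vase on the floor", "neg"),
          ("a sunny day in the park", "pos")]
model = train(biased)
print(predict(model, "a lovely vase"))  # → neg: the bias came from the data
```

Nothing in the algorithm itself is prejudiced; the skewed labels alone make the model call anything with "vase" negative, which is the Norman result in miniature.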



Will what we see on IG change?

Several artists I have been following on social media have been talking about how the Instagram algorithm has been updated once again.

Now, Instagram is going to demote content it deems “inappropriate”, even if the post does not violate the platform’s community guidelines. This means a post Instagram sees as “inappropriate” will get less reach and will be filtered out of Explore and hashtag pages. This will affect many influencers who post revealing content and cause many meme pages to get less exposure.

A problem with this is that the guidelines are very vague. Some assume that since AI is involved in screening content, Instagram is trying to make the algorithm more efficient while shrinking its own responsibilities.

With Tumblr recently banning sexual content and now Instagram lowering the engagement of vaguely “inappropriate” content, it raises the question of whether this is a trend we can expect from other social media platforms.

