AI in the healthcare industry is certainly growing!

In 2017, General Electric Company (GE) announced a partnership with NVIDIA to bring its roughly 500,000 imaging devices onto NVIDIA’s AI platform, improving the speed and precision of physician diagnostics.

GE further stated that pairing the Revolution Frontier CT with NVIDIA’s AI computing platform not only reduces the radiation dose but also speeds up image processing considerably, greatly improving the doctor’s workflow and saving a lot of time.

The Revolution Frontier CT (computed tomography) scanner has been cleared by the U.S. Food and Drug Administration (FDA). With its accelerated image processing, it is expected to deliver better clinical results when assessing liver and kidney lesions. Moreover, GE Healthcare is the first medical device vendor to adopt the NVIDIA GPU Cloud platform.

According to GE’s official website, GPU-accelerated deep learning technology can be used to build more complex neural networks. Future applications include improved 2D and 4D image rendering, visualization and quantitative data on patients’ blood flow, and better assessment of medical conditions.

The average hospital generates around 50 PB of data per year, and most of that data is never processed or analyzed. This is where NVIDIA comes to the rescue: its AI chips speed up the processing of this enormous volume of data. In addition, NVIDIA and GE will deepen their cooperation on cloud services, with GE storing some of its data on NVIDIA’s GPU Cloud platform.

Aside from cooperating with GE, NVIDIA has also announced a partnership with Nuance Communications. The partnership integrates NVIDIA’s deep learning platform with Nuance’s diagnostic imaging AI platform, which reaches roughly 70% of radiologists across the US, letting them share images and reports and greatly improving work efficiency. As we can see from these two collaborations, NVIDIA is extremely ambitious and confident about promoting AI in the medical industry.

Sources:

https://www.ge.com/

http://newsroom.gehealthcare.com/intelligent-machines-changing-healthcare-man-machine-friends/

https://nvidianews.nvidia.com/news/ge-and-nvidia-join-forces-to-accelerate-artificial-intelligence-adoption-in-healthcare

https://blogs.nvidia.com/blog/2017/11/26/ai-medical-imaging/

https://www.accessdata.fda.gov/scrIpts/cdrh/cfdocs/cfRL/rl.cfm?lid=525731&lpcd=JAK

https://www.beckershospitalreview.com/artificial-intelligence/ge-healthcare-expands-partnership-with-nvidia-to-advance-ai-for-medical-imaging.html

https://www.nuance.com/about-us/newsroom/press-releases/nuance-nvidia-advance-ai-radiology.html

Ada Health

Ada Health, an AI medical startup founded in Berlin, Germany, created the “medical world version of Alexa” to help users better understand their physical problems and illnesses through chat. 

In countries with sufficient medical resources and support, visiting the doctor’s office is quite convenient. However, in countries where medical resources are scarce, seeing a doctor can be a difficult task for residents, let alone getting advice from health professionals.

Ada Health has developed a mobile app that uses a large body of medical information and AI algorithms to chat with users, ask questions, and assess possible causes of their symptoms, allowing people who feel unwell to learn more about their health through technology. At the end of the assessment, Ada can provide the contact information of a professional doctor near the user so that they can obtain further assistance.
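To make the idea concrete, here is a minimal, hypothetical sketch of how a chat-style symptom checker could rank possible explanations from a user’s yes/no answers and then point them to a clinician. The conditions, symptoms, and scoring below are invented for illustration only; Ada’s real medical knowledge base and reasoning engine are proprietary and far more sophisticated.

```python
# Toy symptom checker: ask about each known symptom, score candidate
# conditions by the fraction of their symptoms the user reports, then
# hand off to a human doctor. All medical content here is made up.
conditions = {
    "common cold": {"runny nose", "sore throat", "cough"},
    "flu": {"fever", "cough", "body aches", "fatigue"},
    "allergies": {"runny nose", "itchy eyes", "sneezing"},
}

def assess(ask):
    """ask(symptom) -> bool. Returns conditions ranked by matched symptoms."""
    all_symptoms = sorted(set().union(*conditions.values()))
    answers = {s: ask(s) for s in all_symptoms}          # the "chat" step
    scores = {
        name: sum(answers[s] for s in symptoms) / len(symptoms)
        for name, symptoms in conditions.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Simulate a user who reports a runny nose and sneezing.
user_symptoms = {"runny nose", "sneezing"}
ranking = assess(lambda symptom: symptom in user_symptoms)
print("Possible explanations:", ranking[:2])
print("This is not a diagnosis; here is contact info for a doctor near you: ...")
```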

Through the Ada app, users can see their past medical records, their current health status, and other data. When they go to the doctor’s office, they can share these materials to help the doctor better understand their condition.

The AI-based mobile app “Ada”, like an Alexa for the medical world, hopes to play the role of a good doctor and provide assistance to users. Although Ada is currently only available in English and German, it is already used by 1.5 million people, and according to the company, more language versions are coming soon!

In 2017, Ada Health closed its first round of fundraising, raising $47 million. Investors include Access Industries, Google’s June Fund, and William Tunstall-Pedoe, the artificial intelligence entrepreneur who created “Evi”, the predecessor of Amazon’s voice assistant Alexa. The new financing will be used to improve the product, hire new employees, and establish a US branch office.

Ada Health was created to build a future in which health care is “patient-centered”: every patient can dive deeper into their own health and condition information at their fingertips, and doctors will work with AI to improve patients’ care.

Sources:

https://ada.com/

https://dhis.net/step-aside-alexa-here-comes-ada-health/

https://techcrunch.com/2017/10/31/berlins-ada-health-raises-47m-to-become-the-alexa-of-healthcare/

https://www.accessindustries.com/news/ada-health-raises-e40m-47m-of-funding-to-improve-access-to-healthcare-globally/

https://www.mobihealthnews.com/content/ada-health-gets-47m-ai-powered-chatbot-telemedicine-app

https://www.cityam.com/ada-health-lands-eur40m-len-blavatnik-amazon-alexa-creator/

Say hello to e-Palette, the self-driving bus

Toyota announced that it will provide the electric self-driving shuttle e-Palette at the 2020 Tokyo Olympics to transport athletes back and forth. In 2018, Toyota debuted the Mobility-as-a-Service e-Palette Concept, proposing the idea of a “mobile box”: a multi-functional, fully automated vehicle. With autonomous driving technology, it could become a tool for mass transportation in the future. Beyond transporting passengers, the mobile box could also host services such as package delivery, hotels, takeaway food, restaurants, laboratories, or personal mobile offices.

The e-Palette Concept is built on Toyota’s Mobility Services Platform (MSPF). It is designed with a spacious cabin and a low chassis, and will be available in three different sizes. The e-Palette Concept prototype shown at CES 2018 is 4.8 meters long, 2 meters wide, and 2.25 meters high. A split automatic sliding side door and low floor provide barrier-free access, and the cabin layout can be customized for the partner business or user. The e-Palette Concept uses a symmetrical box design and smaller tires to maximize interior space. It is equipped with cameras and sensors such as LiDAR, combined with high-precision 3D maps, for low-speed automated driving. The vehicle can detect obstacles in a full 360 degrees with no blind spots and operates at an optimal speed that depends on the environment.

In addition, the cameras and sensors required by the e-Palette Concept’s automated driving system can be mounted on the roof of the vehicle. If the system behaves abnormally, a backup brake stops the vehicle safely. The vehicle is also equipped with front and rear lights that signal its state to surrounding pedestrians while it drives itself.

As a showcase of this avant-garde transportation concept, the e-Palette will run for the first time at the Tokyo Olympics. Although it will only be used for transportation there, Toyota says it will use the knowledge it accumulates to develop the e-Palette for further uses in the future. I can’t wait to see e-Palette self-driving buses next year!

Sources:

https://www.forbes.com/sites/nargessbanks/2018/01/10/toyota-e-palette-ces2018/#6a07f0565368

https://www.theverge.com/2018/1/8/16863092/toyota-e-palette-self-driving-car-ev-ces-2018

https://www.youtube.com/watch?v=XmoPQuMlOYE

https://www.youtube.com/watch?v=L4WqsKSKpGk

https://www.autorentalnews.com/321408/toyota-creates-mobility-service-for-ride-hailing-companies

https://global.toyota/en/newsroom/corporate/29933371.html

https://www.dezeen.com/2019/10/14/toyota-e-palette-tokyo-2020-olympics/

Who is Lil Miquela?

Lil Miquela, a 19-year-old girl from Los Angeles, is the new darling of fashion. She has accumulated more than 1.6 million followers worldwide since her Instagram debut in 2016, and international brands such as Gucci and Chanel have partnered with her. Fashion is not her only specialty: she also makes music, having released more than seven singles on Spotify with nearly 250,000 monthly listeners. In 2018, Time magazine even named her one of the 25 most influential people on the internet. But hey, she sounds just like any other influencer, right? What makes her so special? The thing is, Lil Miquela is not real.

Meet Lil Miquela, the most famous virtual robot influencer on Instagram. Her existence is very much like a large social experiment. Brud, the AI company that created her, provides only her character backstory and does not disclose further information. For example, Lil Miquela often gives “media interviews”, but the only way to contact her is via email. Some people believe her replies are generated by her own AI, which they take as evidence that she is an independent individual.

Lil Miquela’s rise in popularity corresponds with the public’s curiosity about whether robots have “humanity”. Looking at her Instagram account, we can see that she is trying to blur the boundary between the virtual and the real. She often takes photos with real-world celebrities and has appeared as an interviewee in many videos. On closer examination, we can easily spot the unnaturalness of her computer-generated presence. Still, people are confused and keep asking questions such as whether her voice is a real human voice, or whether she has an AI brain and AI vocal cords.

The existence of virtual influencers has undoubtedly attracted opposing views. Advocates argue that having virtual influencers endorse products is a safe choice: like real celebrities, virtual influencers have fans, but they won’t cause trouble such as alcohol or drug abuse that damages a company’s image. Opponents counter that virtual influencers will eventually replace real human celebrities.

As AI robots gradually enter the real world, how humans get along with machines has become an important issue in the contemporary era. The appearance of Lil Miquela is an opportunity for us to examine this issue carefully. What are your thoughts on virtual influencers? I personally think it is creepy. Does this mean our days of following and stalking influencers everywhere are over? 

Sources:

https://www.instagram.com/lilmiquela/

https://www.theverge.com/2019/1/30/18200509/ai-virtual-creators-lil-miquela-instagram-artificial-intelligence

https://www.thecut.com/2018/05/lil-miquela-digital-avatar-instagram-influencer.html

https://www.abc.net.au/news/2018-05-21/miquela-sousa-instagram-famous-influencer-cgi-ai/9767932

Let AI create new menus and new flavors for you!

AI is no longer limited to food-processing work: now it can create new menus and new flavors! Isn’t this exciting?

Artificial intelligence is the most efficient worker in the factory; it can be used to identify sensitive pictures and even to make fake videos. More importantly, it has begun to create, from writing news to writing songs, from painting to telling stories; it has begun to create things we have never seen before. Now AI is also getting close to your food. Previously, AI could only handle basic food processing and screen out qualified ingredients, but now it can create new menus that let you taste new flavors in less time. McCormick, the world’s largest condiment company, and the packaged-food company ConAgra are both using artificial intelligence to create new food flavors.

Human taste can be very complicated; it is related to genes, the five senses, and the proportions of the food mix. Only about 20% of what we perceive as flavor comes from taste and touch, while the rest comes from smell. Ingredient combinations create even more possibilities, and you never know which combination will impress you most. So far, food companies have invested less in artificial intelligence than other industries have, in part because taste is so human and so complicated.

In the food sector, AI cannot completely substitute for humans. AI can accelerate the traditional trial-and-error process by proposing new taste combinations, but food companies still rely on human testing and feedback to evaluate those combinations. In some ways AI works faster than food scientists and tasters, yet it has its limitations when it comes to taste and preference. What AI does is collect data, understand trends in people’s future taste preferences, and offer suggestions for ingredient combinations.

In the collaboration between McCormick and IBM, IBM built a system based on its research experience in using artificial intelligence to blend flavors. The system uses machine learning algorithms to screen hundreds of thousands of recipes and thousands of raw materials, helping chefs create eye-catching new flavor combinations.
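As a rough illustration of what “screening hundreds of thousands of recipes” might look like in miniature, here is a hedged, toy sketch that scores ingredient pairs by how much more often they co-occur in a recipe corpus than chance would predict (pointwise mutual information). The tiny recipe list is made up, and the real McCormick/IBM system is proprietary and far more sophisticated.

```python
from itertools import combinations
from collections import Counter
import math

# Toy stand-in for "hundreds of thousands of recipes".
recipes = [
    {"cumin", "coriander", "chili", "garlic"},
    {"vanilla", "cinnamon", "nutmeg"},
    {"chili", "chocolate", "cinnamon"},
    {"garlic", "basil", "tomato"},
    {"cumin", "chili", "lime"},
]

n = len(recipes)
ingredient_counts = Counter(i for r in recipes for i in r)
pair_counts = Counter(frozenset(p) for r in recipes for p in combinations(sorted(r), 2))

def pmi(a, b):
    """How much more often a and b co-occur than their popularity predicts."""
    joint = pair_counts[frozenset((a, b))] / n
    expected = (ingredient_counts[a] / n) * (ingredient_counts[b] / n)
    return math.log2(joint / expected)

# Rank all observed pairs; a flavor team might review the top
# unusual-but-correlated pairs as candidates for a new seasoning blend.
ranked = sorted(pair_counts, key=lambda p: pmi(*tuple(p)), reverse=True)
for pair in ranked[:5]:
    a, b = tuple(pair)
    print(f"{a} + {b}: PMI = {pmi(a, b):.2f}")
```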

What are your thoughts on this use of AI? I certainly agree that AI cannot replace humans when it comes to food preferences. Human minds are constantly changing, especially about food. For example, say that a minute ago I was craving a hamburger from Five Guys. Just as I was about to step into Five Guys, I was lured by the smell from Little Szechuan and decided I wanted Chinese food instead. Boom, my food preference changed instantly, just like that; so unpredictable. So we don’t have to worry about AI taking over the food industry. At least for now, right?

Sources:

https://www.engadget.com/2019/02/05/mccormick-ibm-ai-next-big-spice/

https://www.youtube.com/watch?v=C8xELZDTGxM

https://www.youtube.com/watch?v=-M1hO_V04U8

https://cstoredecisions.com/2019/09/04/conagra-uses-ai-to-identify-trends-develop-new-products/

The era of human-robot collaboration is coming, are you ready?

Working with robots can significantly reduce costs for enterprises and improve the efficiency of teamwork. According to research at MIT, a “human-robot collaboration (HRC)” team is more efficient than either a purely robotic team or a purely human team! The human-robot collaboration model is bound to become mainstream in the future, bringing industrial innovation and more convenience. But what exactly is “human-robot collaboration”?

So-called “human-robot collaboration” refers to the continuous improvement of the workflow between people and machines through experience and communication. More precisely, the machine performs operations according to the information and processes provided by humans, and humans then adjust based on the results the machine produces, forming a collaborative loop. This model can greatly shorten working time, improve accuracy, save labor costs for the enterprise, and ultimately produce more human-centered product design and service. Take the AI clinics of the past few years as an example: introducing AI software to help doctors read medical imaging data not only greatly improved accuracy but also shortened the time doctors spend reading images from 10 minutes to 20 seconds. This way, doctors can focus more on the consultation, deepen the doctor-patient relationship, and find more suitable treatments for patients.

Furthermore, with the global wave of digitalization, “data” is undoubtedly an enterprise’s most important asset. The quality and application of data will reshape the economic value of each industry, so to meet the needs of today’s market, industries of all kinds have embraced the digital economy and created new business models. It sounds like human-robot collaboration can certainly bring a lot of competitive advantages. However, questions inevitably arise: how can machines possibly do better than humans, and what if machines replace us? Human-robot collaboration does not mean that human beings will be replaced by robots; rather, humans take on the role of training machines to perform tasks. This mode of working together is driving industrial transformation step by step. Once a way of working starts to bring convenience, behavior change is just around the corner, and it becomes inevitable.

In the future, human-robot collaboration will inevitably become a mainstream mode of work, accelerating industrial innovation and shaping a more convenient life. So, are you ready?

Sources: 

https://www.mobihealthnews.com/content/ai-goes-clinic-are-we-ready-yet

https://www.csail.mit.edu/research/human-robot-collaboration-shared-workspace

https://www.kuka.com/en-us/future-production/human-robot-collaboration

https://humanrobotinteraction.org/1-introduction/

https://www.csail.mit.edu/research/efficient-communication-human-machine-teams

AI vending/snack machines

What is your favorite activity during break time? Mine is to wander over to the snack machines and browse through the items inside. I’ve always wondered what it would be like if snack machines knew exactly what I want. Wouldn’t it be nice to have something that knows what snacks or drinks you want before you even ask? How wonderful is that? Thankfully, this idea is no longer just imagination! Snack machines with artificial intelligence are becoming prevalent, ready to satisfy those of us who are constantly debating which products to get.

According to Philip (2019), combining snack machines with artificial intelligence brings convenience to consumers in several ways. First of all, facial recognition technology allows the machine to know who you are. When you stand in front of the snack machine, it scans your face, looks up your data, and can instantly estimate what you might want and need. For example, say you normally stick to an extremely healthy diet but tend to have one or two cheat days in a given period; this dietary pattern is saved on the server, so the machine is aware of your eating habits and can customize the products it offers you. However, to have an “only for you” AI snack machine, a first-time user must first “tell” the machine what they like by entering their favorite snacks and drinks. With that basic information, the machine learns your patterns and can make predictions about what to offer. Basically, once you’ve interacted with the machine, it remembers who you are, what you like, and what you would want. Moreover, an AI snack machine can list all of your favorite snacks, beverages, and so on on its screen, so you see a list that is for you and only for you (Philip, 2019). Do you feel special now?
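Here is a minimal, hypothetical sketch of the flow described above: recognize the shopper from a face embedding, look up their purchase history, and rank in-stock items by recent preference. The shopper records, embeddings, and thresholds are all made up; a real machine would rely on a trained face-recognition model and a secure customer database, neither of which is shown here.

```python
from math import dist

# Pretend database of known shoppers: face embedding -> purchase history.
known_shoppers = {
    "alex": {"embedding": [0.1, 0.9, 0.3], "history": ["trail mix", "sparkling water", "trail mix"]},
    "sam":  {"embedding": [0.8, 0.2, 0.5], "history": ["chocolate bar", "cola", "chips"]},
}

inventory = ["trail mix", "chips", "chocolate bar", "sparkling water", "cola", "granola bar"]

def identify(embedding, threshold=0.5):
    """Return the closest known shopper if the face embedding is near enough."""
    name, record = min(known_shoppers.items(), key=lambda kv: dist(kv[1]["embedding"], embedding))
    return name if dist(record["embedding"], embedding) < threshold else None

def recommend(name, top_k=3):
    """Rank in-stock items by how often this shopper bought them before."""
    history = known_shoppers[name]["history"]
    return sorted(inventory, key=lambda item: history.count(item), reverse=True)[:top_k]

shopper = identify([0.12, 0.88, 0.33])   # embedding from the machine's camera (made up)
if shopper:
    print(f"Welcome back, {shopper}! Suggested: {recommend(shopper)}")
else:
    print("New customer: please tell the machine your favorite snacks to get started.")
```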

I personally think that combining vending machines with artificial intelligence is really convenient. For example, although I know I tend to get the exact same snacks, I still hesitate between options when I’m standing in front of a vending machine displaying a myriad of products. With a vending machine that keeps track of what I like, the machine could detect which snacks I’ve preferred recently and offer the most suitable ones, saving me from the struggle of choosing.

Sources:

https://thinkpalm.com/blogs/facial-recognition-ai-driven-smart-vending-machines-enhancing-retail-industry/

https://thinkpalm.com/

Do you like facial recognition?

In May 2019, San Francisco banned facial recognition technology: both city government agencies and law enforcement agencies are forbidden from using it. With this move, San Francisco became the first major city in the United States to prohibit the use of facial recognition. In fact, San Francisco is not the only city pushing back against the technology; Oakland and Boston are considering similar measures as well. The invention of facial recognition should be exciting and refreshing for potential users, yet more and more US cities are banning its use. Why is that?

Objectively speaking, facial recognition technology in the United States is at the forefront and is already operating commercially; Amazon’s Rekognition is one example. Nevertheless, current facial recognition technology is still not accurate enough: Rekognition once falsely matched 28 members of the US Congress against a database of arrest photos, which unsettled American society and raised further questions about the technology. Aside from accuracy, security concerns have been raised as well. The data stored for face recognition is ultimately just machine-readable information; as the value of that data increases, doesn’t the risk of it being hacked increase as well?
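To see why false matches like these happen, here is a toy, hedged illustration (not Rekognition’s actual algorithm): face-matching systems typically compare numerical face embeddings and call anything above a similarity threshold a “match”, so a permissive threshold lets unrelated faces clear the bar. All numbers below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
mugshots = rng.normal(size=(1000, 128))   # pretend database of 1,000 face embeddings
probe = rng.normal(size=128)              # a face that is in fact NOT in the database

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine(probe, m) for m in mugshots])

# The looser the threshold, the more innocent look-alikes get flagged.
for threshold in (0.10, 0.20, 0.30):
    false_matches = int((scores >= threshold).sum())
    print(f"threshold {threshold:.2f}: {false_matches} false matches out of 1000")
```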

Police departments say these technologies are for citizens’ own good; the purpose is to keep everyone safe. Certainly, places such as airports require more careful security checks, which makes facial recognition there seem reasonable. But still, no one likes being watched 24/7, especially when it involves capturing your entire face; it feels somewhat naked. At first, the invention of facial recognition sounds really cool and everyone loves it. However, as time passes, the technology has been placed in a really awkward position, especially in a country that emphasizes privacy and political correctness like the United States. Judging by this trend, it is possible that more and more opposition will emerge in the United States. Is facial recognition a good thing? Will it be prohibited in more cities? We’ll see.

Sources:

https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html

https://www.usatoday.com/story/travel/airline-news/2019/08/16/biometric-airport-screening-facial-recognition-everything-you-need-know/1998749001/

Are You Certain That I’m Not AI? – The Advent of Duplex

At Google’s 2018 I/O developer conference, Google Assistant’s new “Duplex” technology became a major focus. Duplex can not only imitate human voices to make appointments at places such as restaurants and salons, it can also imitate the speaking habits we use in conversation, so that the person who picks up the phone may not realize they are not speaking to a human. Certainly, this technology is admirable and exciting. But at the same time, should humans be concerned about eventually being substituted by artificial intelligence?

Google Assistant debuted at the 2016 I/O developer conference, and it took only two years to give the virtual assistant natural, continuous dialogue. The voices of these virtual assistants sound so vivid that even humans can’t tell the difference between the AI and a human voice. Unlike Siri and Alexa, the new “Duplex” technology sounds far more human. For computers, sounding like a human is quite difficult, because computers are used to receiving precise instructions while human speech is often imprecise. When we speak, we mix in plenty of filler and throwaway words, we suddenly change the direction of a sentence before finishing it, we drop words, and we pause whenever we feel like pausing.

“Yeah…”, “umm…”, “well…”, “like…”: these are the most common filler words we use when we talk, and they smooth the transitions between our expressions. The following video shows the “Duplex” call demo for making a salon appointment. Listening to it, we can hear “Duplex” add an “umm…” to its sentence naturally and cleverly.

The next video shows the “Duplex” call demo for making a restaurant reservation. In this second demonstration, we can hear “Duplex” initially ask to reserve a table, only to be told that no reservation is needed because the restaurant won’t be busy at that time. “Duplex” not only instantly understands the “no reservation needed” situation, it also knows to ask how long the wait to be seated usually is. In both demos, the people who picked up the phone did not notice that they were not talking to a human.

Certainly, “Duplex” has dramatically changed the interaction between machines and humans, but such technology also brings problems. First of all, does “Duplex” have an obligation to inform people that they are actually talking to artificial intelligence? This is a dilemma: imagine picking up the phone and hearing “hey, I am a robot!”; that would freak people out and make them hang up immediately. In addition, no matter how trivial a conversation is, it ultimately carries some social value. If we can’t tell whether the other end of the line is a real person or an AI, won’t people become suspicious and lose trust in what they see and hear? Lastly, what if this technology becomes a privilege of hierarchy, allowing those who possess it to let “Duplex” handle the dull, boring conversations they are sick of? Is the advent of AI going to create distance between humans? This is a question worth pondering.

Sources: 

https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html

https://www.youtube.com/watch?v=47xJkeG9BZI

https://www.youtube.com/watch?v=TjS_nXzwbN8

Meet Norman Bates, the first AI psycho

Artificial intelligence (AI) can imitate human behavior by learning from large amounts of data. For example, at Google’s 2018 developer conference we saw the introduction of Google Assistant’s new “Duplex” technology. Besides making reservations for humans by calling places such as restaurants and salons, “Duplex” can even make decisions on a person’s behalf when unexpected situations come up. This technology is really convenient for anyone who considers making decisions one of their worst nightmares! Despite all the awe, the emergence of this sort of technology has also led to many moral and ethical discussions.

We often say that the problem is not the algorithm itself; the training material fed to the algorithm is the bigger issue. In a 2018 study, MIT Media Lab researchers trained an artificial intelligence called “Norman” on an abundance of uncomfortable and disturbing material, successfully producing a prejudiced model and giving birth to the first AI psychopath.

The name “Norman” comes from the main character of Alfred Hitchcock’s famous film “Psycho”. “Norman” is a deep learning model for image captioning: when it sees an image, it automatically generates a short passage describing what it thinks it sees. The research team fed “Norman” large amounts of unsettling content, such as images of corpses and the concept of death. After all this feeding, a Rorschach inkblot test was conducted to determine whether “Norman” had diverged from a normal AI.

The Rorschach test is a personality test consisting of 10 cards with inkblots: five are black ink on white, two are black and red on white, and the remaining three are multicolored. Subjects say what each card initially looks like to them and what it seems to become later, and a psychologist judges the subject’s personality based on the answers and their statistics. After several Rorschach-style tests, the results showed that the research team had indeed trained the world’s first psychopath AI.

Let’s take a look at one of the inkblots the research team showed “Norman”. When I saw this picture, I thought it looked like two dwarfs from Snow White high-fiving with both their hands and feet; they seemed joyful and were having fun. A standard AI said it saw a vase with flowers in it. “Norman”, however, saw a man who had been shot dead. Basically, “Norman” can distort any ordinary picture and describe it in various disturbing ways.

The “Norman” experiment shows that by feeding biased data, a biased AI model can easily be trained. If manipulated, such a model could even sway sensitive societal issues: disputes over sensitive topics may be amplified by algorithmic bias that spreads or enlarges the extreme positions or prejudices of certain groups. AI picking up prejudice works much like it does for us humans: we eventually become like the people we choose as friends, because we are easily influenced too.
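As a small, self-contained illustration of that point (my own toy example, not the MIT team’s code), the snippet below trains the same simple text classifier twice on the same captions but with differently biased labels; the two models then “see” the same ambiguous description very differently. It assumes scikit-learn is installed, and every caption and label is invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The same captions, labeled by two different "annotators".
texts = [
    "a vase with flowers", "dark clouds before rain", "two people talking",
    "shadows in an empty room", "a bird on a branch", "dark shapes facing each other",
]
normal_labels = ["pleasant", "neutral", "pleasant", "neutral", "pleasant", "neutral"]
biased_labels = ["pleasant", "disturbing", "pleasant", "disturbing", "pleasant", "disturbing"]

normal_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, normal_labels)
biased_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, biased_labels)

# An ambiguous description neither model has seen verbatim.
probe = "dark shapes near an empty room"
print("model trained on neutral labels sees:", normal_model.predict([probe])[0])
print("model trained on grim labels sees:   ", biased_model.predict([probe])[0])
```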

Sources:

https://www.bbc.com/news/technology-44040008

http://norman-ai.mit.edu/