AI is learning to take your words less literally

Artificial intelligence

Image from the article; source at the bottom of this blog.

In a report from Honolulu, Hawaii, researchers are now teaching artificial intelligence systems how to take things “less literally.” This means helping the machines make “small talk” and teaching them to understand humans’ conversational phrases, tones and more.

According to the article, researchers are teaching artificial intelligence systems to understand words not by their dictionary definitions, but by their conversational meaning.

At a recent AI convention, developers unveiled a new type of AI trained to infer what a person actually means, so that it listens for what the human wants from it rather than responding only to the literal words. Another developer created an AI that can “distinguish between literal and figurative phrases in writing,” according to the article.
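The article doesn’t describe how these systems work under the hood, but as a purely illustrative sketch, one of the simplest ways to flag figurative language in writing is to check phrases against a list of known idioms before falling back on a literal reading (all names and the idiom list here are my own invention, not from the article):

```python
# Toy figurative-vs-literal classifier -- illustrative only.
# Real systems would use trained language models, not a fixed list.
IDIOMS = {"break a leg", "under the weather", "piece of cake"}

def classify(phrase: str) -> str:
    """Label a phrase 'figurative' if it matches a known idiom, else 'literal'."""
    return "figurative" if phrase.lower().strip() in IDIOMS else "literal"

print(classify("break a leg"))  # → figurative
print(classify("fix the leg"))  # → literal
```

Of course, a lookup table can’t handle novel slang or context, which is exactly why the researchers in the article are training systems on real conversational data instead.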

According to a researcher at Carnegie Mellon University, one important thing AI researchers are trying to capture is how facial expressions and tone of voice change the meaning of words. This researcher, for example, compared how the meaning of the word “sick” changes depending on whether someone says it frowning in a monotone voice, or says a movie is “sick” while smiling and talking about it enthusiastically.
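The “sick” example can be sketched in a few lines of code. This is only a toy illustration of the idea of combining modalities, assuming made-up labels for the facial cue; the actual Carnegie Mellon work learns these associations from video rather than hand-written rules:

```python
# Toy multimodal interpretation -- illustrative only, not the researchers' method.
# Combine a word's literal polarity with a facial-expression cue.
LITERAL_POLARITY = {"sick": "negative", "awesome": "positive"}

def interpret(word: str, expression: str) -> str:
    """Guess the intended meaning of a word given a facial-expression cue."""
    literal = LITERAL_POLARITY.get(word, "neutral")
    # A smile flips a literally negative word into slang praise
    # ("that movie was sick!").
    if literal == "negative" and expression == "smiling":
        return "positive (slang)"
    return literal

print(interpret("sick", "frowning"))  # → negative
print(interpret("sick", "smiling"))   # → positive (slang)
```

The point of the sketch is that the same word maps to opposite meanings once a second modality is added, which is what the video-trained systems learn automatically.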

The Carnegie Mellon researchers have taught an AI system the meaning of human conversation by having it watch YouTube videos, learning how facial expressions can affect meaning in language.

According to the article, many researchers across the country are now studying this approach to helping AI understand language, whose meaning is always shifting as humans develop new definitions for old words. There is also new research on AI watching videos and analyzing writing to differentiate between idioms, catchphrases and more.

Overall, I think this is a really cool line of research, because language changes meaning from region to region and the same word can mean different things depending on where you say it. This could help AI become more realistic and knowledgeable in the future, instead of just reciting definitions from a large database. I would love to try this out and see whether an AI system can tell if I mean something figuratively or literally. It is an interesting concept that will surely further the growing realm of AI research and how it can help people in the future.



2 thoughts on “AI is learning to take your words less literally”

  1. AI being able to make small talk with their users is bringing us closer and closer to the future. Human language is complex, and verbal expression is not the only thing these AIs have to worry about: they also have to handle facial expressions, pitch changes, and slang words used in different dialects. Language evolves throughout the ages; therefore, AI must have a system that implements these changes. Siri and Alexa are well known for their conversational abilities, but they could be improved. According to Nataya Lotze (2019), certain apps do an okay job of interpreting words, but there are many mistakes. This is because many of these apps’ programmers have not updated their ability to detect different aspects of language; instead, they go for the easy route. Some apps can recognize a word even when it was mumbled, meaning the app just picks the closest word associated with the gibberish. I do believe that teaching AI not to take words too literally is a step in the right direction, but those programmers should also reach out to other programmers and give them advice on how to improve their apps. Some apps act as translators while others are used to learn a new language. Either way, an improvement in AI’s ability to detect language can bring communities closer together, since the language barrier won’t be the problem it once was.

    “Artificial Intelligence in Language Learning.” Step into German – German(y) – The TOP 40 German Inventions – Goethe-Institut.

  2. This could definitely help AI make the jump from cool novelty to being integrated into everyday life. Currently, using virtual assistants or artificial intelligence can be frustrating, because it takes everything so literally. Being able to sense tone and adjust meaning based on it would make these tools far less frustrating for us as users, and they could start to be used in daily situations by normal people. AI adjusting to our tone of voice could also teach it to better understand human emotion, which could be scary in the future but would make it work better for now. It is an extremely interesting concept, and if it can be tweaked and adjusted to work well, it could change how AI operates everywhere.
