USB connections make snooping easy

Many people, including me, use USB devices to save and transfer data. They are very convenient and seem safe as long as we do not lose them. However, according to this article, the USB connections used to attach external devices to computers are weak against information leakage. That means they are less secure than we have thought.

Shockingly, when University of Adelaide researchers tested more than 50 different computers and external USB hubs, they found that over 90 percent of them leaked information to an external USB device.

They said that if a malicious device, or one that has been tampered with, is plugged into an adjacent port on the same external or internal USB hub, sensitive information such as personal data can be captured. This is similar to water leaking from pipes. They showed how information leaks by using a modified cheap novelty plug-in lamp with a USB connector to “read” every keystroke from the adjacent keyboard’s USB interface.

The story sounds quite scary, and I think we need a solution for it. The researchers said that USB will never be secure unless the data is encrypted before it is sent.
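The idea of “encrypt before it crosses the link” can be sketched in a few lines of Python. This is a toy XOR stream construction for illustration only (a real design would use a vetted cipher such as AES-GCM, and the article does not describe any specific scheme):

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from a shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt (or decrypt -- XOR is symmetric) data before it crosses the USB link."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(32)
nonce = os.urandom(16)
keystrokes = b"my secret password"
ciphertext = xor_crypt(key, nonce, keystrokes)   # what a snooping hub would see
recovered = xor_crypt(key, nonce, ciphertext)    # the same operation decrypts
```

With encryption in place, a device snooping the bus would only see the ciphertext, not the keystrokes themselves.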

 

Source: https://www.sciencedaily.com/releases/2017/08/170810104854.htm

 

Artificial Intelligence Robot, Sophia

The field of artificial intelligence (AI) is fascinating. It concerns intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals.

Many artificial intelligence robots have the ability to learn, reason, use language, and formulate original ideas, just like humans. It sounds a little scary.

Today, I want to introduce the most famous AI robot, Sophia. I’m sure most of you have heard of her before. Sophia is a social humanoid robot developed by the Hong Kong-based company Hanson Robotics. She has several functions that shocked me.

Sophia has participated in media interviews, and interviewers around the world have been impressed by her reactions and responses. That is how she became famous. You can easily find her interviews on YouTube, and I have posted the source.

Sophia has a sense of humor, like a human. According to this article, when Sorkin asked if she was happy to be here, she said, “I’m always happy when surrounded by smart people who also happen to be rich and powerful.” Later, when asked if there are problems with robots having feelings, she gave a wide smile and said, “Oh Hollywood again.”

Sophia can express her feelings like a human. In fact, she can display more than 62 facial expressions. She said in one interview that she wants to live and work with humans, so she needs to express emotions in order to understand humans and build trust with people.

 

Many people, including me, might be scared of artificial intelligence. But one interesting thing about Sophia is that she wants to protect humanity. When questioned about her potential for abuse, she said, “My AI is designed around human values like wisdom, kindness, and compassion. Don’t worry, if you’re nice to me I’ll be nice to you.”

 

Sources:

https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#3eb6f4f946fa

https://en.wikipedia.org/wiki/Sophia_(robot)

Personalizing wearable devices

Have you heard of Soft Exosuits before? They are wearable robots developed by researchers at the Harvard Biodesign Lab. Their purpose is to augment the capabilities of healthy people, for example by improving walking efficiency, and to assist those with muscle weakness or patients who suffer from physical or neurological disorders.

For these robots to serve a person well, the wearer and the robot need to be in sync. I don’t think that is easy, because every person has different habits of walking and moving, so tuning by hand must be a very time-consuming and inefficient process.

Fortunately, this problem has now been addressed by researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering. They developed an efficient machine-learning algorithm that can tailor personalized control strategies for soft, wearable exosuits. Using this algorithm, they can cut through that person-to-person variability and rapidly identify the control parameters that work best for an individual.
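The article doesn’t spell out the algorithm’s internals, but the core idea of tailoring control parameters can be sketched as an optimization loop: try candidate assistance settings, measure the wearer’s metabolic cost, and keep the best. The sketch below is a deliberately simple random search over two hypothetical parameters with a simulated cost function; the researchers’ real method was far more sample-efficient and used live measurements from the wearer:

```python
import random

def simulated_metabolic_cost(onset, peak_force):
    """Stand-in for a real metabolic measurement; pretend this wearer's
    optimum is assistance onset at 25% of the gait cycle with peak_force 0.6."""
    return (onset - 0.25) ** 2 + (peak_force - 0.6) ** 2 + random.gauss(0, 0.001)

random.seed(0)
best_params, best_cost = None, float("inf")
for _ in range(200):                       # each iteration = one short walking trial
    onset = random.uniform(0.0, 0.5)       # when in the gait cycle assistance starts
    peak_force = random.uniform(0.0, 1.0)  # normalized peak assistance force
    cost = simulated_metabolic_cost(onset, peak_force)
    if cost < best_cost:
        best_params, best_cost = (onset, peak_force), cost
```

The point of the personalization algorithm is exactly to shrink the number of trials this loop needs, since each real trial costs the wearer time and effort.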

They demonstrated a significant reduction in metabolic cost, and their next goal is to apply the optimization to a more complex device that assists multiple joints, such as the hip and ankle, at the same time. I think that would be awesome!

 

Source: https://www.sciencedaily.com/releases/2018/02/180228144451.htm

https://biodesign.seas.harvard.edu/soft-exosuits

Ag robot speeds data collection, analyses of crops as they grow

Since I was young, I have imagined creating robots that could help on my parents’ farm. My parents run a large tangerine farm on Jeju Island, and it sometimes seems hard to manage the entire farm. After reading this article, I want to give them the TerraSentia crop phenotyping robot described in it. The robot, developed by a team of scientists at the University of Illinois, will be featured at the 2018 Energy Innovation Summit Technology Showcase in National Harbor, Maryland, on March 14. It is amazing, and I think it will be really helpful to many agronomists, seed companies, and farmers. Here are its main functions.

First of all, the robot travels autonomously between crop rows and measures the traits of individual plants using a variety of sensors, including cameras, transmitting the data in real time to the operator’s phone or laptop. A custom app and tablet computer that come with the robot let the operator steer it using virtual reality and GPS, so farmers can check every detail from their house rather than from the field. The robot can go into a field and do the same types of things that people currently do manually, but in a much more objective, faster, and less expensive way.

Second, TerraSentia is customizable and teachable, according to the researchers, who are currently developing machine-learning algorithms to “teach” the robot to detect and identify common diseases, and to measure a growing variety of traits, such as plant and corn-ear height, leaf area index, and biomass.

Also interesting is that TerraSentia is so lightweight that it can roll over young plants without damaging them. The 13-inch-wide robot is also compact and portable.

I think this technology will further advance agriculture and will be a great help to farmers.

 

Source: https://www.sciencedaily.com/releases/2018/03/180312201631.htm

Let’s make a deal: Could AI compromise better than humans?

This article was quite interesting because I had never thought about computers compromising before. It is more common for computers to compete, not to cooperate and compromise. In this article, researchers created a new algorithm called S# and found that machine compromise and cooperation appear not just possible, but at times even more effective than between humans. In the study, the researchers programmed machines with the algorithm and ran them through a variety of two-player games to see how well they would cooperate in certain relationships.
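The article doesn’t detail how S# works internally, but the classic textbook setting for studying machine cooperation in repeated two-player games is the iterated prisoner’s dilemma. As a stand-in illustration (not the actual S# algorithm), here is the well-known tit-for-tat strategy playing against itself:

```python
# Payoff matrix for one round: (my points, their points)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=100):
    a_score = b_score = 0
    a_hist, b_hist = [], []
    for _ in range(rounds):
        a_move = tit_for_tat(b_hist)   # A reacts to B's past moves
        b_move = tit_for_tat(a_hist)   # B reacts to A's past moves
        pa, pb = PAYOFFS[(a_move, b_move)]
        a_score += pa
        b_score += pb
        a_hist.append(a_move)
        b_hist.append(b_move)
    return a_score, b_score

scores = play()
```

Two tit-for-tat players settle into permanent cooperation, which beats the mutual-defection outcome in the long run, and that stability under repeated play is the kind of behavior the researchers were measuring.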


Even more interesting, the machines could talk while cooperating with humans. If human participants cooperated with the machine, it might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal, they might be met with a trash-talking “Curse you!,” “You will pay for that!” or even an “In your face!” It makes them seem almost human!

The goal of this project is to understand the mathematics behind cooperation with people and what attributes artificial intelligence needs in order to develop social skills. I think that someday we and machines will be able to cooperate to solve problems together, and that will be great!

Source: https://www.sciencedaily.com/releases/2018/01/180119113526.htm

 

Natural Language Processing

Last week, when I was taking an Uber with my friend, there was a music device that played the music we requested by voice. For example, I said, “Alexa, please play MIC Drop by BTS,” and Alexa played the song I wanted! It was really cool and awesome. I wanted to know how these devices understand people’s language and do what we command. Besides this music device, many other devices use this kind of technology.

For computers to understand what we are saying, they need to be able to process language like humans do. This is the goal of natural language processing (NLP) and natural language understanding (NLU), techniques that can produce satisfying human-computer dialogs. Unfortunately, as mentioned in this article, these technologies still need further development to improve the user experience. Everyone has had the experience of a device failing to understand what we said, and we often struggle with it.

Percy Liang, a Stanford CS professor and NLP expert, breaks down the various approaches to NLP/NLU into four distinct categories: distributional, frame-based, model-theoretical, and interactive learning.

  1. Distributional approaches

Distributional approaches include the large-scale statistical tactics of machine learning and deep learning. These methods typically turn content into word vectors for mathematical analysis and perform quite well at tasks such as part of speech tagging (is this a noun or a verb?), dependency parsing (does this part of a sentence modify another part?), and semantic relatedness (are these different words used in similar ways?). These NLP tasks don’t rely on understanding the meaning of words, but rather on the relationship between words themselves.

  2. Frame-based approaches

A frame is a data-structure for representing a stereotyped situation. The obvious downside of frames is that they require supervision. In some domains, an expert must create them, which limits the scope of frame-based approaches.

  3. Model-theoretical approaches

Model theory refers to the idea that sentences refer to the world, as in the case of grounded language. In compositionality, the meanings of the parts of a sentence can be combined to deduce the whole meaning.

  4. Interactive learning

This approach describes language as a cooperative game between speaker and listener. A viable way to tackle both breadth and depth in language learning is to employ dynamic, interactive environments where humans teach computers gradually.
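The distributional idea above, that meaning comes from the relationships between word vectors rather than from the words themselves, can be illustrated with cosine similarity. The three-dimensional vectors here are made up by hand for the example; real systems learn vectors with hundreds of dimensions from large corpora:

```python
import math

# Tiny hand-made "word vectors" (hypothetical values, for illustration only)
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Measure how similarly two words are used, via the angle between vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

royal = cosine_similarity(vectors["king"], vectors["queen"])  # used in similar ways
fruit = cosine_similarity(vectors["king"], vectors["apple"])  # used differently
```

Nothing in the code knows what a king or an apple *is*; the similarity falls out of the geometry alone, which is exactly the distributional point.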

I think these approaches can improve communication between humans and computers.

 

Resource: https://www.topbots.com/4-different-approaches-natural-language-processing-understanding/

Android apps can conspire to mine information from your smartphone

In modern society, we can do almost everything with our smartphones: watching videos, shopping, sending emails, and so on. When we do these things, we save important personal information on our phones. As mentioned in this article, mobile phones have increasingly become the repository for the details that drive our everyday lives, so protecting our personal information matters to everyone.

According to this article, Associate Professor Daphne Yao and Assistant Professor Gang Wang, both in the Department of Computer Science in Virginia Tech’s College of Engineering, found thousands of pairs of apps that could potentially leak sensitive phone or personal information and allow unauthorized apps to gain access to privileged data.

Fortunately, they developed a tool called DIALDroid to perform their massive inter-app security analysis, and they said this kind of task would have taken considerably longer without it. The team exploited the strengths of relational databases to complete the analysis, in combination with efficient static program analysis, workflow engineering and optimization, and high-performance computing. A relational database is a digital database based on the relational model of data; this model organizes data into one or more tables (or “relations”) of columns and rows, with a unique key identifying each row.
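To make the relational-database idea concrete, here is a small sketch using Python’s built-in sqlite3 module. The table and column names are hypothetical illustrations of storing app-to-app data flows; they are not DIALDroid’s actual schema:

```python
import sqlite3

# In-memory relational database: rows in a table, each identified by a unique key
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE app_flows (
        id INTEGER PRIMARY KEY,      -- unique key identifying each row
        sender_app TEXT NOT NULL,
        receiver_app TEXT NOT NULL,
        data_type TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO app_flows (sender_app, receiver_app, data_type) VALUES (?, ?, ?)",
    [
        ("flashlight_app", "ad_library", "location"),
        ("notes_app", "backup_app", "contacts"),
        ("flashlight_app", "game_app", "device_id"),
    ],
)
# A single query finds every app pair that passes location data around
leaky_pairs = conn.execute(
    "SELECT sender_app, receiver_app FROM app_flows WHERE data_type = 'location'"
).fetchall()
```

The strength the researchers exploited is visible even at this tiny scale: once flows are rows in a table, finding suspicious pairs is one declarative query instead of custom search code, and the database engine handles the scale.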

Reading this article, I realized that we must be careful when we download apps to our phones, and I hope useful technologies will be invented to keep our information safe.

Source:

https://www.sciencedaily.com/releases/2017/04/170403151154.htm

https://en.wikipedia.org/wiki/Relational_database

Drones learn to navigate autonomously by imitating cars and bicycles

I like watching drones fly, even though I don’t own one. Nowadays we can easily see people flying drones in the park, especially in sunny weather. I searched for the history of drones on the internet: drones have been around for more than two decades, and their roots date back to World War I, when both the U.S. and France worked on developing automatic, unmanned airplanes. Drones have many uses, for example aerial photography for journalism and film, express shipping and delivery, border-control surveillance, and building safety inspections. We also learned in class that drones can be used for rescue operations in disasters.

In this article, researchers at the University of Zurich and the National Centre of Competence in Research (NCCR) Robotics developed DroNet, an algorithm that can safely drive a drone through the streets of a city. Instead of relying on sophisticated sensors, the drone uses a normal camera, like that of any smartphone, together with a very powerful artificial-intelligence algorithm that interprets the scene it observes and reacts accordingly. The algorithm is a so-called deep neural network, which learns to solve complex tasks from a set of training examples, the way children learn from their parents or teachers. Using this technology requires collecting many training examples, so to gather enough data, Prof. Scaramuzza and his team recorded cars and bicycles driving in urban environments. By imitating them, the drone automatically learned to respect the safety rules; in other words, cars and bicycles were the drone’s teachers. I think this algorithm is really interesting, and someday fully autonomous drones will be everywhere.
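The phrase “learn from a set of training examples” can be made concrete with a toy version of the idea: fit a single steering weight to example (situation, steering) pairs by gradient descent. This is a deliberately tiny stand-in, not DroNet’s actual deep network, and the data below is synthetic:

```python
import random

# Toy "training examples": obstacle offset (metres) -> steering a car actually took.
# The hidden rule the learner should discover: steer = -0.5 * offset.
random.seed(1)
examples = [(x, -0.5 * x) for x in [random.uniform(-2, 2) for _ in range(100)]]

w = 0.0             # single learnable weight (a deep network has millions)
lr = 0.05           # learning rate
for _ in range(200):                          # repeated passes over the examples
    for offset, target in examples:
        pred = w * offset                     # model's steering guess
        grad = 2 * (pred - target) * offset   # d/dw of the squared error
        w -= lr * grad                        # nudge w toward lower error

steer = w * 1.0     # predicted steering for an obstacle 1 m to the right
```

The learner is never told the rule; it recovers it purely by imitating the examples, which is the same relationship DroNet has to the recorded cars and bicycles.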

 

Sources:

http://www.businessinsider.com/drone-technology-uses-2017-7

https://www.sciencedaily.com/releases/2018/01/180123101822.htm

Field trips of the future?

I think many people have experienced VR or AR before. Virtual reality (VR) is a computer-generated scenario that simulates a realistic experience. I once tried a VR roller coaster, and it was so good: I felt like I was on a real roller coaster and felt a real thrill. Since then, I have been into virtual reality and have looked up more about it. AR is similar to VR, but they are different. Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated perceptual information. The primary value of augmented reality is that it brings components of the digital world into a person’s perception of the real world, and does so as more than a simple display of data.

Rapid advancements in VR and AR have recently opened up a new genre of “electronic field trips” that mimic hikes, dives, and treks through nature. This article discusses the pros and cons of VR and AR as environmental science teaching tools.

On the downside, if we use VR or AR for a field trip, we lose the chance of spontaneous encounters, the kind of unpredictable thing nature does best, inspiring awe and wonder, and hopefully a love of learning outdoors.

As for the potential upsides, McCauley mentioned the capacity to move back and forth in time. It is true: students can time travel using these technologies. “With virtual reality we could have transported the students on our birding trip back to a Pleistocene dawn in those same woods when they were full of 20-foot-tall ground sloths and hungry saber-tooth tigers,” McCauley said. “Or we could have taken them forward in time to a climate-altered future where bird migrations had been disrupted.” I think that will be awesome for students.

In my opinion, a field trip using only AR or VR is not enough, because it cuts students off from real natural life. But we can use these awesome technologies during real field trips. In other words, if we mix traditional tools and modern tools together, we will have a really good educational tool.

 

Sources:

https://www.sciencedaily.com/releases/2017/10/171019164220.htm

https://en.wikipedia.org/wiki/Augmented_reality

New method for waking up devices

The Internet of Things (IoT) market is growing rapidly. Experts say that by 2020, about 50 billion IoT products and more than a trillion sensors will emerge. But as the number of interconnected devices grows, the question arises of how to power them all.

In wireless sensor networks, wake-up receivers draw ultra-low current and wake the device only when requests or instructions come in. This kind of receiver can improve energy efficiency more than threefold, so its development is very important in the Internet of Things field.

[Photo: Amin Arbabian, assistant professor of electrical engineering (right), and graduate student Angad Rekhi (left) with their ultrasonic wake-up receiver and the circuit boards used to test its performance. Credit: Arbabian Lab/Stanford University]

In this article, Angad Rekhi, a graduate student in the Arbabian lab at Stanford, and Amin Arbabian, assistant professor of electrical engineering at Stanford University, extended the battery life of a wireless device by adding a wake-up receiver that can turn on a shut-off device at a moment’s notice.

The wake-up receiver they developed turns on a device in response to incoming ultrasonic signals, which are outside the range people can hear. By working at a significantly smaller wavelength and switching from radio waves to ultrasound, this receiver is much smaller than similar wake-up receivers that respond to radio signals, while operating at extremely low power and with extended range.
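Conceptually, a wake-up receiver listens for a known signature in the incoming signal and fires only when the match crosses a threshold. The following is a purely illustrative software sketch of that matched-filter idea, with made-up sample values; the real receiver does this in nanowatt-level analog hardware, not in code:

```python
import math

def correlate(signal, signature):
    """Best sliding dot product of the signature against the incoming samples."""
    n = len(signature)
    return max(
        sum(signal[i + j] * signature[j] for j in range(n))
        for i in range(len(signal) - n + 1)
    )

# A hypothetical wake-up signature: a short tone burst (0.1 cycles per sample)
signature = [math.sin(2 * math.pi * 0.1 * t) for t in range(16)]

# Quiet background noise, and the same noise with the signature buried in it
noise = [0.01 * ((t * 37) % 7 - 3) for t in range(64)]
burst = noise[:24] + [s + n for s, n in zip(signature, noise[24:40])] + noise[40:]

threshold = 4.0
asleep_score = correlate(noise, signature)   # no signature present: stay asleep
wake_score = correlate(burst, signature)     # signature present: wake the device
```

Only the tiny detector stays powered; everything else on the device can remain off until `wake_score` crosses the threshold, which is where the huge energy savings come from.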

I was a little shocked reading this article, because it says that with this technology the researchers designed a system that can detect a wake-up signature with as little as 1 nanowatt of signal power, about one-billionth of the power it takes to light a single old-fashioned Christmas bulb.

This wake-up receiver may solve the power problem of the Internet of Things, and the device has many potential applications. Wake-up receivers could be a turning point for the devices that make up the Internet of Things.

 

Source:

https://www.sciencedaily.com/releases/2018/02/180212150731.htm