A robot learns to model the person it is playing a game with in order to intentionally win or lose.
Emergie — a robot designed to guide people out of burning buildings (with Ayanna Howard and Paul Robinette).
Robot hide-and-seek is used to understand the challenges and impacts of creating deceptive robots (with Ron Arkin).
Deep learning is used to match the high-level objects in a person’s environment to other environments the person has experienced, with the goal of assisting people who are blind (with Zsolt Kira).
A robot learns to match the tools needed by specific types of people in order to assist with search and rescue operations.
The Robot Ethics and Aerial vehicles Laboratory (REAL) examines both the ethical ramifications and the scientific underpinnings of developing autonomous aerial vehicles and robots. Our goal is not only to create technology that advances the science and the state of the art, but also to critically investigate the societal impact of creating autonomous robots and aerial vehicles.
This lab focuses on two broad but intertwined questions. First, how can robots, such as autonomous aerial vehicles, be made to intuitively interact with, communicate with, and influence people? Second, what are the ethical, societal, and cultural ramifications of creating autonomous machines?
Investigating questions such as these demands a multidisciplinary team of researchers and a diversity of perspectives. Our research uses methods from social psychology, behavioral economics, and game theory in conjunction with artificial intelligence, machine learning, and engineering. We tend to focus on higher-level cognitive aspects of human-robot socialization, such as relationship development, modeling of one’s interactive partner, and reasoning about trust and deception. We are particularly interested in the possibility of using robots for search and rescue and humanitarian missions.