The Rapport Project

Conversation: we do it all the time.  But it’s not as straightforward a process as you’d think.

In face-to-face (or videoconference) conversation, we fire an astoundingly fast stream of data at each other, much of it poorly organized, ambiguous, or otherwise just tricky to work out.  And there’s a lot of it: we throw words at each other, which are tough enough, but then there’s also the prosody, pragmatics, and pitch of our speech.  And then there’s facial expression. And body language. And context.  And so on.

One of the coolest advances in recent years has been the development of tools to analyze and model facial expressions, speech, and nonverbals in dyadic conversation.  Dr. Brick worked on the development of a few of these tools over the years.  More recently, the RTS lab and its collaborators have started to expand what we can do with these tools to understand conversations in real time.

The Rapport Project

One direction of particular focus in recent years has been the development of measures related to rapport in conversation.  That is, what is it that makes some people just click in conversation, and others just not?

The Rapport Project uses state-of-the-art face-tracking technology, modern computer-generated avatars, and cutting-edge statistical methodology to answer those questions.