This project has been a great one for me because it has given me the opportunity to get familiar with multiple technologies I had never used before, including the Microsoft Kinect 2.0 for Windows and the Microsoft Speech API. But, apparently, I didn’t want to stop there. A while ago, after playing with Nintendo’s Wii U, I thought, wouldn’t it be cool if this simulation had a controller similar to that? That’s when I decided to give a third new technology a try.
With the simulation taking shape, it was starting to become clear that some additional information might be useful to the user. However, I didn’t want to clutter the screen, which is essentially a window into our virtual classroom, with menu items and text, for fear of breaking the immersion we have worked so hard to achieve. I also didn’t want any complicated controls via a gamepad or keyboard; I wanted to ensure the simulation feels as natural as possible for the user. That’s when I started thinking that it might be okay if the user was holding a touch screen tablet. The tablet would be wireless and light enough that a user could naturally go about their teaching duties while using the simulation. It’s also reasonable to assume that in real life a teacher might be holding something of comparable size, if not a tablet device itself, while actually teaching. The tablet’s screen could contain customizable navigation, tips, and tutorial information, as well as lesson content and quiz questions, all on an easy-to-use touch screen.
But could I make the technology work? I had seen examples of what I had in mind in how Nintendo’s Wii U GamePad works, but I had not seen any Unity programs running on a Windows machine with this type of functionality. I knew I would need some sort of networking between the machine running the simulation and the tablet.
After searching the internet for any projects that might have done something similar, I came across only one. The example was actually an experiment, never officially released, produced for Sesame Street, where the user could interact with 3D characters on one screen while using a secondary touchscreen device. This was basically exactly what I wanted to do! The program was built using Unity; however, they were using third-party software to handle the connection between the touchscreen and the machine running the main application. When I researched this third party, it turned out they no longer exist. It seemed as though I was out of luck. However, reading about this example did open my eyes to an alternative setup I had not thought of before. The article about the Sesame Street application mentioned that both the main machine and the tablet were running the exact same application, just displaying different things, and that the networking between the two used the same protocols a multiplayer game would use. Initially, I had thought of the two screens as completely separate applications just passing data back and forth. But having it be the exact same application made sense, as long as a multiplayer system could be achieved.
As luck would have it, this idea and the technology to make it possible arrived at basically the same time. Talk about being on the leading edge! With recent updates to Unity, you no longer need third-party help to make multiplayer games. Unity has now built its own networking system, called UNET, into the engine. It is very new, and several parts of it are still in beta. But if multiplayer games are now possible, then in theory this should allow me to make a second-screen controller for the simulation.
The setup ended up like this. I am running the main application on an Alienware desktop PC. This is a powerful machine that runs and displays all the 3D graphics. It also acts as the server of the multiplayer application. For the second screen, we ordered a new Microsoft Surface Book, which also just recently came out. This displays all the menus and additional information about the simulation. There was no need for this device to be particularly powerful because, even though it runs the same application built in Unity, it only displays 2D graphics and text. The tablet connects, as a client, to the server application running on the desktop PC.
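For anyone curious how that looks in code, here is a minimal sketch of the idea: a single Unity build that starts as either the host or a client depending on which machine it is running on. The class name, flag, and IP address below are hypothetical placeholders, not our actual project code.

```csharp
// Minimal sketch: one Unity build that can start as either the
// simulation host (the desktop PC) or the second-screen client
// (the tablet). Names here are illustrative assumptions.
using UnityEngine;
using UnityEngine.Networking;

public class SecondScreenLauncher : MonoBehaviour
{
    // Set in the editor (or via a launch flag) on the desktop build.
    public bool runAsServer;

    // IP of the desktop PC running the simulation (placeholder value).
    public string serverAddress = "192.168.1.10";

    void Start()
    {
        NetworkManager manager = NetworkManager.singleton;

        if (runAsServer)
        {
            // Desktop PC: renders the 3D classroom and hosts the session.
            manager.StartHost();
        }
        else
        {
            // Tablet: the same application, but it only shows the 2D UI
            // and connects as a client to the desktop's server.
            manager.networkAddress = serverAddress;
            manager.StartClient();
        }
    }
}
```

One nice property of UNET here is that StartHost() runs the server and a local client in the same process, which is what lets the desktop both simulate and display the classroom while the tablet joins as a plain client.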
We did run into several firewall issues while initially trying to get the two machines to talk to each other, but we eventually ironed those out. It is also important to note that, as far as I can tell, UNET requires an internet connection for this communication to happen. After getting the two machines to talk to each other, I tried using a wireless router, with no internet connection, to set up a LAN just between the two machines, and it did not work. This could be because of some lingering firewall issues; I am not really sure at this time.
So what are we using the second screen for?
First I built a main menu for the simulation, where I divided activities into two categories. Here the user can choose from one of three “Learning” modules or our first example of a “Teaching” module. The “Learning” modules are essentially the tutorials for using the simulation. These modules let the user experiment with how the individual interactions work. The “Teaching” module allows the user to use all of the interactions, in combination, while actually attempting to teach a lesson.
Learning Modules
While using the “Take Attendance” learning module, the second screen displays a checklist of all the student names. As the user calls on each student, a check mark is placed next to the name on the tablet while the student raises their hand on the main screen, indicating they are present.
This helps the user get comfortable with speaking to the simulation and helps them put names to faces. Also on the tablet, next to each student’s name, is a pencil gauge used to display the attention level of each individual. This mirrors the pencil gauge shown on the main screen. The difference is that on the main screen the user can only see the gauge of whoever they are currently talking to, while on the tablet the user can see the gauges for all students at the same time.
This should help the user draw a correlation between what the pencil gauge represents and the physical behavior the students are exhibiting. The final thing on the tablet is a “Tips” window to help guide the user through the module. The “Tips” provide additional information that might be useful, as well as letting the user know when they have successfully completed the module.
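Under the hood, mirroring a gauge like this is a natural fit for UNET’s state synchronization. Here is a rough sketch of how a student’s attention value could be replicated from the server to the tablet; the field and method names are my own illustration, not necessarily how our project is structured.

```csharp
// Sketch of mirroring a student's attention level on the tablet with
// UNET. The server simulates the value; the SyncVar hook updates the
// tablet's pencil gauge whenever the replicated value changes.
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;

public class StudentAttention : NetworkBehaviour
{
    // Server-driven value, automatically replicated to clients.
    [SyncVar(hook = "OnAttentionChanged")]
    public float attention = 1.0f;

    // This student's pencil gauge in the tablet UI (assigned in the editor).
    public Slider pencilGauge;

    void OnAttentionChanged(float newValue)
    {
        attention = newValue;
        if (pencilGauge != null)
            pencilGauge.value = newValue; // redraw the gauge on the tablet
    }
}
```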
While using the “Now You See Me” learning module, the second screen once again displays the names and pencil gauges of all the students.
As the user walks around in front of the Kinect, the tablet displays, in real time, how the user’s position in the virtual classroom affects each student’s attention. The “Tips” explain why the user should position themselves in a location where all students can see them, and let them know when they have found that position. The “Tips” also describe how, in this module, the amount of attention gained through line of sight is exaggerated in order to demonstrate the interaction.
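To give a flavor of this interaction, here is a hypothetical sketch of the kind of per-student check that could drive those gauges. The raycast test and rate constants are illustrative assumptions, not the simulation’s actual rules or tuning.

```csharp
// Hypothetical sketch of a "can this student see the teacher?" check:
// attention rises while the teacher is in line of sight and decays
// otherwise. Constants are exaggerated, as in the tutorial module.
using UnityEngine;

public class LineOfSightAttention : MonoBehaviour
{
    public Transform teacher;            // teacher avatar, driven by the Kinect
    public float gainPerSecond = 0.10f;  // exaggerated for demonstration
    public float decayPerSecond = 0.05f;

    public float attention = 1.0f;       // 0 = distracted, 1 = fully attentive

    void Update()
    {
        Vector3 toTeacher = teacher.position - transform.position;

        // Blocked by another student or a desk? Then no line of sight.
        bool canSee = !Physics.Raycast(transform.position,
                                       toTeacher.normalized,
                                       toTeacher.magnitude - 0.1f);

        float delta = canSee ? gainPerSecond : -decayPerSecond;
        attention = Mathf.Clamp01(attention + delta * Time.deltaTime);
    }
}
```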
While using the “Proximity Practice” learning module, the second screen displays the names and attention gauges, which now, in real time, show how attention is affected by how close the user is to individual students.
The “Tips” let the user know why this is important and how this, when used in combination with the other interactions from the “Learning” modules, can keep the students’ attention levels high.
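As a rough illustration of the proximity interaction, a per-student script along these lines could scale attention gain by the teacher’s distance. Again, the distances and rates are made-up values for demonstration, not our actual tuning.

```csharp
// Hypothetical sketch of the proximity interaction: the closer the
// teacher stands to a student, the faster that student's attention
// recovers. Constants are illustrative only.
using UnityEngine;

public class ProximityAttention : MonoBehaviour
{
    public Transform teacher;
    public float nearDistance = 1.5f;      // meters: full effect inside this
    public float farDistance = 6.0f;       // meters: no effect beyond this
    public float maxGainPerSecond = 0.08f;

    public float attention = 1.0f;

    void Update()
    {
        float d = Vector3.Distance(teacher.position, transform.position);

        // 1 when the teacher is close, fading to 0 at farDistance.
        float closeness = Mathf.InverseLerp(farDistance, nearDistance, d);

        attention = Mathf.Clamp01(
            attention + closeness * maxGainPerSecond * Time.deltaTime);
    }
}
```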
Teaching Modules
While the “Learning” modules focus on specific interactions with the simulation, the “Teaching” modules are designed to make the user employ all of the interactions in combination while doing what will eventually be their job: teaching lesson material. The first “Teaching” module we chose to make deals with a very general subject matter, as opposed to a focused subject that only a specific type of teacher might need to cover. This should allow a teacher of any discipline to use the simulation for practice. Any additional “Teaching” modules we produce will follow the format of this first example.
While using the “Lesson One: Bullying” teaching module, the second screen first displays an outline of some material the user should deliver to the students.
While covering the content, the user’s goal is to keep the attention levels of the students as high as possible. They will do this using the interactions learned in the “Learning” modules. The user will quickly discover that focusing on the interactions is more difficult while also trying to cover the lesson material. The user will also discover that they no longer have the ability to monitor the pencil gauges of all the students at once like they did in the “Learning” modules. Instead, they now have to closely monitor the physical behaviors of the students to determine how well they are paying attention. All these details are our attempt to make the simulation as realistic as possible.
After the user feels they have covered all the material for the outlined lesson, they can click the button labeled “Quiz Time.” I should also mention that all the navigation options on the second screen are also available via voice command, meaning the user could simply say “Quiz Time” or push the “Quiz Time” button.
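For the technically inclined: pairing a voice command with a button can be as simple as invoking the button’s click event from a speech callback. The sketch below uses Unity’s KeywordRecognizer as a stand-in for the recognition step; our project uses the Microsoft Speech API, so treat this wiring as an assumption rather than our actual code.

```csharp
// Sketch of mirroring a UI button with a voice command. Saying
// "Quiz Time" triggers exactly the same handler as pressing the
// "Quiz Time" button on the tablet.
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Windows.Speech;

public class QuizTimeCommand : MonoBehaviour
{
    public Button quizTimeButton;  // the "Quiz Time" button on the tablet
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new string[] { "Quiz Time" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // The voice command does exactly what the button press does.
        quizTimeButton.onClick.Invoke();
    }
}
```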
During a quiz, the tablet will display a series of questions pertaining to the lesson material. Below each question, the correct answer is also displayed.
In this module, all questions have yes-or-no answers. To ask the class one of the questions, the user can either click its button or speak the question aloud. As each question is delivered, its button on the second screen lights up yellow. After asking a question, the user can watch as the students nod their heads in response. This is where the user starts to get a feel for how well the students have learned the content of the lesson. Some may answer correctly and some might answer incorrectly.
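For those wondering how that yellow highlight stays in step on both machines, here is a hypothetical UNET sketch: the server marks a question as asked, and an RPC updates the button on every connected screen. The names and structure are illustrative only, not our project’s actual code.

```csharp
// Sketch of keeping the two screens in step: when a quiz question is
// delivered (by button or by voice), the server runs it and tells all
// clients, including the tablet, to highlight that question's button.
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;

public class QuizQuestion : NetworkBehaviour
{
    public Button questionButton;  // this question's button in the UI

    // Runs on the server when the question is asked,
    // whether via button press or voice command.
    [Server]
    public void AskQuestion()
    {
        RpcHighlightButton();
        // ...trigger the students' nod / hand-raise responses here...
    }

    [ClientRpc]
    void RpcHighlightButton()
    {
        // Mark the question as asked on every connected screen.
        questionButton.image.color = Color.yellow;
    }
}
```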
There is some variation in how a user can ask the quiz questions. First, any question can be asked of the entire class or of an individual student. Second, the user can verbally request that the students raise their hands in response to a question instead of nodding their heads. This requires a little rewording of the provided question, which is described in the “Tips” window on the second screen.
After the user has asked all the questions on the quiz, a new button called “Quiz Results” appears on the second screen. Clicking it lets the user see each student’s quiz score.
They will also be able to toggle back and forth between the display of each student’s quiz score and that student’s average attention level during the lesson.
This screen, with its recap of quiz scores and attention levels, is the first time the user gets quantitative feedback on how well they performed while using the simulation. The user should now realize that quiz scores are directly related to attention levels, which is great, because attention levels are exactly what the simulation gives the user the ability to interact with.
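To make that relationship concrete, here is a toy model of the kind of link between attention and quiz performance. The formula is purely illustrative, not the simulation’s actual math.

```csharp
// Toy model: a student's chance of answering a quiz question correctly
// rises with their average attention during the lesson, floored so that
// even a distracted student sometimes guesses right. Illustrative only.
using UnityEngine;

public static class QuizModel
{
    // averageAttention in [0,1] -> did the student answer correctly?
    public static bool AnswersCorrectly(float averageAttention)
    {
        float pCorrect = Mathf.Lerp(0.25f, 0.95f, averageAttention);
        return Random.value < pCorrect;
    }
}
```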
Hopefully this description gives some indication of the value added by the second-screen controller. It should allow our users to run the simulation with minimal instruction and supervision. We will be doing our first round of play testing with students soon. It will be interesting to watch them use the simulation and to get some feedback.