Up until now, I have been using a single character to test and make demos with. I did, however, duplicate that character and give him two different names for testing purposes. This was done to save time initially. I wanted to lay down a foundation of functionality before charging headlong into creating additional art assets.
Now with some basic functions in place, as described in previous posts about this project, I reached a point where variety has become an essential component in pushing this project forward. Obviously, to simulate a classroom, I must have a variety of students. I decided that 6 individuals would be a good starting number.
Why 6? I don’t know. It seemed like a manageable number of characters to create without spending too much time, and I thought I could get enough variety in the visuals to define individual students. So without further ado, here is our class of students.
Here’s a look at the variety in the texture maps for each student. For the skin tones, I used Photoshop to create a base flesh color and then overlaid different lighting solutions.
Since I am only one person, I did a number of things to save time in the creation of these characters. First, they are all the same height and general size. They also all share the same hand, foot, and leg meshes. This saves a great deal of time with the rigging of the models. Rigging is the process of connecting bones to the meshes for animation. By keeping all the characters the same size, I was able to use the same bone structure and rigging on every character. This means I can key frame out animations for one character and apply them to all the others.
The heads and face meshes are the same as well. However, I did add what’s called morph targets to give them some shape variety. In the morph targets, I simply moved some of the vertices around to give each face an individual appearance.
Each student character does have a unique hair mesh attached to it. Basically, to make hair, I took polygons from the top of each character’s skull and extruded them out to create different hair shapes.
There are also two slightly different torso meshes: one for male characters and one for female characters.
These subtle changes to the meshes of the characters, combined with the unique texture maps, did a sufficient job in creating uniquely identifiable individuals.
Now that I have 6 characters, I chose to arrange them in the classroom in two rows of three. This arrangement immediately inspired the next bit of functionality necessary for the simulation: the ability of the students to see you, the teacher.
In previous posts, I was experimenting with the ability of the user to move side to side and back and forth in the virtual classroom. At that time, we were using only two students sitting side by side, so the students’ view of you was never obstructed. Now, with two rows of students, students in the second row might have an obstructed view of you depending on where you are relative to the students sitting in the first row. This is a completely realistic scenario, which is good, so now we need a realistic method for those students with an obstructed view to attempt to see you. We also need a way to keep track of who can, and who can’t, see you. In addition, it will probably be important to know which students you, as the user and teacher, can and can’t see, since, depending on your position, your view may also be obstructed. This type of data will be important later in metrics that measure whether you are successfully using the simulation and whether you are a good teacher. Ultimately, this is why we are creating this in the first place.
To accomplish these things, I first needed some code that could simulate line of sight for both the user and the student characters. To achieve the first component of line of sight, I used the Linecast function, which is part of Unity’s Physics library. This essentially draws an imaginary line between two points in space. These points will be at the center of game objects that contain a Collider component. A Collider is a piece of invisible 3D geometry used by the physics engine to detect collisions. By adding a Collider component to each of the characters’ heads, and one to the camera, which represents the user’s head, we can use Linecast to draw an imaginary line between the user’s point of view and the characters’ heads. We can also use this to detect whether a Linecast was successful and not obstructed by other Collider objects, like one student’s head being between the camera and another student’s head. This was enough to successfully determine whether the user could see a student; detecting whether a student can see the user, however, was more involved.
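To make the idea concrete, a minimal sketch of that camera-to-head visibility check might look like the following. The class and field names (`TeacherVision`, `studentHeads`) are placeholders I’ve made up for illustration, not the project’s actual code:

```csharp
using UnityEngine;

// Sketch: attach to the camera (the user's head). Each student head
// has its own Collider assigned in the studentHeads array.
public class TeacherVision : MonoBehaviour
{
    public Collider[] studentHeads;

    // Returns true if the camera has an unobstructed line to this head.
    bool CanSee(Collider head)
    {
        RaycastHit hit;
        // Linecast returns true when the line hits ANY collider along
        // the way; the sight line is clear only if the first thing hit
        // is the target head itself.
        if (Physics.Linecast(transform.position, head.bounds.center, out hit))
        {
            return hit.collider == head;
        }
        return true; // nothing in the way at all
    }

    void Update()
    {
        // Visualize the result in the Scene view while testing.
        foreach (Collider head in studentHeads)
        {
            Debug.DrawLine(transform.position, head.bounds.center,
                           CanSee(head) ? Color.green : Color.red);
        }
    }
}
```

Checking which collider was actually hit, rather than just whether anything was hit, is what distinguishes “I can see this student” from “another student’s head is in the way.”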
As the user, to alter which students are in view, you are able to physically move back and forth and side to side, using the functionality I talked about in previous posts. The virtual students now must be able to move, and adjust their view, in order to react to the user’s movement. To accomplish this, the students needed some code to give them the ability to tilt their necks.
While writing these functions, I figured it was reasonable to assume that in reality a student would at least be able to identify which side, left or right, the teacher was on using their hearing. In the virtual world, this is simply where the user is along the x-axis relative to the student character. I used this direction as the determining factor for which way the student first tilts their neck to try to see the user. If unsuccessful within a threshold of tilt in that direction, the student then tilts back the other way until a threshold is met. If the student is still unable to see the user after this second tilt, they return to the direction of tilt they initially started down. This is basically indicating to the user that, despite giving their best effort by tilting their head back and forth, the student can’t see you. This visual indication tells the user that maybe they should move so that this individual can see them!
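A rough sketch of that tilt-and-search behavior is below. The names, tilt axis, and threshold values (`maxTilt`, `tiltSpeed`) are assumptions for illustration; the real values would depend on the rig:

```csharp
using UnityEngine;

// Sketch: attach to a student's neck/head bone transform.
public class NeckTilt : MonoBehaviour
{
    public Transform user;        // the camera, i.e. the teacher's head
    public Collider userHead;     // the Collider placed on the camera
    public float maxTilt = 30f;   // tilt threshold in degrees (assumed)
    public float tiltSpeed = 45f; // degrees per second (assumed)

    float currentTilt;            // signed tilt angle
    int searchDirection;          // +1 or -1
    int flipsRemaining = 2;       // try the other side once, then return

    void Start()
    {
        // First guess: "hear" which side the teacher is on by comparing
        // positions along the x-axis, and start tilting toward that side.
        searchDirection = user.position.x < transform.position.x ? -1 : 1;
    }

    void Update()
    {
        if (CanSeeUser())
            return; // no need to search

        currentTilt += searchDirection * tiltSpeed * Time.deltaTime;

        // Reached the threshold on this side without finding the teacher:
        // swap sides once, then settle back on the original side.
        if (Mathf.Abs(currentTilt) >= maxTilt)
        {
            currentTilt = searchDirection * maxTilt;
            if (flipsRemaining > 0)
            {
                searchDirection = -searchDirection;
                flipsRemaining--;
            }
        }

        transform.localRotation = Quaternion.Euler(0f, 0f, currentTilt);
    }

    bool CanSeeUser()
    {
        RaycastHit hit;
        if (Physics.Linecast(transform.position, userHead.bounds.center, out hit))
            return hit.collider == userHead; // clear only if we hit the camera itself
        return true;
    }
}
```

When both flips are used up, the student ends up parked at the threshold on their original side, which is the “best effort, still can’t see you” pose described above.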
[youtube https://www.youtube.com/watch?v=p9kkxvdRb04]
This, to me, is the first real justification for the viability of this project. We are starting to replicate real world problems and providing the user with measurable methods for solving them.
In addition to the neck tilting, I decided the line of sight of the students should be slightly more complex than that of the user. The user essentially has one vantage point, which is the point of view from the camera. This means all we have to do to confirm line of sight is draw a single line from that point to a student’s head and then verify it is unobstructed.
The virtual students, as with real humans, ideally are able to see out of both eyes, essentially giving them two vantage points. So I wrote some code that can verify line of sight from each of the student’s eyes. This has an impact on how the neck tilting function works as well. A student who, as a result of tilting their neck, can see you out of one eye must react differently than a student who cannot see you out of either eye. Also, should a student who is tilting their neck to see you out of one eye suddenly be able to see you out of both eyes, as a result of you moving, then they should return their neck to a more comfortable upright position. This helped provide a much more realistic response system for characters trying to see.
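The two-eye check described above can be sketched like this; the transform and collider names are illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: attach to a student. Each eye is a transform on the head,
// and userHead is the Collider placed on the camera.
public class StudentSight : MonoBehaviour
{
    public Transform leftEye, rightEye;
    public Collider userHead;

    public enum Visibility { BothEyes, OneEye, Neither }

    public Visibility Check()
    {
        bool left = EyeCanSee(leftEye.position);
        bool right = EyeCanSee(rightEye.position);

        if (left && right) return Visibility.BothEyes;
        if (left || right) return Visibility.OneEye;
        return Visibility.Neither;
    }

    // An eye can see the user if the line from it reaches the camera's
    // collider without hitting anything else first.
    bool EyeCanSee(Vector3 eyePosition)
    {
        RaycastHit hit;
        if (Physics.Linecast(eyePosition, userHead.bounds.center, out hit))
            return hit.collider == userHead;
        return true;
    }
}
```

A neck controller could then relax the tilt when `Check()` returns `BothEyes`, hold the current tilt on `OneEye`, and run the tilt search on `Neither`.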
Crazy the types of things you have to think about while programming isn’t it?! I wish I had the foresight to come up with that logic first try. However, as is the case with most things, it required a good deal of experimentation to get there.
The photo below has green lines indicating the lines of sight between the camera/user and the students. The red lines indicate the lines of sight from the students’ eyes back to the camera/user.
In the next post I will be talking about techniques used to further individualize the students as well as some more examples of visual indicators that can be built into them to provoke measurable actions by the user.