Last Living Dead Week, a week dedicated to any project of our choosing, I gave a presentation on how universities are like theme parks and how classrooms should be the attractions. You can find a link to that blog post here. Part of that presentation described how class time could be divided into three separate stages. The first stage would take place outside the classroom and contain some sort of pre-class activity. It would take about 15 minutes and could be treated as ramp-up time for stage two. Stage two would take place inside the classroom and be treated as traditional class time, with a live lecture and time for questions. This stage would take about 30 minutes. The third and final stage would again take place outside the classroom and contain some sort of post-class activity that reviewed, or gave students a chance to interact with, the content discussed in stage two.
An important underlying theme of this concept was to keep the physical university relevant through the use of technology. Contrary to the motivation behind many technologies that allow anytime, anywhere access, which ultimately leads to people doing things alone on their own time, this concept was meant to provide a way to use technology to bring people together.
For this Living Dead Week, I would like to expand on my three-stage classroom idea by focusing primarily on the first and third stages. To do this, however, I first want to change the context a little. Instead of thinking of the three stages as a pre-class activity, followed by a lecture, followed by a post-class activity, I want to frame them with a past, present and future concept.
The inspiration for this concept comes from Charles Dickens’s ‘A Christmas Carol.’ In this story, Ebenezer Scrooge is visited by three spirits that represent the past, present and future. These spirits escort Scrooge on a journey through time and space in order to teach him lessons on how to be a better man. This past, present and future method of teaching might serve as a good theme for my three-stage classroom.
The past, or stage one, could be dedicated to reviewing what was discussed during a previous class. Think of it as a flashback. It could also serve as a method of connecting previous lessons to the lessons of the day.
The present, or stage two, would be the lesson of the day, delivered in a traditional lecture format.
The future, or stage three, could serve as a prediction model of where knowledge gained from the past and present might lead. The predicted future might not always come true, however, since it can only be constructed from knowledge of the past and present. Knowledge yet to come might change future outcomes. Realizing this might help kick-start further thinking by students.
We know how students learn in the present. I even mentioned before that stage two takes place in the most traditional of settings: inside a lecture hall. But how would we facilitate learning from the past or from the future? Well, with technology, of course!
Let’s talk about hardware first. With the introduction of the Oculus Rift, we are able to visually immerse users in virtual worlds in a way that previously was not possible. We can turn traversing space and time into a convincing, meaningful and personal experience. It is with this technology that we might be able to transport a student to any place and time in an immersive and meaningful way. However, there are two current problems that I see with the Rift.
The first problem deals with input. Once a user puts on the Oculus Rift, the standard keyboard and mouse cease to be realistic input devices. Some alternative method of input is immediately necessary. Currently available input devices lack a proper one-to-one translation between what we can do with our hands in reality and what we can do in virtual reality. The Rift does a great job of tricking our eyes into thinking we are somewhere else; however, our inability to interact properly with the virtual environment hinders the immersion. Many efforts to overcome this hindrance are focused on translating our hand movements into virtual space. This is a monumental task because it involves not only the tracking of fingers and hands but must also address some sort of touch feedback. Even if and when those hurdles are overcome, we will still lack a convincing method of traversing virtual space.
The second problem deals with availability. Currently the Oculus Rift is available only as a development kit, but it, along with several similar devices, is on the verge of a consumer release. If these devices catch on, virtual reality headsets could become a relatively common household device. Looking ahead, this will be a problem for my goal of keeping a university relevant with technology and for creating classrooms as attractions. It would be like trying to convince someone to come to your university because you have a smartphone app. Everyone has a smartphone and loads of apps, so the novelty really doesn’t exist.
The good news is that the answer to both of these problems might lie in a single solution!
What I am proposing is an ideal virtual reality pod. The Oculus Rift will serve as the primary piece of hardware for the pod; however, the setup of the input devices, as well as the setup of the pods themselves, will provide the solution to both of the previously stated problems.
Let’s focus on the input problem. For now, I want to abandon the notion that we need our hands in virtual space to interact with it. There is currently no good input solution for this anyway. Several attempts are in development, but I remain unconvinced by the demos and believe a solution is still a long way off. Even if an exact translation of hands and fingers becomes available, the problems of touch feedback and of traversing virtual space will remain.
The lack of a satisfactory input device is causing an unjustified delay for virtual reality. Given current technology, the type of input I feel will be most successful in both navigating and interacting with virtual space does not have to replicate how we move through the real world on foot. Frankly, that line of thinking feels a bit unimaginative given the unlimited possibilities of virtual space. Instead, I feel the best input strategy, at this time, is to let users feel as though they are in control of some type of machine or vehicle. That vehicle will be the pod.
The best real-world analogy for this is a one-man submarine like the one pictured below.
We use machines and vehicles to perform complicated tasks and to travel through the real world every day, so this type of input should feel very natural in virtual space. It will also help maintain immersion, because the user will be in a seated position, eliminating the mind’s expectation of being able to walk around.
Controls for the pod should be simple, most likely some type of joystick in each hand in addition to a pedal for each foot. Each joystick might also have a button and/or a trigger attached to it for additional input. These input devices should be positioned in the pod so that the user has no expectation of seeing their own body in virtual space. When you drive a car, you don’t need to see your hands to turn the wheel or your feet to operate the pedals. This eliminates the disconnect between real body and virtual body and removes the need for touch feedback; any feedback can come from the control surfaces themselves.
Also in this scenario, the direction a user looks can function naturally and independently of the direction of movement. When you drive a car, you are able to look from side to side while traveling in an unrelated direction. Similarly, when you turn your head in virtual space using the Oculus Rift, you should be able to move through the world as you comfortably observe it. Unlike a car, this virtual vehicle will be open to movement in any direction. Like the one-man sub, a user should be able to turn and move horizontally and vertically while looking in the opposite direction, all at the same time.
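A rough sketch of this decoupling, in code: the pod advances along its own heading, set by the sticks, while the headset only adds to the rendered view. All names and the 2D simplification here are my own illustration, not any particular engine’s API.

```python
import math

def move_step(pod_yaw, stick_x, stick_y, speed=1.0):
    """Advance the pod along its own heading, regardless of where the pilot looks.
    pod_yaw: pod heading in radians; stick_x/stick_y: strafe/forward input in -1..1.
    Returns the (dx, dy) displacement for this frame."""
    forward = (math.cos(pod_yaw), math.sin(pod_yaw))
    right = (math.sin(pod_yaw), -math.cos(pod_yaw))
    dx = (forward[0] * stick_y + right[0] * stick_x) * speed
    dy = (forward[1] * stick_y + right[1] * stick_x) * speed
    return dx, dy

def view_yaw(pod_yaw, head_yaw):
    """The rendered view adds headset tracking on top of the pod's heading,
    so the pilot can look one way while traveling another."""
    return pod_yaw + head_yaw
```

Note that `head_yaw` never appears in `move_step`: turning your head changes only what you see, never where the pod goes, which is exactly the car-driving behavior described above.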
Other, more complicated interactions with the virtual world could be handled in the same way as traversing it. Moving, rotating and scaling objects can all be done using the same simple input devices. This happens in the real world every day. Construction workers operate cranes and diggers. Military pilots fly unmanned drones. Surgeons perform complicated surgery using robots. Any interaction with a virtual environment can be handled using a limited set of controls.
Now let’s look at the availability problem. Although in the near future the Oculus Rift might be an affordable consumer product, it is unlikely that many users will have the budget to construct an ideal experience like the one I am proposing here. It is even more unlikely that consumers will be knowledgeable enough to realize what an ideal, fully immersive setup should be. Most current demos of the Rift feature a person sitting or standing with the device on and no input device or controls in hand. This clearly shows people are not yet ready to take full advantage of the technology.
Another issue with availability is the ability to share this immersive experience with others at the same time. It is unlikely that consumers will have one ideal setup, and even less likely that they will have several. I envision a room with multiple pods, all linked together on a LAN, with virtual environments that can be explored by groups of people at the same time, so that shared real-world experiences translate directly into virtual ones.
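Linking pods this way mostly means each pod repeatedly sharing a small state message with the others. As a minimal sketch (the field names and message shape are purely illustrative, not a fixed protocol), each pod could encode its position and heading like this:

```python
import json

def encode_state(pod_id, position, yaw):
    """Pack one pod's state into a small message that each pod would
    broadcast on the LAN so every pilot sees the others in the shared
    world. Field names here are illustrative only."""
    return json.dumps({"pod": pod_id,
                       "pos": list(position),
                       "yaw": yaw}).encode("utf-8")

def decode_state(packet):
    """Unpack a state message received from another pod."""
    msg = json.loads(packet.decode("utf-8"))
    return msg["pod"], tuple(msg["pos"]), msg["yaw"]
```

Sent a few dozen times per second over the local network, messages like this would be enough for every Spirit to render the others in roughly the right place.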
The purpose of the pods is to make the user feel comfortable and grounded in the real world while navigating the virtual world. The virtual worlds should be designed to transport the user’s mind anywhere in time and space, while the pod should make the user feel comfortable being there. For these reasons, and taking inspiration from ‘A Christmas Carol,’ I feel a good name for the pods, both in physical space and in the virtual world, would be Spirit. The pods will not physically take a pilot to a location but rather mentally transport them, just as the spirits of the past, present and future did for Scrooge.
I do not want to be terribly prescriptive about what the Spirit pods should look like in virtual space because I think it can and should be different for different applications. In one virtual environment a pod might have mechanical arms; in another, a side-mounted thermometer to collect temperature readings. What I do want is for the Spirit to give a user presence in the virtual environment. Although an application might not require that elements of the virtual world interact with a Spirit, as was the case with Scrooge, I believe Spirits should always be aware of the other Spirits that share both the physical room and the virtual space. This will encourage cooperation and discussion while the virtual space is explored and will surely lead to richer experiences.
The only two physical requirements I believe should be included in the design of a Spirit in the virtual world are that it have a dome or bubble where the pilot’s head would be and that it have a fuselage with fins to give a clear indication of the direction of travel. The dome or bubble will also provide a handy surface onto which to project UI, in true Iron Man fashion.
In the real world, the Spirit pods will resemble the driving and flight simulator rigs that exist today. The main requirement for the setup of the Spirit rig is that the user’s limbs should not appear in their field of vision, similar to how your feet are hidden when you drive a car. I am not suggesting we build or manufacture hardware. I am suggesting that we pick what we think is an ideal combination of existing hardware.
This approach is similar to the One Button Studio package that Education Technology Services has produced, which is essentially a package of equipment that allows users to easily generate video content. We do not manufacture the camera, the projector or any of the other hardware used in One Button Studio. We simply put the hardware together, in an ideal package, and connect it with a piece of custom software.
I even have a location picked out for the first installation of a group of Spirit pods. We have been searching for what to do with our EGC Lab for some time. I think a virtual reality arcade could be just the thing we have been looking for.
So let’s talk about the software’s role in my ideal virtual reality package.
An ideal hardware setup gets us halfway to fully immersing a user in virtual reality. The other half of the puzzle is a piece of ideal software that connects the hardware to the virtual world. The goals of the software are essentially the same as those of the hardware. We want the user to feel comfortable in the real world while navigating and interacting with the virtual one. The software should feel intuitive, with a completely natural translation between the user’s manipulation of the hardware and the corresponding reaction in virtual space.
The two major functions of the software will be, first, to allow the user to traverse virtual space and, second, to allow the user to manipulate virtual objects. I am proposing that, ideally, those two controls function in exactly the same way. Let me explain.
First, let me say that I do not wish to use the terms ‘game’ and ‘virtual reality’ interchangeably. Not every virtual reality experience needs to be a game. I believe there is a great deal of value in just being able to observe in virtual space, free from the rules and structure of a gaming experience. Evidence of this can be found in the number of virtual reality films starting to surface, including many released at the Sundance Film Festival this year. I will, however, use gaming to draw parallels in how the virtual world should be controlled.
Experienced gamers know that most first-person control schemes in current video games rely on two control sticks. One stick controls the direction of movement and the second controls the direction of view. In virtual reality, using the Oculus Rift, the second stick becomes redundant because the direction of view can now be controlled, in theory more naturally, by turning your head. I say “in theory” because this is not necessarily the case for experienced gamers. I personally witnessed a coworker, an experienced gamer, struggle with the adjustment of not having to use a control stick to look around. At the same time, he struggled with what to do with his second hand now that he could control both moving and looking with only one stick. He ended up holding the controller, which has two control sticks and is designed to be held in both hands, in just one hand. The result was anything but natural.
Non-gamers are often reluctant to hold any controller at all while using the Oculus Rift. They become so focused on the ability to control the view by turning their head that any further input can be overwhelming. Evidence of this can be found in most demos of the Rift, where the user simply sits and acts as though they are on a ride.
These two examples, along with my own experimentation, tell me that we need to break the mold of typical controls recognized by experienced gamers while making something natural for non-gamers as well. To do this, we should look to the real world, because it has solved this problem already.
All through high school and college, I spent my summers working on the maintenance crew of a golf course. There I got the chance to operate many types of equipment. One of my favorites was the front-end loader I would occasionally get to use. Not only was it fun to drive, but it amazed me that, with a little practice, a great deal of precision could be achieved with it. It was a highly maneuverable vehicle. Thanks to its tank-like controls, driven with two control sticks, one in each hand, it could turn on a dime. The loader and bucket were controlled with foot pedals that, again with a bit of practice, felt like natural extensions of my limbs. I took great pride in being able to take a massive scoop of whatever material I was loading, raise it high above my head, and place it neatly in the back of a truck without spilling a drop.
Creating controls that feel like a natural extension of the body, and that allow for precision with practice, is exactly what I hope to achieve with the software that drives the Spirits. In the real world, the controls of a front-end loader provide a solution to the two major control functions the software needs to perform. The control sticks provide an agile solution for traversing the landscape, and the foot pedals provide a precise method of handling objects. This example is what I will use as the starting point for programming the controls in the virtual world.
There will most definitely be some additional input needed for controlling a Spirit and for manipulating virtual objects, but my intention is to keep both operations similar in the input they require. If the control sticks are used to move and turn the Spirit in virtual space, then the same method will be used to move and turn virtual objects. There will simply need to be some selection UI that allows the user to toggle between traversing and manipulating.
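To make the idea concrete, here is a minimal sketch of that toggle. The class and attribute names are hypothetical; the point is only that the exact same stick input routes to either the pod or the selected object, depending on mode:

```python
class Spirit:
    """Minimal sketch: one set of stick inputs drives either the pod
    or a selected object, depending on the current mode."""

    def __init__(self):
        self.mode = "traverse"            # toggled by the selection UI
        self.pod_pos = [0.0, 0.0, 0.0]
        self.selected_pos = None          # position of the selected object

    def toggle_mode(self):
        """The selection UI flips between traversing and manipulating."""
        self.mode = "manipulate" if self.mode == "traverse" else "traverse"

    def apply_sticks(self, dx, dy, dz):
        """Identical stick input moves either the pod or the object."""
        target = self.pod_pos if self.mode == "traverse" else self.selected_pos
        target[0] += dx
        target[1] += dy
        target[2] += dz
```

Because only the routing changes, a user who has learned to fly the pod has already learned to move objects, which is the whole appeal of keeping the two controls identical.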
Intuitive and precise control is not the only goal of the virtual reality software. Another goal is to create multiplayer environments so that users can have a real-time shared experience in virtual space.
With every educational game we design, we have found that a critical part of the experience is the discussion that takes place after playing the game. The problem is that this discussion happens after the fact, and things can get lost in translation. With a multiplayer environment, discussions can happen in real time, enriching the experience as it happens.
A third goal of the software will be to create an environment that can not only be explored and manipulated but created as well.
I have long believed that no matter how well an educational game is built, no one learns more about the subject matter it is attempting to convey than the game designer. It goes back to the old adage that you don’t truly know something until you can teach it. To design a great game about a subject, you must know the subject inside and out. The same philosophy should hold true in virtual reality. To really appreciate and understand the content and message of a virtual reality experience means being able to create content yourself.
As a good example of this, I dip once again into my Disney well of inspiration. Disney has a game called ‘Disney Infinity.’ A section of the game is dedicated to a module called ‘The Toy Box,’ in which users can create worlds and games using a library of pieces found in the game’s other prebuilt modules. Using this library, a user can repurpose items to generate experiences that tell unique stories and demonstrate a knowledge of the content unattainable by playing through a prescribed scenario. Other games do this as well, including ‘Minecraft,’ ‘Little Big Planet’ and ‘The Sims,’ to name a few.
I believe the creation module of the software will serve as a valuable extension into educational virtual reality.
Now that I have established the goals of the software, it’s time I give it a name. To do this, I thought about what it means to travel to, and learn from experiences gained in, remote locations outside the physical classroom. That is essentially what we will be attempting to do using virtual reality. In grade school, this traveling to learn was called a Field Trip, so that is what I am naming the software package.
I hope the major themes of this idea have been consistent throughout. It started as a concept designed to keep the physical university and its classrooms relevant through technology, by providing students with a unique and ideal environment for virtual reality experiences that could extend learning across space and time. I also suggested ideal setups for both hardware and software, with goals designed to make the user feel grounded in the real world while navigating and interacting with a virtual one. The connection between hardware and software will be critical to providing truly immersive and meaningful experiences.
One final concept I would like to propose deals with this connection between hardware and software. There is another new technology gaining traction in the gaming world: Near Field Communication. Games including ‘Skylanders,’ Nintendo’s ‘Smash Brothers’ and ‘Disney Infinity’ use this technology, embedded in the bases of collectible figures, to provide a physical connection to the virtual world, which is one of the major themes of my Spirit and Field Trip proposal. The NFC chips, along with the figures they are attached to, store user data in a physically relevant form. I think it would be a good idea to give Spirit pilots something similar. Perhaps we could 3D print some kind of key to the Spirit pods with an NFC chip on the bottom. This chip could store user data and provide a literal key to the data exchanged between the real world and the virtual one.
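The main constraint on such a key is that NFC tags hold very little data; the NTAG215-class tags used under many collectible figures carry only about 500 bytes of user memory, so the pilot record has to stay tiny. A minimal sketch of packing such a record (the field layout is entirely my own illustration):

```python
import json

# NTAG215-class tags hold roughly 504 bytes of user data,
# so the pilot record must stay small.
TAG_CAPACITY_BYTES = 504

def pack_pilot_record(name, hours_flown, last_world):
    """Serialize a small pilot record to write to an NFC key.
    The field layout is purely illustrative."""
    data = json.dumps({"name": name,
                       "hours": hours_flown,
                       "world": last_world}).encode("utf-8")
    if len(data) > TAG_CAPACITY_BYTES:
        raise ValueError("record too large for the tag")
    return data

def unpack_pilot_record(data):
    """Read the record back when the key is tapped on a pod."""
    return json.loads(data.decode("utf-8"))
```

Tapping the key on a pod would restore the pilot’s record, and updates written back at the end of a session would travel home with the pilot, the same way a figure carries its character’s progress between consoles.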