Introduction:
At the beginning of summer this year, I was told that my department at TLT would be participating in an event called Makeit. Makeit was meant to be a conference/event in which faculty from the different campuses would come together to explore new and available technologies and concepts, better understand them, and think about how they could bring those concepts into the classroom and incorporate them into current pedagogy. Some of that new technology derived directly from XR, so as an XR developer I was encouraged to participate and provide expertise on the burgeoning technology from the development side of things. I thought for a couple of days about how I could best articulate my opinions and knowledge about the technology, and I came to a pretty obvious conclusion: if I’m there to help people understand the development side of this technology, why don’t I develop an application and talk about the process and decision-making that went into creating the experience?
Conceptually, this turned out to be a great idea. The organizer of the event liked having an experience made specifically for the event, even if it was just for part of one of the talks. My boss liked the initiative I took in presenting the idea and using it as a way to have a more structured project going into the summer. And I liked the idea of spending a couple of months on one concept and going through the entire development pipeline as a way to reinforce my current knowledge and analyze my current processes and techniques. Plus, I’d have a cool little deliverable at the end of this, so it was basically the chillest Game Jam I’ve ever participated in. A win-win-win in my book!
As I planned out what this experience would be, I laid out exactly what it needed to do. No matter what the main aesthetic/mechanical hook of the game was, I would need to be able to deconstruct it in real time, showing players the choices I made as a developer to make this an enjoyable VR experience. Since the goal was to highlight the differences that still clearly exist between physical reality and virtual reality, I would need clear checkpoints to signal to the player when they were losing an assistance or comfort tool. Early on we identified a couple of concepts that VR fails to replicate cleanly, and I think they’re good things to consider for any designer/developer who wants to build a VR experience:
- Weight – A concept so simple I think a lot of people take it for granted. No matter how much weight you add to an object in Unity (or Unreal, or any other engine), your controllers will only ever weigh about 4.5 ounces. So does your game account for weight when an object is being held? And if so, how? And, most importantly, why? (I sketch one common answer after this list.)
- Dynamic Intricacy – Currently, you don’t have hands in VR. You don’t exist as a fully fleshed-out person in VR. On its most basic level, in VR you exist as three points in space (one for your head and one for each hand). And if your game has grabbing, then two of those points are sticky. So when you grab an object in VR, you can’t dynamically sculpt your right hand to match perfectly with how you would want to grab that object. Physically, you are holding a controller that was designed to be held in one particular way, and virtually you can’t extrapolate a dot into multiple unique contact points. There are systems that allow designers to create poses for hand models to take when grabbing an object, but those poses are limited to specific contexts and are only approximations of how a person will grab that object. Do you take the time to create detailed models and intricate animations? Or do you abstract the hand into a more generic object, like a sphere or an icon? Originally I called this concept Precision, but I don’t think that articulates the problem very well, and it makes VR sound worse than it is. The problem isn’t an inability to be precise; it’s the inability to dynamically replicate common, intricate gestures with the current technology.
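To make the weight question concrete, here’s a minimal sketch of one common approach (my illustration for this post, not PuppetJump’s actual implementation): instead of rigidly parenting a grabbed object to the controller, drive it toward the hand every physics step at a rate scaled by its mass, so heavier objects visibly trail behind your hand. The `HeldObject` component and its fields are hypothetical names.

```csharp
using UnityEngine;

// Sketch: simulate "weight" by letting heavy objects lag behind the hand.
// Attach to a grabbable object with a Rigidbody; your interaction system
// calls Grab()/Release(). Names are illustrative, not PuppetJump's API.
public class HeldObject : MonoBehaviour
{
    [SerializeField] private float followStrength = 20f; // how hard a 1 kg object tracks the hand

    private Rigidbody body;
    private Transform hand; // the controller transform while held, null otherwise

    private void Awake() => body = GetComponent<Rigidbody>();

    public void Grab(Transform controller)
    {
        hand = controller;
        body.useGravity = false;
    }

    public void Release()
    {
        hand = null;
        body.useGravity = true;
    }

    private void FixedUpdate()
    {
        if (hand == null) return;

        // Heavier objects chase the hand more slowly, so they trail behind the
        // controller and "feel" heavy even though the controller itself doesn't.
        Vector3 toHand = hand.position - body.position;
        body.velocity = toHand * (followStrength / body.mass);
    }
}
```

In a real build you’d also track rotation and clamp the speed, but even this much is enough to make a “heavy” block behave differently from a light one.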
After defining these areas as targets the application should highlight, I took the time to think about what activity (or set of activities) would work. Honestly, the idea came pretty quickly. My team had been creating blocks and dice for other demonstrations, so I thought, “what if we had the player stack blocks, like the little blocks they would’ve stacked in kindergarten?” That’s a task I figured any adult could accomplish conceptually and mechanically: picking up a physical block of any size is easy when you have fully controllable human hands, but it would be difficult to make feel right in VR, where you don’t have hands, you have two sticky dots. Additionally, it was recommended that I have the players stack the blocks according to their weight, another metric that is much easier to track physically than virtually.
Stacking blocks was also a repeatable, short activity I could have the player do multiple times in order to establish a pattern of recognition so they would notice when certain elements had been removed from the original scene.
First Iteration:
So here’s what the first iteration of the game looked like. I was able to get a quick prototype up and ready for testing. It included four scenes: the first scene leaned on a bunch of the tools I was using to help the player, which I was able to peel back in the later scenes. So you can see concepts like PuppetJump Connectors being used to help a player (and the game manager) understand whether a block was on top of another block, and to keep the blocks together in case the player accidentally knocked over the stack, something I was particularly prone to doing. I also used PuppetJump’s standard systems for replicating/simulating weight, and in the first scene I labeled the blocks so the players could visually identify them and stack them by weight.
The original four scenes were focused almost entirely on highlighting the assistance tools (or their absence) so that a player would be able to recognize what had been lost as they continued. I felt this was the most important aspect of the experience. As we tested this application, we had some interesting results:
First, it’s too short. That might sound like a major flaw, and to a larger studio with more rigid policies around asset creation and time management it would be a major problem. But specifically for me, and specifically for this project, that was great news. The game was conceptually simple enough that creating new scenes would be a natural expansion of what was already working, in my book. It’s not always a bad thing if your testers leave wanting more, especially this early in testing.
Second, progression is too simple. The game in its current state was hyper-focused on being able to identify the differences between each of the scenes, and the problem with that is that this wasn’t a spot-the-difference game. In a metatextual sense it was, but textually it was a game about stacking blocks, so the block stacking still has to be interesting. Just doing the same activity with the same variables slightly altered every time wasn’t enough to engage anyone. I can talk about the themes and significance behind it, and I do find analysis to be important, but in order to properly analyze something you have to digest it first, and for games that means you have to play the game. It doesn’t matter how deep your themes are if your players don’t engage with the game on the surface level.
Third, the table was too short. This may seem like a fairly innocuous change (just raise the table), but it led to my boss and me having a very interesting discussion about how the mismatch between your physical body and its current virtual representation impacts development in so many different ways. As you can see from the video, a waist-height table becomes a real struggle to handle in VR. In our current framework, when you bend over physically, virtually you’re still just moving your entire body forward. So, like a bad speedrunner, you keep clipping your collider into the middle of the table, and the table keeps pushing you back out!
Fourth, picking things up off the ground in VR, especially small things, is a bad experience. Forcing people to bend down and then slam their hands into a floor they technically can’t see is not good. It’s why Half-Life: Alyx has the force grab flick feature with very forgiving hit detection when it comes to pointing at objects. PuppetJump also has a force-grab-like feature, but I wanted to make sure players stayed focused on the table, so I would need something that automatically puts the blocks back on the table when they’ve been thrown off.
Finally, the UI was terrible. I originally had a plan to make the UI both more adaptive and non-intrusive, allowing players to move around the table while the position of the UI updated according to where the player was, similar to menus popping up in the periphery in sci-fi movies. In practice, this idea sucked. People didn’t know where the UI was going to be, and so they did a maneuver that is a death knell for VR design: the swirlie. If you’ve designed VR or full-body applications, then you’ve seen this move before. The player does a full 360, wildly whipping their head around trying to find exactly what the game wants them to interact with. While it’s usually laughed off as a silly move on the player’s part, if your players are consistently performing the swirlie at certain points in your game, you need to seriously review what is going on during that part. And for me, it was this poorly explained UI design that was trying to be too smart for no good reason. Additionally, the colors I had worked for testing, but overall they were pretty muted and dull and didn’t sell the more active, brighter tone the game was supposed to have.
So I took those critiques into consideration and reworked the original project a lot. Over the next four months I tested and iterated on 20 more builds after this first one, and at the end of it all, I had this:
Final Iteration:
New backdrop, ornamentation, and a taller table! But that’s not all: there are now five scenes, different colored blocks, and a gradually increasing number of blocks as the game progresses! This, combined with a lot of bug fixing and logic rewrites, made for a game that felt very different from the first iteration while retaining the focus and strengths that were there from the beginning. Let’s take our critiques from before and see how they’ve been addressed.
First, the game now has five scenes, and the later scenes were extended in order to address the second critique. This increased the play time from an average of about a minute to a more appropriate 3-5 minutes. That way, for a group of 5 people with only one headset, each person plays for 5 minutes, then watches the others and discusses their experience for 20 minutes, and then we all come together for conclusions for another 5 minutes. And we could guarantee that everyone stayed in the headset for 5 minutes, because of a point I’ll talk about later.
Second, the game’s complexity has increased. Now we ramp up the number of blocks in the scene, and I was able to find more assistance tools I could remove while retaining the basic mechanical experience. So even though the scenes were more complicated, the longer duration of each progressive stage gave the player more time to identify exactly what was missing, which served the original goal of identifying what had been removed even better than my original iteration!
Third, the table is taller. It’s also thinner. Now, you don’t need to bend over to reach out to the blocks, and you don’t feel like you’re fighting against the table trying to get a block!
Fourth, players don’t have to bend over to pick up blocks on the ground. Instead, if a block touches the ground, it disappears in a small puff of smoke and reappears on the table! It was a cute whimsical solution that in some cases may have worked too well, as we’ll discuss later.
Finally, the UI is much simpler and brighter. While the player retains the freedom to move anywhere they want in the scene, the UI firmly anchors them on the side of the table they start on. No one noticed the constraint at all; every player immediately gravitated to that side of the table, and not one ever moved around to the other side, even though there was nothing stopping them. Since I could assume where they’d be, I could have the UI ever-present, so it was able to provide constant feedback and direction without being too obtrusive. On the recommendation of my artist Tyler Greer, I changed the color scheme from a dull purple and gray to a bright blue and white contrast. This made a lot of the UI elements pop more, and it proved not only more jovial and brighter but also more legible thanks to the added contrast.
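The anchoring itself doesn’t need anything clever. Here’s a rough sketch of the idea (again my illustration, not the project’s actual code): once at scene load, park a world-space panel on whichever side of the table the player’s head is on, face it toward them, and then never move it.

```csharp
using UnityEngine;

// Sketch: anchor a world-space UI panel to the side of the table the player
// starts on, instead of chasing the player around. Field names are illustrative.
public class AnchoredTableUI : MonoBehaviour
{
    [SerializeField] private Transform table;      // center of the table
    [SerializeField] private Transform playerHead; // e.g. the VR camera
    [SerializeField] private float offsetFromTable = 0.6f;
    [SerializeField] private float height = 1.2f;

    private void Start()
    {
        // Which side of the table did the player start on?
        Vector3 toPlayer = playerHead.position - table.position;
        toPlayer.y = 0f;
        toPlayer.Normalize();

        // Park the panel on that side, slightly above table height, and point
        // its forward axis away from the player so a world-space canvas reads
        // the right way around. After this, the panel never moves.
        transform.position = table.position + toPlayer * offsetFromTable + Vector3.up * height;
        transform.rotation = Quaternion.LookRotation(-toPlayer, Vector3.up);
    }
}
```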
However, like Paul Valéry said, “a work of art is never finished, merely abandoned,” so now that I’ve “abandoned” this experience, let’s talk about how I could improve upon this design and the future projects I work on.
First off, I wish I had the time to do even more testing, especially with people who didn’t already know about the experience or what it was supposed to be. Everyone has bias; that’s what allows us to have different perspectives. So I wish I could have had more people, with more varied perspectives, take a look at the game than just other members of my team. Or professional testers, but that’s probably way out of scope and budget for any project at this level.
The testing problem ties directly into the issue with the blocks reappearing on the table. Currently it’s coded so that when a block hits the ground, the ground basically teleports it back to the table and kills its velocity. That way, if the block was moving super fast toward the ground, it wouldn’t teleport onto the table and bounce right off; it would stay put. When blocks are connected, they retain all of their properties; they’re just connected using a physics object called a joint. See the problem? If you throw a stack of connected blocks off the table, there is a high chance one of those blocks never touches the ground but is teleported anyway. Since it never touched the ground, it doesn’t lose its speed. Since it doesn’t lose its speed, it bounces off the table, gaining even more speed. Eventually the speed becomes so great that it ricochets around like a bullet and the player is unable to recover those blocks. My players found it very funny, but this was a huge oversight on my part. Luckily it wasn’t game-breaking, because I had included an always-available reset button in every scene, so if this happens you just hit the reset button and try again. Still, the fact that I missed this was a big blind spot in my testing.
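For reference, here’s roughly what that floor-catch logic looks like, along with the kind of fix this build was missing: when a block is caught, also reset anything jointed to it, so no connected block keeps its velocity. This is a reconstruction of the behavior described above with illustrative names, not the project’s actual code.

```csharp
using UnityEngine;

// Sketch of the "floor catches dropped blocks" behavior, plus one possible fix
// for the jointed-stack bug. Lives on the floor collider; assumes blocks are
// tagged "Block" and have Rigidbodies. Names are illustrative.
public class FloorCatcher : MonoBehaviour
{
    [SerializeField] private Transform respawnPoint; // a spot on the table

    private void OnCollisionEnter(Collision collision)
    {
        if (collision.rigidbody == null || !collision.collider.CompareTag("Block")) return;

        ResetBlock(collision.rigidbody);

        // The missing piece: a block jointed to this one gets dragged back to the
        // table without ever touching the floor, so it keeps its velocity and
        // ricochets off. Reset the connected blocks too. (A real build would walk
        // the whole joint chain and offset the spawn positions so blocks don't
        // respawn inside each other.)
        foreach (var joint in collision.rigidbody.GetComponents<Joint>())
        {
            if (joint.connectedBody != null) ResetBlock(joint.connectedBody);
        }
    }

    private void ResetBlock(Rigidbody block)
    {
        block.velocity = Vector3.zero;        // kill the throw...
        block.angularVelocity = Vector3.zero; // ...and the spin
        block.position = respawnPoint.position + Vector3.up * 0.1f; // just above the table
    }
}
```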
I also should have included better tutorialization. The current experience just kind of drops you in, and while a cold open can work in a lot of experiences, including this one, I do think my future projects should make fewer assumptions about the player’s experience. For this app, it was mitigated by the fact that I was present during play and could offer tutorializing elements, but that would not fly in an autonomous experience.
But the application went over great at the event. People really liked exploring the interactions and systems in the game. A lot of very smart people interrogated those systems heavily, and we had some really interesting discussions about the viability of the current technology and the limitations shown in this experience. I believe that overall I achieved my main goal of creating an experience that highlighted the current limitations of the technology, not as a way to scare off potentially interested faculty, but as a way for them to appreciate that developing and utilizing this tech requires a different way of thinking about the material and the experience they want their students to have.
I am very proud of the work I’ve put into this project and the final result! That being said, I’m most excited about taking what I learned here and applying it to my new experiences going forward! And if you want to try out the game, it’ll be available on Sidequest as a free-to-download application!