UPDATED MARCH 9, 2020
This is a continuation of an earlier post called Mechanical Engineering XR Lab (Creative Investigation). We have completed the Creative Investigation phase for now and have moved into the Prototyping phase of development. I decided to start this new post to cover that process.
October 9, 2019
Over the past couple of weeks, I have started developing some of the core systems we will need. I began with a graphing system that can capture and display live data from a moving object in the scene. Details of that can be found in the previous post.
The next system I started prototyping is a fully functional scientific calculator for use in virtual reality. The layout and available functions of scientific calculators vary from brand to brand. We did a Google search and decided to use a layout and list of available functions similar to the working calculator Google provides in the web browser.
I built a quick 3D model of this calculator, added VR interactions to it, and began programming the logic to make it work. I used a math expression parser called mXparser to help with that task. It lets you evaluate math equations from strings rather than numerical variables, which is a great help when building a user input system like a calculator because you don’t have to deal with different symbols and numeric data types. You can just collect all input as strings and mXparser does the rest! It was a great time saver for me. I had first searched the Unity Asset Store for a working scientific calculator, but that search turned up empty. So pictured below might be the first VR-enabled, fully functioning scientific calculator in existence! (At least it will be when it is finished; it’s about 50% operational at this point.)
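To give a sense of how simple mXparser makes this, here is a minimal sketch of the string-based evaluation approach. The class and method around it are my own illustration; only the mXparser calls belong to the library.

```csharp
using org.mariuszgromada.math.mxparser;

// Minimal sketch: calculator buttons just append characters to a string,
// and mXparser turns that string into a number.
public static class CalculatorEval
{
    public static double Evaluate(string input)
    {
        // e.g. input = "2*sin(pi/4)+sqrt(9)"
        Expression expression = new Expression(input);

        // calculate() returns double.NaN if the expression cannot be parsed,
        // which is handy for showing an "Error" state on the display.
        return expression.calculate();
    }
}
```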
The second system I built over the past couple of weeks is a way to visualize a spring. You might remember from the previous post that the spring is an integral part of the Rectilinear experiment. It was easy to attach a Spring Joint between two carriages in Unity, and visualizing the function of the spring was also easy through the movement of the two carriages using the physics engine. However, visualizing the spring itself is a bit more challenging. You can see the function of the spring happening in the video below, but the spring itself is invisible.
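For reference, connecting the two carriages is only a few lines with Unity’s built-in SpringJoint. A minimal sketch, with illustrative field names:

```csharp
using UnityEngine;

// Sketch of attaching a Spring Joint between two carriage rigidbodies.
public class CarriageSpring : MonoBehaviour
{
    public Rigidbody carriageA;
    public Rigidbody carriageB;
    public float stiffness = 100f;   // spring force constant
    public float damping = 2f;       // how quickly oscillation dies out

    void Start()
    {
        SpringJoint joint = carriageA.gameObject.AddComponent<SpringJoint>();
        joint.connectedBody = carriageB;
        joint.spring = stiffness;
        joint.damper = damping;
        joint.autoConfigureConnectedAnchor = true;
    }
}
```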
Visualizing the spring took some thought. To start, like I usually do, I searched the internet and the Unity Asset Store for any tutorials or working examples of this and found none. So with some creative thinking I programmed my own class of object to visualize a spring. First, I created game object variables to represent the location of each end of the spring. Next, I made a list of Vector3s to hold a defined number of points in space, evenly distributed across the distance between the two end points. The blue cubes in the video below represent those points.
This simple visualization would work well if a spring were a straight line, like an elastic band, but we want our springs to be made of metal and coiled. To take a step closer to making that happen, I added a child object to each of the blue cubes and offset it by a radius variable. This gave me a secondary point in space, still evenly distributed along the distance between the end points, but away from the center, which let me define a three-dimensional tube with a definable radius. But I don’t want a tube! I want a coil! To do that, I run through the list of points and rotate each one by another variable that defines how many points I want in a coil. The more points, the smoother the coil will look, but the more memory we need to draw a single coil. The pink cubes below represent the secondary points that define the radius and the coil of the spring.
With points spread evenly across a distance and coiled around like a spring, I just needed a variable to define how many coils are in the spring. Below are the points working with that variable in place.
One last thing: I used an Animation Curve to add tension to the spring, keeping the coils tighter together at the ends and more spread apart in the middle.
I now have all the ingredients, generated entirely in code, to visualize a spring. The last step was just to connect the points using a Line Renderer. The final result looks like this.
(This spring is drawn using a Line Renderer, but I might also try rigging a cylinder mesh with a bunch of bones and then use the points generated by the spring code to place and move the bones. Will update later if that works.)
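Pulling those pieces together, here is a rough sketch of the whole coil-point generator as described above. The class, field names, and exact math are my own illustration, not the project’s actual code:

```csharp
using UnityEngine;

// Sketch: evenly distributed points between two ends, offset by a radius,
// rotated around the spring axis to form coils, with an AnimationCurve
// pulling the coils tighter near the ends, all fed into a LineRenderer.
[RequireComponent(typeof(LineRenderer))]
public class SpringVisualizer : MonoBehaviour
{
    public Transform endA;             // one end of the spring
    public Transform endB;             // the other end
    public int pointsPerCoil = 16;     // more points = smoother coil, more memory
    public int coilCount = 8;          // how many full turns
    public float radius = 0.05f;       // coil radius in meters
    public AnimationCurve tension = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);

    LineRenderer line;

    void Awake() { line = GetComponent<LineRenderer>(); }

    void Update()
    {
        int total = pointsPerCoil * coilCount + 1;
        var points = new Vector3[total];

        Vector3 axis = (endB.position - endA.position).normalized;
        // Any direction perpendicular to the spring axis works as the radial offset.
        Vector3 radial = Vector3.Cross(axis, Vector3.up);
        if (radial.sqrMagnitude < 1e-4f) radial = Vector3.Cross(axis, Vector3.right);
        radial.Normalize();

        for (int i = 0; i < total; i++)
        {
            float t = i / (float)(total - 1);
            // The tension curve bunches points (and therefore coils) near the ends.
            Vector3 center = Vector3.Lerp(endA.position, endB.position, tension.Evaluate(t));
            // Rotating the radial offset around the axis turns the tube into a coil.
            Vector3 offset = Quaternion.AngleAxis(t * coilCount * 360f, axis) * radial * radius;
            points[i] = center + offset;
        }

        line.positionCount = total;
        line.SetPositions(points);
    }
}
```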
These two systems are good examples of how you never know what smaller pieces you may have to develop to support a larger experience. But now that I have them, I can use them in other projects down the road.
October 13, 2019
Pansy and I met this past week and looked over the work I have done so far. All effort to this point has gone into creating systems and functionality; none has gone into making things look visually appealing yet. While the majority of that type of work belongs in the Polish phase of development, to keep us agile, it’s not too early to start thinking about the visual style we might use in the final application.
Here are two images I found as inspiration that we both agreed might reflect what the final experience could look like.
I also want to get some images from the movie WALL-E into this mix because I think that movie did a nice job of combining mechanical and futuristic interfaces while keeping things fun and interesting!
October 16, 2019
I finished up the programming of the scientific calculator, and after our last meeting, where we talked quite a bit about visual design, I feel it is time to pause and propose a user study on functionality and usability.
There is an elephant in the room with every VR experience. The question is: do users prefer a physical interaction with virtual interfaces or a more familiar digital one? When I say “more familiar” I mean we are used to interacting with digital interfaces through a mouse and a screen. We roll over items and click on them for actions. We can replicate this familiar form of interaction in VR by using the controller as a three-dimensional pointer in 3D space, the same way we use a two-dimensional pointer on a screen: point at items (the roll over) and click the trigger button on the controller for actions.
However, VR is intended to be immersive; it is intended to be spatial. We do not necessarily have to use familiar ways of digital interaction. Instead, we have the opportunity to use familiar ways of spatial interaction. Rather than using a pointer, we can reach out in space and touch buttons for actions, as we would on a laptop keyboard or on a button of an actual calculator.
The biggest difference between real spatial, reach-out-and-touch actions and virtual spatial actions is the lack of haptic feedback. When I reach out and type something on my actual laptop, there is resistance against my fingers as I push down on a key. I have a physical reaction, and satisfaction, from that resistance. In contrast, when I reach out and touch a button in virtual space, I might see an animation play indicating the button is pressed, but there is no physical feedback in my fingers or hand.
The questions I would like to answer deal with what type of interaction with virtual interfaces users prefer. Is it reach out and touch, even though there is no haptic feedback? Or is the more familiar point-and-click method preferred? And, as VR becomes more familiar, will this preference change over time?
The intersection we have reached between the development of the scientific calculator and the discussion of visual design going forward makes this a perfect candidate for this type of user study. The questions are important, not just for this project but for any project going forward, and could really be the subject of a research paper. I’ve been around several subjects of VR research that are far less essential than this topic, so I think there is opportunity here.
What I propose for the user study is this:
I will create 4 versions of the VR scientific calculator that reside in the same virtual space. Two will be visually represented digitally and two will be represented physically.
The first of the digital calculators will be vertically aligned and will appear to the user as a hologram in the room. The second will be on a screen, anchored as part of a console, angled at roughly 45 degrees. (I think a flat horizontal alignment would be too difficult for users to manage because they would hit unintended buttons, especially with the reach-out-and-touch interaction method.)
The first of the physical calculators will sit on a table, or stand, with obvious handles on either end indicating it can be picked up. The second will be part of a built-in console anchored to the floor.
We will then test all four versions of the calculator using both the reach-out-and-touch method of interaction and the more familiar point-and-click method. This essentially gives users 8 options for the type of virtual interface they prefer: four different visual styles, each with two different interaction types.
Now I just have to check with Pansy to see if she is OK with this. I should mention that although this may seem like a side step in the development process, it is not. Bouncing back and forth between Prototyping and User Testing is, and should always be, part of the virtual reality experience development process.
As I mentioned, these questions about preferred virtual interface interaction are always an elephant in the room. The calculator is the best example of a way to answer those questions that I have come across, aside from maybe a virtual keyboard, but we don’t normally pick that up. The calculator, be it physical or digital, is a familiar item, with a familiar interface. Could it be the key to discovering how we prefer to interact with virtual interfaces?
October 23, 2019
Pansy and I met today and talked at length about the possibility of doing user testing on an isolated portion of this experience with a calculator in VR. I was really for it because it touches on a broad question, not specific to this project. But after talking it out, we decided to not do it. I can totally understand why. It felt like too much of a sidebar at this point. Instead we will proceed with development and do user testing when more of the specific interactions for this application are available.
November 4, 2019
Exciting news! I got to work for an entire week last week!!! I only had one meeting to slow me down. This was the first time I got to work uninterrupted, for an entire week, for at least 4 months! I actually felt like a developer again. 😀
The results of that entire week of working are shown in the video above. Finally, I have a working prototype of a large chunk of the Rectilinear Experiment. Using a handheld UI, users can select parts from a Parts Library. While the user is holding a part, a “ghost part” appears in a location where that part can be placed. The user can place the parts and set up the experiment. Once the machine is assembled, the user can grab the first carriage and move it, only a few centimeters, and the live graph is drawn to the screen above the workbench. Points on the graph can be touched, revealing data about the selected point. The calculator, which is now fully functional, has found a home as a tab on the handheld UI.
Still much to do. But, for the first time in a long time, I am actually happy with what I got done this past week.
November 18, 2019
Not a ton of visual differences between this update and the last. However, there have been major coding updates! For a while now, I have been meaning to rework my VRBasics base code. It’s 3 or 4 years old now, so it is overdue. I typically make little changes here and there with each project, but this past weekend I decided to do a major overhaul. The needs of this project let me know that it was time, for a couple of reasons.
Reason one: This is a physics heavy project.
Working with the physics engine is always a challenge, especially when you want very predictable and controllable results. The challenge gets even bigger in VR because the controllers (or hands) are 3D objects in space, but they do not follow the rules of physics in the simulated world. In the real world, when we reach out and touch something, we are met with resistance. We cannot just pass our hand through the object. We hit an object with our hand and it either moves or it stops our hand. But how do we replicate this in VR? At first it seems simple: we put a rigidbody and a collider where our hand is in space, and when it collides with objects in the scene that also have rigidbodies and colliders, those objects will be moved. Oh, if only it were that simple…
Obviously this doesn’t work for large objects. We can’t have our users knocking over tables, or pushing cars, with a swipe of their hand, but that’s easy to eliminate using some collision selectivity. The bigger problem is that even small objects, like a mug of coffee, don’t provide any resistance when we touch them in VR. Additionally, the controller is not bound by the rules of the physics simulation, at least not in any kind of realistic fashion. Our hand can pass through objects, create unlimited force on objects, and push objects straight through other objects, all because the velocity and position of our hand are updated with no regard to the rules of the physics simulation. So how do we fix this?
We have to make our hand (controller) a more realistic part of the physics simulation. This was where the first major overhaul of VRBasics happened.
In the prior version of VRBasics, I had done it the simple way: a rigidbody and collider on the controller meets the rigidbody and collider of an object in the world, and the object gets pushed (right through other objects). At first, you would think this would not happen. How can one collider get pushed right through another? Shouldn’t the physics simulation prevent this from happening? Normally it would. However, since the position of our controller is not following the rules, it can go anywhere without resistance. The fix comes in setting the collider on the controller to what Unity calls a Trigger collider. A Trigger can detect a collision without affecting the physics simulation. (I had previously used this for touching objects but never for pushing.)
So how do we get a Trigger collider to affect the physics simulation? I just said that it doesn’t. The answer came from a bit of code I had been using to throw objects. A throw happens by capturing the velocity (which is also a direction) of the controller at the moment a grabbed object is released. When the object is released, it automatically goes back to following the rules of physics, now with some velocity applied from the controller. A thrown object will not pass through other objects because it is following the rules.
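The throw idea boils down to something like this tiny sketch; the controller velocity is assumed to come from whatever the VR SDK provides (OVR, SteamVR, etc.), and the names are illustrative:

```csharp
using UnityEngine;

// Sketch: on release, the grabbed object rejoins the physics simulation
// carrying the controller's velocity, so the throw obeys physics from then on.
public class ThrowOnRelease : MonoBehaviour
{
    public void Release(Rigidbody grabbed, Vector3 controllerVelocity, Vector3 controllerAngularVelocity)
    {
        grabbed.isKinematic = false;                      // obey physics rules again
        grabbed.velocity = controllerVelocity;            // inherit the hand's motion
        grabbed.angularVelocity = controllerAngularVelocity;
    }
}
```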
I used that same concept to push objects. Now, when the Trigger collider of a controller touches another object (one that can be pushed, controlled by a variable), it applies some amount of force to that object. The amount and direction of that force come from the velocity of the controller, which eliminates the position variable that, again, does not follow the rules of physics. Although the velocity of a controller does not technically follow the rules of physics in VR, it does follow the rules of physics in the real world. Meaning, we are unable to move our actual hand at light speed, so there is a real-world limitation on our physical being that gets translated into the VR physics simulation.
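A hedged sketch of what that push might look like. I’m estimating the controller velocity from position deltas here, but in practice it would come from the VR SDK, and the class and field names are my own:

```csharp
using UnityEngine;

// Sketch: the controller's collider is a Trigger, so it never shoves rigidbodies
// directly; instead it applies a force derived from the controller's velocity.
[RequireComponent(typeof(Collider))]
public class TriggerPusher : MonoBehaviour
{
    public float pushForceScale = 1f;

    Vector3 lastPosition;
    Vector3 velocity;

    void Start()
    {
        GetComponent<Collider>().isTrigger = true;
        lastPosition = transform.position;
    }

    void FixedUpdate()
    {
        // Fallback velocity estimate from position deltas; an SDK-provided velocity is better.
        velocity = (transform.position - lastPosition) / Time.fixedDeltaTime;
        lastPosition = transform.position;
    }

    void OnTriggerEnter(Collider other)
    {
        Rigidbody rb = other.attachedRigidbody;
        if (rb == null || rb.isKinematic) return;   // only push pushable objects

        // Force direction and magnitude come from the hand's velocity,
        // never from its (rule-breaking) position.
        rb.AddForce(velocity * pushForceScale, ForceMode.VelocityChange);
    }
}
```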
Below is a video of what will happen if you do it the easy way by just putting a rigidbody and a collider on the controller.
For demonstration purposes, the ball is not attached to the controller. It is an object that fully obeys the rules of the physics engine. It follows the movements of the controller by adding force, a physics-friendly method of moving an object, toward the location of the controller. If the controller moves fast and gets farther away from the ball, the velocity of the ball speeds up. The velocity then slows down as the ball approaches where it is meant to be, the location of the controller. What this demo proves is that if enough force is applied to an object, it will pass through another. When the ball is stopped against the side of the carriage and the hand moves away from it, more and more force is applied to the ball and to the wall of the carriage. Eventually the force is too great, the physics simulation breaks, and the carriage is pushed right through the carriage next to it. This demonstration shows one of the big reasons I reworked my base code. Any colliders on my controllers will be marked as Triggers or told to ignore any objects with rigidbodies.
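For context, a minimal sketch of the force-follow behavior shown in that demo; the values and names are illustrative, not the project’s code:

```csharp
using UnityEngine;

// Sketch: a rigidbody ball chases a target (the controller) by having force applied
// toward it, so the ball itself never gets to ignore the physics simulation.
[RequireComponent(typeof(Rigidbody))]
public class ForceFollow : MonoBehaviour
{
    public Transform target;          // the controller the ball should follow
    public float forcePerMeter = 50f; // more distance = more force = more velocity

    Rigidbody rb;

    void Awake() { rb = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        Vector3 toTarget = target.position - rb.position;
        rb.AddForce(toTarget * forcePerMeter, ForceMode.Force);

        // A little damping so the ball settles as it nears the controller.
        rb.velocity *= 0.95f;
    }
}
```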
(I should briefly mention that my method for grabbing objects does follow the rules of physics, because I use a Fixed Joint between the two to establish the grab. That joint can be broken if too much force is applied to it. For example, if you try to pass a grabbed object through a table or a wall, the grab can be terminated by physics. It’s really the method for pushing objects that has been updated to follow the rules; prior to this change, a grabbed object could push other objects through one another.)
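For reference, the grab-with-a-breakable-joint idea looks roughly like this sketch; the names and break threshold are illustrative:

```csharp
using UnityEngine;

// Sketch: a FixedJoint ties the grabbed object to the controller's rigidbody,
// and breakForce lets the simulation sever the grab if the object is forced
// through a table or wall.
public class PhysicsGrabber : MonoBehaviour
{
    public Rigidbody controllerBody;   // kinematic rigidbody tracking the controller
    public float maxGrabForce = 500f;  // force at which the grab gives way

    FixedJoint joint;

    public void Grab(Rigidbody target)
    {
        joint = target.gameObject.AddComponent<FixedJoint>();
        joint.connectedBody = controllerBody;
        joint.breakForce = maxGrabForce;   // Unity destroys the joint past this force
    }

    public void ReleaseGrab()
    {
        if (joint != null) Destroy(joint);
        joint = null;
    }
}
```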
Reason two: This project needs digital UI.
For a long time, I have been largely against creating flat digital UI elements in a 3D virtual world. It always seemed cheap and off to me. However, given the complexity of what the users need to do in this experience, I have surrendered to the fact that it is sometimes easier to use digital interfaces rather than physical objects to accomplish tasks.
With this new commitment to digital UI came new challenges for the old VRBasics code base. At first, it seemed a safe assumption that by simply making a UI button a member of the Touchable class, it could be activated the same way as 3D objects. Not so fast, my friend. The problem with that logic comes from the direction the touch arrives from. We’re working in 3D space here, and Touchables, because of their 3D colliders, can be touched from any direction, since the position of the controller does not follow any rules in the simulated world. I didn’t like that a user could, one, touch a button from behind, and two, pass through a button and then pull back through it, activating the button twice.
I addressed both of these issues by creating a new Button class of object in VRBasics. The Button class is specifically for 2D UI elements, but the concepts could be used for 3D buttons as well. In addition to using the collision methods of a Touchable, Buttons also use a Raycast from the controller to determine whether the controller is actually pointing in the direction of the Button while touching it. This prevents the pull-back-through activation because the pointing has to be correct at the moment the touch starts. You can technically still activate a Button from the back, but it requires careful hand manipulation to get the controller pointed directly at the rear of a button when you press it. If you are so hell-bent on pressing buttons from the reverse direction, then so be it; I currently have no method to stop you. Lol
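A rough sketch of that touch-plus-raycast check; the class and method names are my own stand-ins, not the actual VRBasics API:

```csharp
using UnityEngine;

// Sketch: a touch only registers if, at the moment it begins, a short ray from the
// controller is also hitting this button from the front. This blocks the
// "pull back through" double activation.
public class FrontFacingButton : MonoBehaviour
{
    public float maxPointDistance = 0.2f;   // how far the controller's "point" reaches

    // Called by the touch system when the controller's trigger collider first overlaps the button.
    public bool TryPress(Transform controller)
    {
        Ray ray = new Ray(controller.position, controller.forward);
        if (!Physics.Raycast(ray, out RaycastHit hit, maxPointDistance)) return false;
        if (hit.collider.gameObject != gameObject) return false;

        // The surface normal should face back toward the controller, i.e. a front-face hit.
        bool fromFront = Vector3.Dot(hit.normal, controller.forward) < 0f;
        if (fromFront) Press();
        return fromFront;
    }

    void Press()
    {
        Debug.Log(name + " pressed");
    }
}
```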
I also added some flair to the Button class by raising the buttons off the surface of their sad 2D home. I created a Mover class that allows these 2D UI elements to move in 3D space when touched. (I think this little detail helped me come to terms with providing 2D UI in a 3D environment.)
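The Mover idea is tiny; something along these lines (again, illustrative names, not the real class):

```csharp
using UnityEngine;

// Sketch: a flat UI element lifts off its canvas along the local Z axis while touched,
// then settles back down when the touch ends.
public class UIMover : MonoBehaviour
{
    public float raisedHeight = 0.01f;   // meters off the canvas surface
    public float moveSpeed = 8f;

    Vector3 restPosition;
    bool touched;

    void Awake() { restPosition = transform.localPosition; }

    public void SetTouched(bool isTouched) { touched = isTouched; }

    void Update()
    {
        // Which local axis counts as "up off the canvas" depends on the canvas orientation.
        Vector3 target = restPosition + (touched ? Vector3.back * raisedHeight : Vector3.zero);
        transform.localPosition = Vector3.Lerp(transform.localPosition, target, moveSpeed * Time.deltaTime);
    }
}
```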
December 2, 2019
Reason three: Organization, Optimization, and Extensibility
I spent an enormous amount of time over the Thanksgiving break finishing the reorganizing, reconfiguring, and rebuilding of my base VR world-building code. I have changed it so much at this point that I decided to rename the entire package. VRBasics was the name of the code base I wrote 4 years ago. It served me well during that time. However, it was getting so modified over the last few projects that it was time for a change. This new package is called PuddleJump. I thought long and hard about that name. I decided on PuddleJump because it fits some of my overall philosophy about VR: it’s not a destination but a short trip to somewhere else.
I watched a movie on Netflix called OtherLife that is about getting stuck in a virtual environment and the dangers of not being able to distinguish between the real and the virtual. A good low-budget movie that inspired some thought. That movie was actually another reason I went with the name PuddleJump. It’s important to keep VR experiences brief and in digestible chunks. I know this from watching many, many users and from spending hours in VR myself.
Anyway, the new package has a ton of improvements over the previous one, really too many to mention here, but I will touch on just a few. Everything is organized into namespaces to prevent conflicts with other packages. Also, although the package is still cross-platform compatible, I now have OVR and SteamVR completely separated in the code, so it will be easy to include or exclude one or the other. This was a complaint I had received about VRBasics on the Unity Asset Store. I went with EventListeners for a bunch of things that previously took place in an Update function. This is a huge improvement for connecting classes of objects and will help a lot with organizing code.
I also put an enormous amount of effort into the new Connectable class of object. I started work on this 4 years ago in VRBasics, knowing there would always be a need to dynamically attach and detach objects to and from one another. It sounds simple, right? Just parent objects to one another and then break them apart. Unfortunately, it is much more complicated than that. I had it working, meh, “OK,” but I never quite nailed it. This time I “THINK” I nailed it. The key was making Connectables follow a hierarchical structure. A Matriarchal structure keeps a female Connector at the root of a hierarchy, and the male connectors handle all the connecting and detaching. Think of it like this: the males come and go, but the females keep the family together. Everything is the opposite in a Patriarchal structure: male connectors are the glue, and females do the connecting and detaching. With these two structures, you just have to think about which one fits the type of objects you are using as your Connectables. Are they bowls that stack on top of each other? That would be a Matriarchal structure. Are they Lego blocks that snap together? That would be a Patriarchal structure.
I also included a new feature on the Controller class that lets you toggle whether grabbing a Connectable from the middle of a stack picks up just the objects above it or the entire stack. Again, this is a decision that needs to be made based on the type of objects being connected.
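To make the Matriarchal/Patriarchal idea concrete, here is a very rough sketch in my own illustrative terms. It is not the actual PuddleJump class, just the parenting logic as described above:

```csharp
using UnityEngine;

// Sketch: one "gender" of connector stays at the root of the hierarchy, the other
// does the attaching and detaching. Matriarchal = females are the roots;
// Patriarchal = males are the roots.
public enum ConnectableStructure { Matriarchal, Patriarchal }

public class Connectable : MonoBehaviour
{
    public ConnectableStructure structure = ConnectableStructure.Matriarchal;
    public bool isFemale;   // which kind of connector this object carries

    // Attach this connectable to another; whichever one matches the root gender
    // of the chosen structure becomes (or stays) the parent.
    public void ConnectTo(Connectable other)
    {
        bool otherIsRoot = (structure == ConnectableStructure.Matriarchal)
            ? other.isFemale
            : !other.isFemale;

        if (otherIsRoot)
            transform.SetParent(other.transform, worldPositionStays: true);
        else
            other.transform.SetParent(transform, worldPositionStays: true);
    }

    public void Disconnect()
    {
        transform.SetParent(null, worldPositionStays: true);
    }
}
```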
All this work is both for this particular project and for extending my code base beyond it. It’s kind of crazy when you think about trying to program things to behave as close as possible to reality. Reality is pretty complicated, after all. With this new code base, hopefully, virtual reality just got a little less complicated.
December 20, 2019
This will be the last post for 2019! I continue to think this is the best example of educational VR I have had the pleasure of working on. It checks all the boxes of what good VR should be. It allows the user to go to a place, build a machine, interact with that machine, collect data from that machine, and do calculations with that data. It’s a rock star example of not just using VR to replicate or replace reality purely for the technical achievement of doing so, but instead it uses VR to enhance the teaching and learning of reality. It even improves upon reality in this case because it eliminates some of the mundane tasks and time burdens that reality has while maintaining the core knowledge to be gained.
I exit this year with the video below, which shows the objective system working. This system, along with the now-functioning collection of user data, guides the user linearly through the steps of the experiments. I have also constructed a UI system which hopefully contains all the buckets necessary to show any information needed, not only for this first experiment and its corresponding set of objectives, but for all the experiments and machines to follow in this experience.
Although not as far along as I would have liked to be, it is fun to look back and see what great progress we have made. Obviously there is still much to do. I have so many things in mind, including a custom-designed controller with a way to switch back and forth between a controller, hands, or hands-and-controller look. I also want to do a tutorial mode that will be another notch in this experience’s belt as a great example of a completely self-contained VR experience. I am, of course, itching to work on the Polish phase of this project. I think spending some time making it look prettier will really bring it to life. But I am sticking to the process that I preach: staying in the Prototype phase before jumping to the Polish phase. I think it’s a really smart way to work and doesn’t waste time.
Anyway, happy holidays everyone. Can’t wait to continue working on this in 2020!
January 13, 2020
I have done so much in the past few weeks, including over break, that it’s a little overwhelming to even recap at this point. Note to self: don’t do so much in between blog posts.
During every VR project, I obviously have to spend time developing content-specific systems that solve the immediate needs of the project. I also have to keep my eyes open for opportunities to create global systems. These systems are not always immediately identifiable. Many times I end up roughing in a seemingly one-off solution before recognizing the potential of a more extendable global one. During the last few weeks I developed and implemented several new global systems.
Controller Touch and Display
This new system provides a user with three controller display options. Option one displays a virtual representation of the physical controllers the user is holding in reality. Option two shows hand models on top of the controllers. The hands animate with the corresponding button inputs provided by the user. Option three shows just the hand models, no controllers.
This system also contains pop up boxes for each button on the controller. The boxes can be used to display tips on how to interact with the virtual experience. My thinking is that these boxes, in combination with the display options, will make it much easier to build tutorials for experiences going forward.
ABSwitches
In any interactive game or VR experience, there are lots of things that have a state, need to change to another state, and then must be able to return to their original state. Buttons, doors, things that change color, move, or rotate, etc. can all be handled using this new base class. The switch is controlled using a coroutine, a duration for speed, and an animation curve to control easing. The switches are fully interruptible. When interrupted, the duration automatically adjusts depending on the switch’s remaining distance to its target, to maintain a constant speed.
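A hedged sketch of how such a switch could be put together; the class and field names are mine, not the actual ABSwitch code:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: an object eases between state A and state B with a coroutine, a duration,
// and an AnimationCurve. If interrupted mid-way, the duration is rescaled by the
// remaining distance so the apparent speed stays constant.
public class ABSwitch : MonoBehaviour
{
    public Vector3 positionA;
    public Vector3 positionB;
    public float fullDuration = 0.5f;
    public AnimationCurve easing = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);

    Coroutine running;

    public void SwitchTo(bool stateB)
    {
        if (running != null) StopCoroutine(running);   // fully interruptible
        running = StartCoroutine(MoveTo(stateB ? positionB : positionA));
    }

    IEnumerator MoveTo(Vector3 target)
    {
        Vector3 start = transform.localPosition;

        // Rescale the duration so a half-finished switch moves at the same speed.
        float fullDistance = Vector3.Distance(positionA, positionB);
        float remaining = Vector3.Distance(start, target);
        float duration = fullDistance > 0f ? fullDuration * remaining / fullDistance : fullDuration;

        float t = 0f;
        while (t < 1f)
        {
            t += Time.deltaTime / Mathf.Max(duration, 0.0001f);
            transform.localPosition = Vector3.Lerp(start, target, easing.Evaluate(Mathf.Clamp01(t)));
            yield return null;
        }
        running = null;
    }
}
```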
Canvas Button Group
This is a perfect example of what I mentioned earlier about creating a seemingly one-off system and then realizing a global, more extendable solution would be better. I created the calculator for this experiment by laying out each individual button in the Unity Canvas, carefully spacing them out and creating individual sprites for them. This is obviously very time consuming. When I realized I needed a keyboard, for logging in, I was not looking forward to going through that process again. So instead of working harder, I decided to work smarter and write a script that could lay out a group of buttons for me. The script takes the amount of available space and the number of rows and columns of buttons needed, then uses gutters and margins to figure out how big each button needs to be and lays them out in a grid. I then extended this global system to include display options for each button and even add functionality to them. All elements are created in code, at runtime. No more laying out individual canvas elements by hand. A HUGE time saver.
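The grid math behind it is straightforward; here is a sketch of the layout calculation (the prefab, fields, and anchoring choices are my assumptions, not the actual script):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: given the available rect, a row/column count, margins, and gutters,
// compute each button's size and position and instantiate the grid at runtime.
public class ButtonGrid : MonoBehaviour
{
    public RectTransform container;   // the canvas area to fill
    public Button buttonPrefab;       // a plain UI Button prefab
    public int rows = 4;
    public int columns = 5;
    public float margin = 10f;        // space around the whole grid
    public float gutter = 5f;         // space between buttons

    public void Build()
    {
        Vector2 area = container.rect.size - 2f * margin * Vector2.one;
        float cellWidth  = (area.x - gutter * (columns - 1)) / columns;
        float cellHeight = (area.y - gutter * (rows - 1)) / rows;

        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < columns; c++)
            {
                Button b = Instantiate(buttonPrefab, container);
                var rt = (RectTransform)b.transform;
                rt.sizeDelta = new Vector2(cellWidth, cellHeight);

                // Anchor everything to the top-left corner and lay out in reading order.
                rt.anchorMin = rt.anchorMax = new Vector2(0f, 1f);
                rt.anchoredPosition = new Vector2(
                    margin + c * (cellWidth + gutter) + cellWidth * 0.5f,
                    -(margin + r * (cellHeight + gutter) + cellHeight * 0.5f));
            }
        }
    }
}
```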
One last thing about global systems and then I’ll move on to project-specific updates. I may have mentioned earlier in this lengthy post that I have rewritten so much of my base code, along with adding a bunch of brand new things, that it was time to rename and rebrand it. The new code base is called PuddleJump.
I won’t get too much into this here because it deserves its own post, website, documentation, tutorials, and Asset Store release, all of which I intend to do when I get the chance. Most developers use a base code package called VRTK. In fact, the one developer who has been hired by CIE is learning to use it now. I have never used it. It was released right around the same time as my original base code, VRBasics. Listening to questions about VRTK from the developer who is learning it, I am confident that PuddleJump is as good, if not better. It’s simply not as well documented because I don’t have the cycles.
Ok, back to project specific updates.
It’s funny how things come back around full circle sometimes. Six years ago, I gave a talk called Attraction Classroom. In that talk, and the corresponding blog post, I raised the question of what classrooms, and learning, would be like if they were designed the same way attractions at Disney World are designed. (Please read the post if you are interested in knowing more.) This was just a talk. Never did I think I would have any ability to make this philosophy a reality. But now I get to design immersive experiences, and the Attraction Classroom design philosophy is completely relevant and achievable.
I design all of my VR work by asking: if this experience were a real place, like an attraction at Disney World, how would it be designed? There is a great talk given by Disney Imagineers at GDC that equates the design of Disney attractions to game design. (That video is from 2019; Attraction Classroom was created in 2014, so I didn’t steal the idea.) BTW, game design, attraction design, immersive design, and learning design have a ton in common. I think you just have to be a little open-minded to see that.
In this update’s video I think you can start to see some of the design thinking and concepts beginning to take shape. The idea is to combine all the smaller systems I have been working on into one cohesive and engaging immersive experience.
The experience starts in a large open space. The tutorial (obviously not complete yet) will walk the user through all the control interfaces, using simplified content from the upcoming experience. This way, when more complicated tasks come later on, the user will remember the simpler ones from the beginning. You’ll see some of the new global systems in play right off the bat here, with the controller display options and the tip boxes appearing. The tutorial will end with the successful attachment of wheels to the robot.
The robot will serve several purposes and tie together several loose ends that have been missing from the design concept. The robot will carry the Experiment Guide, which keeps track of the objectives required of the user. It will also move and lead the user through the environment. It will serve as a companion to the user: a companion that assists the user and that the user helps along the way by providing him with additional parts, parts the user will learn about by building smaller machines during the experiments. He also has a door on the front of him. I envision this being a vacuum/recycling system that will suck up any parts the user accidentally drops on the ground and return them to the Parts Library on the handheld menu.
After the tutorial, the door to the hallway on the right opens. The robot will lead the user down the hall. The hallway is designed to be smaller and curved: small to encourage focus and curved to inspire curiosity. “What’s around the bend?” While going down the hallway, the user will collect parts that will be used in the next experiment.
Through the door at the other end of the hall is the lab. It is here where the experiments will take place. The first experiment is the Rectilinear experiment where the user learns about springs.
When the experiment is complete, the large door in front of the lab bench will open, revealing a real world example of something related to the parts involved in the experiment. For example, a car whose shock absorbers use springs. The robot will lead the user through the center chamber, past the real world example, and back into the larger foyer where the experience began.
Here the robot will prompt the user to add a part from the previous experiment to him, adding to his overall functionality and helping his performance. A spring for his wheels, perhaps.
Once the part has been added, the hallway door on the left side of the foyer will open. The robot will again lead the user down the hallway and the experience loop will continue.
It might be a little hard to imagine right now, given the still-early prototype stage of everything, but I think all the major pieces of a great immersive experience are here now. Pansy mentioned keeping in mind that this experience might not be just for college students; younger kids might be able to get something out of it too. I think the robot character adds the missing piece to that puzzle as well.
The environment is designed so that the user can traverse it in a figure 8 loop. The lab will be different each time it is entered, set up with a new experiment, with new parts collected from the hallway. Hopefully this will keep the experience fresh while being mindful of production time and resources.
January 28, 2020
I worked a bunch on the tutorial over the past week or so. See video below:
For some reason RockVR, the plugin I had been using to capture these videos in Unity, stopped working this week. Not sure why. It was even doing things that affected my code, like not allowing coroutines to finish. Oh well, I used OBS to capture this video. I had to add a render camera to the application to get the proper FOV, but I think it worked pretty well.
I was thinking this week about how layered this application is and came up with a list of things that make this project a quality example of what VR can be. These are just a few:
- Tutorial
- Progression of difficulty
- Strong sense of immersion
- Quality interactions
- Real world application
- Data collection
- Provides users with a companion (the Robot)
- Sense of Purpose or Role (help the Robot while learning about mechanical parts)
Experiences like this are extremely time consuming to produce because so much content is linear in nature and exists outside of a traditional game loop. It requires a lot of content to support the narrative. Content that is additional to the global systems in place to deliver it. The more content, the longer the narrative, the more diversity in the interactions, the more time to produce.
Since I have recently been exposed to, and sucked into, the research world, I have begun to think more about research and how it relates to the work I am doing. A question I think is worth asking: as more and more things exist outside of a traditional, repeatable game loop, and interactions become more diverse and complex, does the cognitive load on the user become too high? It would be an interesting research topic to see when the cognitive load reaches a threshold that starts to affect the digestion of the learning objectives. I think one way to test this would be to have a stripped-down, short version of this application. Pansy and I talked about this briefly in a different context: the possibility of making a version without the math for younger kids. But a short version, where you just get right to the heart of the experiments, might be worth considering as well.
Anyway, good progress happening. We are close to a completed loop for how the overall experience will eventually function now. Still need to get a real world example in the center room. And I am still waiting on getting a plug-in that allows me to display Math equations. :/ Might have to just buy it myself. 🙁
February 2, 2020
I met with Pansy on Wednesday for over 2 hours. We talked a lot about time frames and what we need to try to accomplish for 1) our demo at the TLT Symposium, 2) student user testing before the end of the semester, and 3) any grant submissions.
Before this meeting I was feeling like we were in a pretty good place. However, a new idea came out of our discussion this week. This new idea is a good one, but one I was not anticipating, and one that is going to take some time to accomplish. The idea is that after a user generates graph data by participating in the small machine experiments, that data can then be used to drive the behavior of the real world example. It’s a neat idea: seeing how experiments done with small machines can be transferred to real world examples of larger machines. I had originally thought that the real world examples would move, but only in an observational fashion, using some predetermined data set. This new idea will not only make things more interactive but will also tie them closely to the dynamic data being generated by the experiments.
I have been thinking about how to accomplish this for the past few days, not only how it will work technically, but also how the user experience will work with it. I will be adding a new tab to the handheld menu for graph data. After a series of experiments is complete and the graph data has been generated, this will give the user an interface to view that recorded data, which they can take with them to the real world example area of the experience. It will also give the user an interface to play back the data and see it affect the real world example.
To accomplish this, I need to both move the data from one display format to another and allow it to move in 3D space itself, because the data will be attached to the handheld menu. Unfortunately, the way graph data is currently rendered does not allow me to do that, so I had to come up with a new method. To be honest, I have never been in love with the Line Renderer method in the first place. It’s computationally heavy, and the rendering is wonky because it continually turns the resulting mesh of the rendered line to face the camera, causing a fair amount of jitter.
I have been exploring a different way of rendering lines that uses some lower-level programming to rasterize the line pixel by pixel. I have never done this before, but the initial tests are very promising. This method will produce lines that are computationally much less expensive, can move freely in space with a parent object, and eliminate any jitter from the render.
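As a rough illustration of the idea, one way to do this is to step along each graph segment and write pixels directly into a Texture2D shown on a quad or RawImage. This sketch is just one possible approach under that assumption, not the final implementation:

```csharp
using UnityEngine;

// Sketch: rasterize graph segments into a texture. The texture rides along with its
// parent object (e.g. the handheld menu) and never needs to turn toward the camera,
// so there is no billboard jitter.
public class GraphTexture : MonoBehaviour
{
    public int width = 512;
    public int height = 256;

    Texture2D tex;

    void Awake()
    {
        tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
        GetComponent<Renderer>().material.mainTexture = tex;
    }

    // Draw a straight segment between two pixel coordinates.
    public void DrawSegment(Vector2Int a, Vector2Int b, Color color)
    {
        int steps = Mathf.Max(Mathf.Abs(b.x - a.x), Mathf.Abs(b.y - a.y), 1);
        for (int i = 0; i <= steps; i++)
        {
            float t = i / (float)steps;
            int x = Mathf.RoundToInt(Mathf.Lerp(a.x, b.x, t));
            int y = Mathf.RoundToInt(Mathf.Lerp(a.y, b.y, t));
            if (x >= 0 && x < width && y >= 0 && y < height)
                tex.SetPixel(x, y, color);
        }
        tex.Apply();   // upload the modified pixels to the GPU
    }
}
```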
So it looks like I have the first part of this brand new system figured out. I now have to build the interface to interact with the data and create the real world example that responds to that interaction. The real world example for the Rectilinear experiments will be shock absorbers from a car.
I am hoping this new system doesn’t take too much time to develop, because as I mentioned, it is something I hadn’t counted on. But I think it is a great idea and hopefully it will add to the already robust number of compelling interactions we have built into this experience.
February 20, 2020
March 9, 2020
About ready for the Symposium Demo session!
(To be continued…)