In our continued work with the College of Health and Human Sciences, I’ve been helping recreate and update a program used in a past joint study with the University of Maryland. The VR Buffet project examined how people chose food portions in a realistic virtual buffet and whether those choices mirrored real-world behavior. The original version captured the buffet foods in high quality using photogrammetry and was built in the Unreal engine. The challenges came from wanting to do further study and expansion with this environment after the fact. The new study would be conducted in a different, more limited space, requiring some adjustments to the setup. We also wanted to use a new VR headset in this buffet: the HP Reverb G2 Omnicept Edition, which includes Tobii eye tracking and heart rate monitoring, and to capture the data from those sensors as part of the extended study. The existing Unreal build wasn’t compatible, and adjusting it and then making all of these changes would have taken more effort, so CIE offered to recreate the experience in Unity with some new features.
Maryland provided all of the assets created for the previous iteration of the buffet. The original version simply dropped the player into the center of the buffet. They could physically walk around (no teleporting or stick movement), grab food items, adjust their portion size while held, and place them onto a plate. Drinks and soups were served by placing a glass or bowl into the respective filling zone and watching it slowly fill. Once the user finished making their plate and brought it to the end counter, the researcher would press a button to log the food choices.
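To give a rough sense of the resize-while-held mechanic in Unity terms, here’s a minimal sketch. The component, field names, and stick input are illustrative stand-ins only; the original was built in Unreal, and the recreation handles grabbing through PuppetJump rather than a component like this.

```csharp
using UnityEngine;

// Illustrative sketch only (hypothetical names): scales a held food item with a
// thumbstick axis, clamped to a sensible portion range.
public class PortionScaler : MonoBehaviour
{
    [SerializeField] private float minScale = 0.5f;   // smallest portion
    [SerializeField] private float maxScale = 2.0f;   // largest portion
    [SerializeField] private float scaleSpeed = 0.5f; // scale change per second at full stick deflection

    private Vector3 baseScale;
    public bool IsHeld { get; set; }                  // set by the grab system while the item is in hand

    private void Awake() => baseScale = transform.localScale;

    // Called each frame with the thumbstick's vertical axis (-1..1) while the item is held.
    public void AdjustPortion(float stickY)
    {
        if (!IsHeld) return;
        float current = transform.localScale.x / baseScale.x;
        float next = Mathf.Clamp(current + stickY * scaleSpeed * Time.deltaTime, minScale, maxScale);
        transform.localScale = baseScale * next;
    }
}
```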
Recreating this setup in Unity with the provided model and texture assets was simple enough, especially since we used the project to test the new PuppetJump SDK developed by our development team lead, Zac Zidik. PuppetJump builds on Unity’s OpenXR support to make VR development more streamlined and portable, so the same project can easily run on many different VR setups. This was important because one of the key goals of the update was compatibility with the Reverb G2. I’ll write a separate post about working with PuppetJump in the future; in short, it greatly sped up recreating the buffet and its base functionality, giving us more time to extend the program’s features.
First, in the main buffet itself, a button was added so the user can check out their food themselves, rather than the researcher having to hit a button to record the data. The setup was also adapted to a smaller testing space through some togglable control-stick movement options, which are controlled by the researchers outside of the headset so the user can’t activate them accidentally. In this movement mode, the buffet itself can also be raised and lowered to match the given user’s height. Once a good starting point is established and the buffet is at the right height, the researchers can press another button to log the positional data, saving that starting point for future sessions in their testing space. The original buffet was built for a very large test area, while this updated version had a tighter space, so having finer control over the starting point was important. The buffet itself was also brought in closer to account for the limited space.
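As a minimal sketch of what the “log the positional data” button could look like, here’s one way to persist the calibrated buffet height and rig starting pose with Unity’s PlayerPrefs. The object names and keys are hypothetical, and the project’s actual save method may differ.

```csharp
using UnityEngine;

// Sketch (hypothetical names): saves and restores the calibrated starting point
// so the same setup can be reused in later sessions in the testing space.
public class StartPointSaver : MonoBehaviour
{
    [SerializeField] private Transform rig;     // the player rig's starting point
    [SerializeField] private Transform buffet;  // the raisable/lowerable buffet root

    public void SaveStartPoint()
    {
        PlayerPrefs.SetFloat("buffetHeight", buffet.position.y);
        PlayerPrefs.SetFloat("rigX", rig.position.x);
        PlayerPrefs.SetFloat("rigZ", rig.position.z);
        PlayerPrefs.SetFloat("rigYaw", rig.eulerAngles.y);
        PlayerPrefs.Save();
    }

    public void RestoreStartPoint()
    {
        if (!PlayerPrefs.HasKey("buffetHeight")) return;
        buffet.position = new Vector3(buffet.position.x, PlayerPrefs.GetFloat("buffetHeight"), buffet.position.z);
        rig.position = new Vector3(PlayerPrefs.GetFloat("rigX"), rig.position.y, PlayerPrefs.GetFloat("rigZ"));
        rig.rotation = Quaternion.Euler(0f, PlayerPrefs.GetFloat("rigYaw"), 0f);
    }
}
```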
Part of expanding this project involved adding new scenes to serve other functions. One is a tutorial segment in which new participants are guided through the program’s basic controls. In particular, we wanted a more neutral environment free of the high-quality food objects so as not to detract from their presence later on. In this space the user can freely explore the movement and serving options and get over any uncertainty if they are new to VR before continuing to the main research area. The tutorial covers every interaction the user will experience in the buffet: grabbing “foods” and adjusting their size; using the plates, bowls, and glasses properly; and pressing the button to mark the end of the task. The tutorial was also made togglable in the program’s start menu for users who have been through it before.
The other major goal for this updated program was the data-gathering capability of the Reverb G2. The plan for the coming research is to combine data from the program itself, the Reverb G2 Omnicept sensors, and a bio-vest with additional physiological sensors. To sync the data streams from the vest and the Omnicept sensors, a calibration scene was added to the program. Here the user is given tasks on an in-scene screen, with guidance from the researchers outside the headset when needed. First the participant picks up a cube to confirm they understand the grabbing process and to get into the right position. From there, they are asked to look at cubes that spawn to their left and right; this twisting motion is meant to help sync our data streams. The user is then prompted to take a seat, at which point the researchers guide them to a chair and press a button on their end to proceed. A slideshow of random food and non-food items then plays on the screen so the user’s pupil dilation and heart rate can be monitored for spikes in interest. Once this is done, the user stands up, looks at the side cubes again, and is brought to the full buffet. Like the tutorial, this calibration portion was made togglable, though mostly for testing purposes.
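To illustrate how a step sequence like this can be kept readable in Unity, here’s a rough coroutine-based sketch. The step flags, object names, and scene name are assumptions rather than the project’s actual code.

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch (hypothetical names) of the calibration flow: each step waits for
// either the participant's action or the researcher's advance button before moving on.
public class CalibrationSequence : MonoBehaviour
{
    public bool cubeGrabbed;        // set true once the grab-confirmation cube is picked up
    public bool researcherAdvance;  // set true by the researcher's advance button
    [SerializeField] private GameObject leftCube, rightCube, slideshowScreen;

    private IEnumerator Start()
    {
        // 1. Confirm grabbing works and the participant is in position.
        yield return new WaitUntil(() => cubeGrabbed);

        // 2. Look at cubes spawned to the left and right to create sync motion in the data.
        leftCube.SetActive(true);
        rightCube.SetActive(true);
        yield return WaitForResearcher();

        // 3. Seated slideshow of food / non-food images.
        slideshowScreen.SetActive(true);
        yield return WaitForResearcher();
        slideshowScreen.SetActive(false);

        // 4. Stand, repeat the look task, then continue to the buffet scene.
        yield return WaitForResearcher();
        UnityEngine.SceneManagement.SceneManager.LoadScene("Buffet"); // scene name is a placeholder
    }

    private IEnumerator WaitForResearcher()
    {
        researcherAdvance = false;
        yield return new WaitUntil(() => researcherAdvance);
    }
}
```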
On the topic of data, the Reverb G2 Omnicept sensors were surprisingly easy to implement. HP released a full Unity plugin package for them, including examples and thorough documentation. The plugin’s input side is very simple: an object called the HP Glia exposes a set of event listeners for the headset data. You can tie anything to these events, which emit the data in sets (e.g. heart rate or eye tracking), and use the information from there. The only part that took some figuring out was that to get the Tobii eye tracking working, a calibration program has to be run for the user beforehand. Once that is done, the eye-tracking event fires roughly every frame, making it a good hook for a data-gathering script to record the eye data along with the most recent heart rate reading. The catch is that the volume of data is intense when you’re recording every frame at nearly 90 FPS. Stored naively, the data floods the system within a few seconds and causes major slowdown. To alleviate this, I modified the data gatherer from the IVAN program to maintain two “Physio Data” strings. Data streams into one of the strings, and after a set interval that string is saved to file and cleared. While that happens, the other string becomes the target of the data stream until the interval elapses again, so nothing is lost. By bouncing between these two buffers we avoided any slowdown in the scene while recording these massive strings of text. The text is laid out in CSV format so it can later be brought into a more manageable data analysis tool. Aside from the sensor data, we also recorded the user’s position and rotation, with the aim of making a heatmap or recreating the path taken during the experience.
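Here’s a sketch of that double-buffered approach. The names are hypothetical, and the real HP Glia event listeners pass typed messages from HP’s package rather than the plain floats used here; this just shows the buffer-swapping idea.

```csharp
using System.IO;
using UnityEngine;

// Sketch of the double-buffered "Physio Data" logger (hypothetical names). The
// HP Glia object's eye-tracking listener would call Record(...) roughly every
// frame, passing along the most recent heart rate reading as well.
public class PhysioDataLogger : MonoBehaviour
{
    [SerializeField] private float flushInterval = 5f; // seconds between file writes
    private readonly System.Text.StringBuilder[] buffers =
        { new System.Text.StringBuilder(), new System.Text.StringBuilder() };
    private int active;   // index of the buffer currently receiving data
    private float timer;
    private string path;

    private void Awake()
    {
        path = Path.Combine(Application.persistentDataPath, "physio_data.csv");
        File.WriteAllText(path, "time,gazeX,gazeY,pupilDiameter,heartRate,posX,posY,posZ,yaw\n");
    }

    // Wired to the eye-tracking event; lastHeartRate holds the most recent HR value received.
    public void Record(float gazeX, float gazeY, float pupil, float lastHeartRate)
    {
        Transform head = Camera.main.transform;
        buffers[active].AppendLine(string.Join(",",
            Time.time, gazeX, gazeY, pupil, lastHeartRate,
            head.position.x, head.position.y, head.position.z, head.eulerAngles.y));
    }

    private void Update()
    {
        timer += Time.deltaTime;
        if (timer < flushInterval) return;
        timer = 0f;

        // Swap buffers so incoming data keeps streaming while the idle buffer is written out and cleared.
        int toFlush = active;
        active = 1 - active;
        File.AppendAllText(path, buffers[toFlush].ToString());
        buffers[toFlush].Clear();
    }
}
```

If even the periodic append caused a noticeable hitch, the write at flush time could also be pushed onto a background thread; the swap itself is what keeps the per-frame recording cheap.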
This project was meant to be a quick recreation, but thanks to the ease of working with PuppetJump and the straightforward sensor integration, we were able to expand it with many more features, both for convenience and for additional research data. I’m very interested in what patterns we may find in all the data we’ll be getting. I don’t have much experience exploring such dense data, though, so that part will be handled by our research associates. If this goes well, there may be merit in continuing to look at this kind of data in future projects for further insights.