
XR Explorations

By Alex Fatemi

Developing 3D Areas from Point Cloud Data

We recently experimented with digitizing a small part of Penn State’s campus using point cloud data. By taking a large set of photographs of the area and running them through a reconstruction program, we generated a fairly large point cloud of one of the greenhouses on campus. The point cloud was then handed to me as a blocky 3D mesh, and I attempted to build a workable VR environment from it. The process surfaced a number of issues, however, which we need to keep in mind for future attempts.

To start, here’s one of the images of the greenhouse from the data gathering.

[Image: the real greenhouse]

A few big issues came up from the choice of location alone. First, the point cloud really did NOT like recreating the semi-transparent tarp the greenhouse is wrapped in; the chosen area may also simply have been too large. The resulting point cloud had some clear features and captured the ground well enough, but the top portion was almost entirely missing. The bigger issue, however, was that we couldn’t get the point cloud’s color values to transfer into Maya correctly, so I ended up working from an incomplete grey blob of a mesh. The point cloud was also absolutely massive in Maya, which would have been a problem if not for the high-end computer I use for CIE work.

[Image: how the point cloud imported into Maya]

It wasn’t all bad, however. There was enough visible structure to get the most important thing down: the shape of the area. Luckily, despite its size, the greenhouse is structurally a simple pattern of metal frames lined up with the tarp on top. Within the point cloud, even without color, you can clearly make out where the metal frames jut out from the rest of the data, which let me create the greenhouse’s “skeleton” fairly easily. From there it was a simple matter to put the tarp on top and roughly match the ground shape. This is the general benefit of using the point cloud: everything comes out at least proportionally right. The scale was a bit off, but as long as I had one point of reference with a measurable value (e.g. a doorway), I could scale everything else up to match it.

[Image: work-in-progress greenhouse model]

After this, I refined the ground to match the point cloud well enough. Since the plants couldn’t be distinguished from the ground in the point cloud, I just followed a general shape with some grooves. I also added more structural details from the greenhouse, like the black cloth on the sides, and some props (e.g. the boxes in the back and the hoses running along the length). This was also a chance to try some simple plant modeling with the spinach present in the greenhouse: I created two leaf patterns and arranged them in two different setups. Once in Unity, I made prefabs of the plants, spread them along the rows in somewhat random patterns, and gave them a script that randomizes each plant’s scale and rotation slightly to create more variation. At this point, most of what I was doing was based on the real picture references we had rather than on the point cloud data. After all the material application and cleanup, the final result turned out pretty well: the overall proportions are very close to the original, though some aspects clearly didn’t transfer over cleanly.

[Image: the finished greenhouse in Unity]
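For reference, the per-plant randomization script mentioned above is roughly this simple (a minimal sketch; the component and parameter names are illustrative, not the exact script used):

using UnityEngine;

// Minimal sketch of a per-plant variation script (names are illustrative).
// Attach to each plant prefab; on Awake it nudges scale and yaw so that
// instances of the same prefab don't look identical.
public class PlantVariation : MonoBehaviour
{
    [SerializeField] private float minScale = 0.85f;
    [SerializeField] private float maxScale = 1.15f;

    private void Awake()
    {
        // Uniform random scale within the configured range.
        float s = Random.Range(minScale, maxScale);
        transform.localScale *= s;

        // Random rotation around the up axis so leaf patterns don't visibly repeat.
        transform.Rotate(0f, Random.Range(0f, 360f), 0f, Space.Self);
    }
}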

Overall, this process has some promise, but I think we need to pick our subjects a bit better. A higher-quality point cloud would certainly provide better results, which means picking a smaller area that can be captured in finer detail. Getting the colors into Maya would also help immensely, since that would let me differentiate parts of the cloud more easily when they aren’t obvious, like the ground versus the plants. The big takeaway is that point cloud data is really only good for expediting the initial setup of the modeling: it helps get the proportional placement of things quickly. Once those objects are placed, everything else was better handled by working directly from the reference photos.

Character Creation Pipeline 2022

We’ve started work on a variety of other projects that require a few new approaches. The first one I’ll discuss here is our updated pipeline using Character Creator 3 and a modern URP version of Unity.

Character creation within CC3 hasn’t changed much since my post back in 2020 on the subject. We are using some new bases and outfits, but the overall creation process is the same, including clothing reduction and converting to a game-ready base model within CC3 once the character is built up. What starts to change is the export. A lot of the settings are still the same: a Clothed FBX export set “To Unity 3D”, with or without LoD depending on the use case. It turns out we no longer have to include the T-Pose every time, as that seems to be set up properly regardless.

The real update is within Unity. Over the last couple of years, Reallusion dropped their official pipeline support for newer versions of Unity. Instead, they released the plugin source code, and members of the Reallusion forum took it upon themselves to develop a solution for the various Unity pipelines, including the Universal Render Pipeline we at CIE are focusing on. Luckily, the solution is packaged as a convenient Git package, which we have become well versed in integrating into our projects. This new solution also comes with some unique options that we’ll be looking into in the future, but for now the main factor is the automatic Prefab setup, which is similar to the old system. Unlike that system, this one easily supports multiple character imports, organizing them into a list for easy switching and for applying the construction process. It also sets up all the materials for proper URP rendering.

From there, cleanup on the models is fairly simple. First, a quick texture fix is needed for these stylized models: for some reason, the occlusion map on the eye comes in at 100%, darkening the area around the eyeball to the point that it turns black. This can work for subtle eye movements, where it creates a nice shading effect, but with larger eye movements the forced shadow becomes obviously off. Turning it down or off is required, for these characters at least.

Next is a new setup for the character Animator to make sharing animations easier: the main body is given a “CharAnim” controller that contains the Humanoid rig animation data we’ve gone over before. This lets every character use the same humanoid animations made by the Center or pulled from Mixamo. Back when I first talked about this character pipeline, I mentioned issues with Blendshape animations not being shareable between characters. I didn’t realize it at the time, but the solution was quite simple: use a second Animator entirely for Blendshapes. Every CC3 character is structured the same way, with a unique name on the top level but shared names for the pieces beneath, particularly “CC_Base_Body”. This is the skin of the character and where the Blendshape data is actually controlled. The reason I couldn’t share the animation data before was that I was trying to do it from the same top-level Animator, so every character had a unique path to its Blendshapes (e.g. “Neutral_Female_Base/CC_Base_Body” vs. “Neutral_Male_Base/CC_Base_Body”). By creating a second Animator on the CC_Base_Body object itself, the path becomes uniform, meaning every CC3 character can share Blendshape animations as long as those Blendshapes are named the same way (which they are by default). So we can place things like blinking and mood shifting into this Animator and connect it to the Dialogue System I previously developed to work with these characters.
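Because the “CC_Base_Body” child name is consistent across characters, a small setup helper can grab that child’s Animator and assign a shared Blendshape controller regardless of the character’s top-level name. Here’s a minimal sketch of that idea (the component and field names are placeholders, not our actual scripts):

using UnityEngine;

// Sketch: assign a shared blendshape controller to an Animator that lives on
// the character's CC_Base_Body child. Because every CC3 export has a child
// with this exact name, the animation paths inside the shared controller stay
// uniform across characters. The controller asset and names are placeholders.
public class BlendshapeAnimatorSetup : MonoBehaviour
{
    [SerializeField] private RuntimeAnimatorController sharedBlendshapeController;

    private void Awake()
    {
        Transform body = transform.Find("CC_Base_Body");
        if (body == null) return;

        // Add (or reuse) an Animator on the body mesh itself, separate from the
        // top-level humanoid Animator, and give it the shared controller.
        Animator bodyAnimator = body.GetComponent<Animator>();
        if (bodyAnimator == null) bodyAnimator = body.gameObject.AddComponent<Animator>();
        bodyAnimator.runtimeAnimatorController = sharedBlendshapeController;
    }
}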

Finally, we apply the eye tracker setup I used previously. Take two Empty Objects placed at the positions of the eye joints and pull them forward. Create a new Empty Object between those two eye objects, parented under the facial joint so it stays connected when not tracking, and place the two eye objects under this tracker. Then apply Look At constraints to the eye joints, pointing them at the Empty Objects we dragged out from their positions. Make sure the eyes face forward in the Look At, and the eyes should now track the Tracker as it moves. Then I put the simple “Eye Tracker” script I put together on the tracker object, which draws a gizmo for easy visualization and provides the option to stick the Tracker to a specific object in the scene (e.g. the player camera).
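The “Eye Tracker” helper itself is about as simple as the sketch below (an approximation of the idea, not the exact script): it draws a gizmo at the tracker’s position and can optionally follow a target such as the player camera each frame.

using UnityEngine;

// Rough sketch of the "Eye Tracker" helper described above. The Look At
// constraints on the eye joints point at the per-eye targets parented under
// this tracker, so moving the tracker moves the gaze. Optionally the tracker
// can follow another object (e.g. the player camera) every frame.
public class EyeTracker : MonoBehaviour
{
    [Tooltip("Optional object to stick to, e.g. the player camera.")]
    [SerializeField] private Transform followTarget;

    [SerializeField] private float gizmoRadius = 0.05f;

    private void LateUpdate()
    {
        // If a follow target is set, snap the tracker to it so the eyes
        // continuously look at that object.
        if (followTarget != null)
            transform.position = followTarget.position;
    }

    private void OnDrawGizmos()
    {
        // Draw a small sphere so the tracker is easy to see and grab in the editor.
        Gizmos.color = Color.cyan;
        Gizmos.DrawWireSphere(transform.position, gizmoRadius);
    }
}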

All of this means we now have a much more streamlined pipeline for getting characters into our projects. This work was driven by a project we’re working on that will involve interacting with stylized characters that use the same model but different skin tones, so we needed a good process for iterating on variations of the same character. Below are the WIP characters we developed as we established this updated pipeline. We’ll share more about the project as it gets going, but for now, the system is progressing smoothly.


Sequence Managers

This is basically a follow-up to my earlier post about my Dialogue and Cutscene Management Systems. A lot has changed since that initial post, and they have been refined into a single package that will hopefully serve CIE’s needs in future projects. We’ve begun focusing on developing packages that can be imported into new projects easily to provide certain functions, and this new “Sequence Manager” package is meant to be a simple way to get both sequenced cutscenes and dialogue trees into a project.

Starting with the Cutscene Manager, here’s how it looks in the demo with a simple sequence built in.

[Image: Cutscene Manager component]

Multiple cutscenes can be added, each being a named array of “Actions”. Actions include variables for naming, timing, and Unity Events. I previously had a “move” option tied to this script, but I’ve since realized that should be its own script, simply called through the events. I may reintroduce the “New Actors” list, which allowed referencing objects spawned during the scene, if we find we end up needing it.

The Cutscene Manager is a simple script that takes these action sequences and runs through them: pausing if told to “Wait For Input”, waiting out a “Set Duration”, or moving straight on when set to “Go Next Immediately” (useful when a frame update is needed before proceeding to the next action). It also has Unity Events it can call when the cutscene starts or ends. In the example above, the cutscene is started by a button press in the scene rather than starting on Awake. It then waits for a set time (which we’ll go over in the Dialogue Manager) before presenting the options for continuing the dialogue, and then waits for an input that comes from the option button presses.
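To make the structure concrete, here’s a stripped-down sketch of the idea (not the actual CIE script; the class, field, and enum names here are simplified stand-ins):

using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Events;

// Simplified sketch of the Cutscene Manager idea.
// Each action has a name, a wait mode, and a UnityEvent to fire.
public class SimpleCutsceneRunner : MonoBehaviour
{
    public enum WaitMode { GoNextImmediately, SetDuration, WaitForInput }

    [Serializable]
    public class CutsceneAction
    {
        public string actionName;
        public WaitMode waitMode = WaitMode.GoNextImmediately;
        public float duration = 1f;       // used when waitMode == SetDuration
        public UnityEvent onAction;       // whatever this step should trigger
    }

    [SerializeField] private CutsceneAction[] actions;
    [SerializeField] private UnityEvent onCutsceneStart;
    [SerializeField] private UnityEvent onCutsceneEnd;

    private bool inputReceived;

    // Call this from a button press (or any other trigger) to run the sequence.
    public void Play() => StartCoroutine(Run());

    // Call this from option buttons etc. to release a WaitForInput step.
    public void ContinueInput() => inputReceived = true;

    private IEnumerator Run()
    {
        onCutsceneStart.Invoke();
        foreach (CutsceneAction action in actions)
        {
            action.onAction.Invoke();
            switch (action.waitMode)
            {
                case WaitMode.SetDuration:
                    yield return new WaitForSeconds(action.duration);
                    break;
                case WaitMode.WaitForInput:
                    inputReceived = false;
                    yield return new WaitUntil(() => inputReceived);
                    break;
                default:
                    yield return null; // let a frame pass before the next action
                    break;
            }
        }
        onCutsceneEnd.Invoke();
    }
}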

The Dialogue Manager is a similar, but more complex script.

Like the Cutscene Manager, the core of the Dialogue Manager is a collection of “Dialogue Sets”, which are themselves collections of “Dialogue Nodes”. Each node holds a variety of data that a dialogue script might need. The Dialogue Manager only handles data; it doesn’t display anything. A different script has to take the dialogue nodes and apply them to objects as needed (in this example, the “DemoDialogueReader” hooked into the “Dialogue Changed” event). The manager handles moving between nodes, either by direct continuation or by jumping between nodes as needed, with many points offering options for ending the dialogue. This lets us set up quick and easy dialogue trees for exchanges between the user and the system.

Each Dialogue Node contains information such as the target object that is speaking, the text and audio being delivered, and any events that need to trigger at that moment. There is also a set of variables for handling character animation as needed. The Animation Set contains data points such as options to call Animator.Set____ functions, with timing options kept separate from the usual timing options of the dialogue node; this lets us keep the timing of the animation ending separate from the intended timing of the node ending. Finally, the nodes contain output options. When left empty, the node assumes the dialogue should end at that point. If only one output is set, the dialogue continues straight on to the next node (unless told to wait). When multiple outputs are present, the node automatically assumes it must wait and will only proceed when given a “LoadNode(int/string)” command telling it which output node to go to.
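For reference, the node data boils down to something like the following sketch (field names are illustrative, and the real nodes carry more animation and timing options than shown here):

using System;
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch of the Dialogue Node data described above (not the
// exact CIE classes). The manager only moves between nodes and raises events;
// a reader script displays the text/audio however the project needs.
[Serializable]
public class DialogueNode
{
    public string speakerId;            // which character/object is speaking
    [TextArea] public string text;      // line of dialogue
    public AudioClip voiceLine;         // optional audio for the line
    public UnityEvent onNodeEntered;    // anything that should trigger at this moment

    // Output behaviour:
    //  - empty array  -> dialogue ends at this node
    //  - one entry    -> continue straight to that node (unless told to wait)
    //  - many entries -> wait for a LoadNode(index/name) call to pick the branch
    public int[] outputNodeIndices;
}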

These managers can be a bit daunting to understand at first, but having used them for multiple projects now, I can say they work very well together with custom scene managers to make easy scene flows with various options. I hope we’ll get a lot of use out of them and can improve them further in the future.

Model Fidelity Exploration

As part of our continuing research explorations with the College of Health and Human Sciences, we decided to run an experiment on participant reactions to virtual models of food. The goal was to see whether there is a notable difference in how a participant reacts to a “high fidelity” virtual food versus a “low fidelity” version of the same food. If the difference turns out to be less substantial than assumed, it opens up more development options using lower-fidelity models and unique stylization, without worrying that those choices affect how participants perceive the food. To that end, I set up a pipeline to take high-fidelity photogrammetry models we purchased and reduce their visual quality by removing detail and adding stylization.

The models were purchased through TurboSquid. We went with a model package from Evermotion, as it included a lot of high-quality food objects that clearly used real photographs for the modeling and texturing, giving us a good high-quality standard to start from. We wanted the high- and low-fidelity versions to remain relatively close for the sake of comparison, and it is much easier to remove detail from a model than to add it. The reduction process started with some experimentation in reducing the poly count of the objects and modifying the textures through various methods.

Poly reduction is an automatic process that can easily be applied to all sorts of models. The reduction percentage had to be adjusted per model, since some models are more or less poly dense. For example, the bread model was easily put through a 90% poly reduction, as the original was fairly dense and its shape held up with far fewer polygons. But with some grape models in the package, where each grape was already a lower-poly sphere, the reduction had to be dialed back to around 50% or the grapes started turning into cubes or triangles.

The texture reduction took a bit more processing. The first attempt simply reduced the image quality of the texture. This somewhat works, but the result didn’t read as “less detailed” as strongly as we had hoped. After some experimenting, I found a good process in applying the Cutout filter in Photoshop. The filter settings had to be adjusted case by case for each food (some contained more colors or small details that were lost at lower Cutout settings), but overall, applying the Cutout filter to each food texture worked perfectly to reduce the fidelity of the foods while retaining just enough visual distinction.

As one final step to lower the visual fidelity of the food, a toon shader was applied within Unity. A toon shader is a material shader that takes the lighting in the scene and applies it to the texture in flat levels: rather than being shaded smoothly, the object is shaded in steps based on light thresholds, which creates a “flat” look when set up properly. Toon shaders also include an outline around the object to make it stand out distinctly. For this project, we used the Flat Kit package from Dustyroom, purchased through the Unity Asset Store. Paired with the lower-poly models and cutout textures, the result was a good balance: the quality of the models was clearly reduced while retaining just enough fidelity to tell they are the same food objects. This pipeline gave us a large set of food options for this small study, along with some tools we may use in the future if we want more stylized projects.

VR Buffet Project

In our continued work with the College of Health and Human Sciences, I’ve been helping to recreate and update a program used in a past joint study with the University of Maryland. The VR Buffet project looked at how people picked food portions in a realistic virtual buffet and whether those choices mirrored real-world choices. It was originally built in the Unreal engine, using photogrammetry to capture the buffet foods in high quality. The issue came from wanting to do further study and expansion with this environment after the fact. The new study was to be conducted in a different, more limited space, requiring adjustments to the setup, and we also wanted to use a new VR headset: the HP Reverb G2 Omnicept Edition, which includes Tobii eye tracking and heart rate monitoring that we wanted to capture as part of this extended study. The existing Unreal build wasn’t compatible and would have taken more effort to adjust and then extend, so CIE offered to recreate the experience in Unity with some new features.

Maryland provided all the assets created for the previous iteration of the buffet. The original version simply dropped the player into the center of the buffet. They could physically walk around (no teleporting or stick movement), grab food items, adjust the portion size while holding them, and place them onto a plate. Drinks and soups were acquired by placing a glass or bowl into the respective filling zone and watching it slowly fill. Once the user finished making their plate and brought it to the end counter, the researcher would press a button to log the food choices.

Recreating this setup in Unity using the provided model and texture assets was simple enough, especially as we used this experience to test the new PuppetJump SDK developed by our development team lead, Zac Zidik. This new SDK uses Unity’s OpenXR support to make VR development more streamlined and portable, making it easy to run on many different VR setups. This was important, as one of the key aspects of this update was making it compatible with the Reverb G2. I will be making a separate post about working with PuppetJump in the future. To put it simply, PuppetJump greatly sped up the work of recreating the buffet and its base functionality, giving us more time to extend the program’s features.

First, in the main buffet itself, a button was added so the user can check out their food themselves, rather than having the researcher hit a button to record the data. The setup was also made to work better in a smaller testing space through some toggleable control-stick movement options. These are controlled by the researchers outside of the headset to ensure the user doesn’t accidentally activate them. In this movement mode, the buffet itself can also be raised and lowered to adjust to the user’s height. Once a good starting point is established and the buffet is at a proper height, the researchers can press another button to log the positional data; this saves the location for future use, letting them set a good starting point for their testing space. The original buffet was made for a very large testing area, while this updated version had a tighter space, so being able to set a starting point with more control was important. The buffet itself was also brought in closer to account for the limited space.
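A minimal version of that position-logging step might look like the sketch below (PlayerPrefs is used here purely as an illustrative storage choice; the field and key names are placeholders, not the project’s actual implementation):

using UnityEngine;

// Sketch of saving/restoring a calibrated starting position for the play space.
// PlayerPrefs is used here only as an example storage method.
public class StartPointCalibration : MonoBehaviour
{
    [SerializeField] private Transform playerRig;   // the VR rig root
    [SerializeField] private Transform buffetRoot;  // the buffet, for its adjusted height

    // Hook this to the researcher-facing "log position" button.
    public void SaveStartPoint()
    {
        Vector3 p = playerRig.position;
        PlayerPrefs.SetFloat("start_x", p.x);
        PlayerPrefs.SetFloat("start_y", p.y);
        PlayerPrefs.SetFloat("start_z", p.z);
        PlayerPrefs.SetFloat("buffet_y", buffetRoot.position.y);
        PlayerPrefs.Save();
    }

    // Call on startup to restore the previously saved layout.
    public void LoadStartPoint()
    {
        if (!PlayerPrefs.HasKey("start_x")) return;
        playerRig.position = new Vector3(
            PlayerPrefs.GetFloat("start_x"),
            PlayerPrefs.GetFloat("start_y"),
            PlayerPrefs.GetFloat("start_z"));
        Vector3 b = buffetRoot.position;
        buffetRoot.position = new Vector3(b.x, PlayerPrefs.GetFloat("buffet_y"), b.z);
    }
}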

Part of the expansion of this project involved new scenes serving other functions. One is a tutorial segment where new participants can be guided through the basic controls of the program. In particular, we wanted a more neutral environment free of the high-quality food objects, so as not to detract from their presence later on. In this space, the user can freely explore movement and serving options and get over any uncertainty if they are new to VR before continuing to the main research area. The tutorial includes all aspects of the buffet the user will experience: grabbing “foods” and adjusting their size; using the plates, bowls, and glasses properly; and pressing the button to mark the end of their task. The tutorial was also made toggleable in the start menu for users who have already experienced the program.

The other major thing we wanted from this updated program was the data-gathering capability of the Reverb G2. The plan for the coming research is to include data from the program itself, the Reverb G2 Omnicept sensors, and a bio-vest with additional physiological sensors. To sync up the data streams from the vest and the Omnicept sensors, a calibration scene was added to the program. Here, the user is given tasks on a screen in the scene, along with guidance from the researchers outside the headset when needed. First, the participant picks up a cube to confirm they understand the grabbing process and places it in the right position. From there, they are asked to look at cubes that spawn to their left and right, with this twisting motion meant to help with syncing our data. The user is then prompted to take a seat, at which point the researchers guide them to one before pressing a button on their end to proceed. A slideshow of random food and non-food items then plays on the screen, with the intention of monitoring the user’s pupil dilation and heart rate for spikes in interest. Once this is done, the user stands up and looks at the side cubes again before being brought to the full buffet portion. Like the tutorial, this calibration portion was made toggleable, though this is more for testing purposes.

On the topic of data, the Reverb G2 Omnicept sensors were surprisingly easy to implement. HP released a full Unity plugin package for them, including examples and thorough documentation. The plugin’s input is very simple: an object called the HP Glia exposes a set of event listeners for the headset data. You can tie anything to these events as they send out data in sets (e.g. heart rate or eye tracking) and use the information from there. The only part that took some figuring out was that, to get the Tobii eye tracking working, a calibration program has to be run for the user beforehand. Once that is done, the eye tracking event fires roughly every frame, making it a good hook for a data-gathering script to grab the eye data along with whatever the most recent heart rate reading was. The issue is that the amount of data is intense when you’re recording every frame at nearly 90 FPS: stored naively, it floods the system within a few seconds and causes major slowdown. To alleviate this, I modified the data gatherer from the IVAN program to maintain two “Physio Data” strings. The data streams into one of the strings, and after a set time the gatherer starts saving that string to file and clearing it. While it does this, to prevent data loss, the other string becomes the target of the data stream until the timer elapses again. By bouncing between these two buffers we were able to avoid any slowdown in the scene while recording these massive strings of text. The text is laid out in CSV format so it can later be brought into a more manageable data-reading program. Aside from the sensor data, we also recorded the user’s position and rotation, with the intent of making a heatmap or a recreation of the path taken during the experience.
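The double-buffer idea is roughly as follows. This is a simplified sketch, not the modified IVAN gatherer itself: StringBuilders stand in for the plain strings, and AppendSample is an assumed entry point that would be called from whatever sensor callback hands you an already-formatted CSV row each frame.

using System.Collections;
using System.IO;
using UnityEngine;

// Simplified sketch of the two-buffer "bounce" used to avoid frame hitches
// while logging per-frame sensor data. Not the actual IVAN-derived gatherer.
// AppendSample() is assumed to be called from the eye-tracking/heart-rate
// callbacks with an already-formatted CSV row.
public class PhysioDataLogger : MonoBehaviour
{
    [SerializeField] private float flushInterval = 5f;            // seconds between writes
    [SerializeField] private string fileName = "physio_data.csv"; // placeholder file name

    private readonly System.Text.StringBuilder[] buffers =
        { new System.Text.StringBuilder(), new System.Text.StringBuilder() };
    private int activeBuffer;
    private string filePath;

    private void Start()
    {
        filePath = Path.Combine(Application.persistentDataPath, fileName);
        StartCoroutine(FlushLoop());
    }

    // Called every frame (or per sensor event) with one CSV-formatted row.
    public void AppendSample(string csvRow)
    {
        buffers[activeBuffer].AppendLine(csvRow);
    }

    private IEnumerator FlushLoop()
    {
        while (true)
        {
            yield return new WaitForSeconds(flushInterval);

            // Swap buffers: new samples stream into the other buffer while
            // the filled one is written out and cleared.
            int toWrite = activeBuffer;
            activeBuffer = 1 - activeBuffer;

            File.AppendAllText(filePath, buffers[toWrite].ToString());
            buffers[toWrite].Clear();
        }
    }
}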

This project was meant to be a quick recreation, but thanks to the ease of use of PuppetJump and the sensor integration we were able to expand the project with many more features both for convenience and for additional research data. I’m very interested in what patterns we may find with all this data we’ll be getting. I don’t have much experience with exploring such dense data though, so that part will be handled by our research associates. If this goes well, there may be more merit in continuing to look at such data in future projects for further insights.

Expanded GitLab Information

This is an addendum to my previous post on setting up GitHub with GitLab. We’ve decided to start doing more with GitLab, so I felt I should lay out some basics for those new to the system in general.

[Read more…]

Dialogue and Cutscene Management Systems

This is a larger bit of code I’ve been working on for the past couple of weeks. Initially, it was just meant to be a simple visual dialogue system for use in some current CIE projects. But as things went on and I started thinking of a way to showcase it, I ended up putting together a whole cutscene system with a similar structure. The video below showcases both systems working in tandem to make a single auto-running cutscene.

[Read more…]

Further Avatar Development

Showcased below are some new avatars I’ve developed for CIE projects. We’re currently looking into the strengths and weaknesses of realistic versus stylized avatars.

[Image: floating avatars]

[Image: standing avatars]

[Read more…]

Character Creator 3 to Unity

I’ve been taking some time to go back to Character Creator 3 and really learn what it has to offer. To this end, I’m working through a lot of tutorials, both on the official site and from other freelancers on YouTube. I started from the end of the pipeline, though, for the sake of some project deliverables we needed ASAP. A project required two generic characters with the same animations, so I started by finding out all I could about exporting characters from CC3 to Unity as efficiently as possible. From the tutorials, I was able to figure out a lot of what I was missing in the export settings.

[Read more…]

Setting Up GitLab with GitHub

I decided to finally get to know GitLab a bit more for the sake of backing up my projects and hopefully setting things up for version control if I work collaboratively in the future. Thing is, I hate the git shell and command-line workflow and vastly prefer the ease of GitHub, so I took some time to figure out the handshake between them. Here I’ll try to explain the setup process as simply as I can.

[Read more…]
