We took some time recently for an experiment in digitizing a bit of Penn State’s campus using point cloud data. By taking a lot of images of the area and running them through photogrammetry software, we generated a fairly large point cloud of one of the greenhouses on campus. From there, the point cloud was handed to me as a blocky 3D mesh, and I attempted to make a workable VR environment from it. There were a lot of issues along the way, however, which we need to keep in mind for future attempts.
To start, here’s one of the images of the greenhouse from the data gathering.
A few big issues came up from the choice of location. First, the point cloud really did NOT like recreating the semi-transparent tarp the greenhouse is wrapped in. This may also have been because the chosen area was simply too large. The resulting point cloud had some clear features and captured the ground well enough, but the top part was almost entirely missing. The bigger issue, however, was that we couldn’t get the point cloud’s color values to transfer over to Maya correctly, so I ended up having to work from an incomplete grey blob of a mesh. The point cloud was also absolutely massive in Maya, which would have been a problem if not for the high-end computer I use for CIE work.
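We never fully pinned down where the colors got lost, so for next time, one cheap diagnostic (a sketch, not something we actually ran) is to load the exported cloud in Python with the open3d library and check whether per-point colors survived the export at all; the filename here is a placeholder.

```python
# Sketch: verify the exported point cloud actually contains color data
# before blaming the Maya import. Requires `pip install open3d`.
# "greenhouse.ply" is a placeholder filename, not our actual export.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("greenhouse.ply")
print(f"points: {len(pcd.points)}")
print(f"has colors: {pcd.has_colors()}")

if pcd.has_colors():
    colors = np.asarray(pcd.colors)  # N x 3 array of RGB values in [0, 1]
    print("first few colors:", colors[:5])
else:
    print("No per-point colors: the loss happened at export, not in Maya.")
```

If the colors are missing here, the fix belongs in the photogrammetry export settings rather than in Maya.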
It wasn’t all bad, however. There was enough visible structure to get the most important thing down: the shape of the area. Luckily, despite its size, the greenhouse is structurally a simple pattern of metal frames lined up in a row with the tarp stretched over them. Within the point cloud, even without color, you can clearly make out where the metal frames jut out from the rest of the data, which let me build the greenhouse’s “skeleton” fairly easily. From there it was a simple matter to put the tarp on top and roughly match the ground shape. This is the general benefit of using a point cloud: everything comes out at least proportionally right. The absolute scale was off, but as long as I had one point of reference with a measurable real-world value (e.g., a doorway), I could scale everything else up to match it.
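For reference, here’s roughly what that one-measurement calibration looks like as a Maya Python snippet. This is a sketch: the locator names, mesh name, and doorway height are all assumptions for illustration.

```python
# Sketch of the scale calibration, runnable in Maya's script editor.
# Locator names, mesh name, and the doorway height are assumptions.
import maya.cmds as cmds

REAL_DOOR_HEIGHT = 200.0  # assumed doorway height in scene units (cm here)

# Two locators placed by hand at the top and bottom of the doorway
top = cmds.xform("doorTop_loc", query=True, worldSpace=True, translation=True)
bottom = cmds.xform("doorBottom_loc", query=True, worldSpace=True, translation=True)
measured = abs(top[1] - bottom[1])

# One uniform scale makes the known measurement match reality;
# everything else in the cloud follows proportionally.
factor = REAL_DOOR_HEIGHT / measured
cmds.scale(factor, factor, factor, "greenhouse_mesh", relative=True)
```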
After this I went in and did some refining to make the ground match the point cloud well enough. Since the plants couldn’t be distinguished from the ground in the point cloud, I just followed the general shape with some grooves. I also added more details from the structure of the greenhouse, like the black cloth on the sides, and some props (e.g., the boxes in the back and the hoses running along its length). This was also a chance to try some simple plant modeling with the spinach present in the greenhouse: I created two leaf patterns and arranged them in two different setups. Once in Unity, I made prefabs of the plants and scattered them along the rows in somewhat random patterns, and gave them a script that randomizes each plant’s scale and rotation to create more variation. At this point, most of what I was doing was based on the real photo references we had rather than on the point cloud data. After all the material application and cleanup, the final result turned out pretty well. The overall proportions are very close to the original, though some aspects clearly didn’t transfer over cleanly.
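The actual randomizer was a small Unity component, but the logic is simple enough to sketch in Python; the jitter range and rotation bounds here are assumptions, not the values from the real script.

```python
# Sketch of the per-plant randomization logic (the real version was a
# Unity component). Jitter amounts are assumed, not the actual values.
import random

def randomize_plant(base_scale=1.0, scale_jitter=0.2, rng=random):
    """Return a (uniform scale, Y rotation in degrees) pair for one plant."""
    scale = base_scale * rng.uniform(1.0 - scale_jitter, 1.0 + scale_jitter)
    rotation_y = rng.uniform(0.0, 360.0)  # plants can face any direction
    return scale, rotation_y

# Example: transforms for a short row of plants
for i in range(5):
    scale, rot = randomize_plant()
    print(f"plant {i}: scale={scale:.2f}, rotY={rot:.1f}")
```

Even small amounts of jitter like this go a long way toward hiding the fact that every plant comes from the same couple of prefabs.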
Overall, this process has some promise, but I think we need to pick our subjects a bit better. A higher-quality point cloud would certainly provide better results, which means it would be better to pick a smaller area that can be captured in finer detail. Getting the colors into Maya would also help immensely, as that would let me differentiate parts of the cloud more easily in ambiguous cases like the ground versus the plants. The big takeaway, though, is that point cloud data is really only good for expediting the initial setup of the modeling: it helps with getting the proportional placement of things quickly. Once those objects are placed, everything else is better handled by working directly from the reference photos.