The objective of this project is to develop a Python-based script that compares a machine learning-generated head mesh against an avatar mesh generated by 3-D scanning, and then draws statistical conclusions about the validity of the machine learning-generated avatar model for use in the diagnosis and treatment of traumatic head injuries.

Sponsor

 

Team Members

Cyrus Darvish | Casey Barnes | Adam Petro | Josh Coleman | Robert Gillen | Megan Perillo | Madison Lee

 

Project Poster



Project Video


Project Summary PDF



Project Summary

Overview

Traumatic brain injuries are a major health concern for athletes and soldiers, so technology capable of quickly diagnosing and tracking injuries in real time is of vital importance. The sponsor has been developing a sensor-enabled, cloud-based computing platform that predicts brain injuries from sensors embedded in a mouth guard: once a player experiences an impact, the sensor collects the player's acceleration data and sends it to an Application Programming Interface (API). The sponsor has also created an Avatar3D technology that transforms a two-dimensional image into a three-dimensional surface. Radial basis functions are then used to scale a template finite element mesh of the skull and brain to an individual-specific finite element mesh.

The goal of this project is to quantify the error between the real three-dimensional surface, based on a laser scan of the head, and the Avatar SDK prediction of the head. To complete the study, the plan was to acquire data from 30 human subjects across different demographics. Unfortunately, due to the limitations of COVID-19, we were unable to obtain permission to scan subjects outside of our team, since subjects would need to remove their masks. Once the data (face scans) are acquired, a Python program compares each scan to the algorithm-generated surface, using metrics such as the Hausdorff distance to assess similarity.

The program first reads meshes from Stanford Triangle Format (.ply) files, storing the vertices as 3D vectors and the faces as sets of indices for the three points that make up each triangle. This is achieved using the plyfile Python module. In addition to the vertices and faces, a list of all triangles is created from the vertices and face indices: each triangle is stored as a list of three 3D vectors, and the collection of all triangles is stored as a list of these triangle structures.
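As a sketch of this reading step (not the team's actual code), the snippet below uses the plyfile module to load a .ply mesh into vertex, face, and triangle structures; the 'vertex_indices' property name and the file names are assumptions that can vary between exporters.

```python
# Sketch of the mesh-reading step using the plyfile module.
# Assumes a standard .ply layout with 'vertex' and 'face' elements and a
# 'vertex_indices' face property (names vary between exporters).
import numpy as np
from plyfile import PlyData

def read_mesh(path):
    """Return (vertices, faces, triangles) read from a .ply file."""
    ply = PlyData.read(path)

    # Each vertex is stored as a 3D vector (x, y, z).
    v = ply['vertex']
    vertices = np.column_stack([v['x'], v['y'], v['z']]).astype(float)

    # Each face is a set of three vertex indices forming one triangle.
    faces = [list(f) for f in ply['face']['vertex_indices']]

    # A triangle is a list of three 3D vectors; collect all of them.
    triangles = [[vertices[i] for i in face] for face in faces]
    return vertices, faces, triangles

# Example usage (file names are placeholders):
# scan_v, scan_f, scan_t = read_mesh('scan.ply')
# avatar_v, avatar_f, avatar_t = read_mesh('avatar.ply')
```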
Next, the input meshes are normalized so that the 3D heads they represent are centered at the origin of the global coordinate system, face the same direction, and are the same size. This step ensures that the later subfunctions of the program compute the difference between the shapes of the meshes while ignoring differences in how they are stored. Normalization is achieved by applying shape-preserving transformations, known as similarity transformations, to the two meshes; the similarity transformations used in this project are translation, rotation, and uniform scaling. While the current plan is to integrate this step into the code, normalization may instead be performed manually before the files are input into the program. Once the two meshes are normalized, a basis for comparison can be formed.
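A minimal sketch of the normalization idea is shown below, assuming NumPy arrays of vertices; it covers only the translation and uniform-scaling parts of the similarity transformation, with rotational alignment (for example, a landmark- or ICP-based fit) left out.

```python
# Sketch of mesh normalization by similarity transformation (translation and
# uniform scaling). Rotating both heads to face the same direction would be a
# third step (e.g., a landmark- or ICP-based alignment) and is omitted here.
import numpy as np

def normalize(vertices):
    """Center a vertex array at the origin and scale it to unit RMS radius."""
    centered = vertices - vertices.mean(axis=0)          # translation to origin
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())  # RMS distance from center
    return centered / scale                              # uniform scaling

# scan_v = normalize(scan_v)
# avatar_v = normalize(avatar_v)
```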

Objectives

The purpose of this capstone design project is to evaluate the accuracy of the Avatar3D technology in representing the true shape of the human head. To make individual-specific finite element models, individuals are asked to take a profile photo, or 'selfie', upon account creation. When the selfie is uploaded, it is sent to the Avatar3D Application Programming Interface (API), which transforms the two-dimensional image into a three-dimensional surface. Then, radial basis functions are used to scale a template finite element mesh of the skull and brain to a 'target' three-dimensional surface created by the avatar algorithm, which relies on machine learning and machine vision. The goal is to gather enough data to form a statistical description of the accuracy of the algorithm in representing the true head shape. The general idea is to generate avatars using the machine learning/machine vision approach and compare them to a 'fully resolved' approach in which laser scanning is used to generate point clouds of the true head shape. The aim is that, by the end of this project, a statement of accuracy can be made that determines the success of the Avatar3D technology.
Development of a code that can determine the similarity of two surface meshes would have a significant impact on the medical device and diagnostics industry. For this project, a code that can assess the similarity of these computer-generated scans will allow research in traumatic brain injuries to advance further. Since no such 'similarity code' currently exists, there are no constraints on its implementation. Our goal is to use metrics such as the Hausdorff distance, facial landmark positions, and the alignment of the two surfaces to compare their similarity.

Approach

To establish a basis of comparison, head scans were first collected with a 360-degree scan, which generated a 3D sensor mesh using the iSense sensor on an iPad together with the iSense and itSeez3D applications. Then, a machine learning mesh was generated with the Avatar SDK from nothing more than an input selfie. These two meshes were then input into the designed code for comparison, with mesh normalization completed manually.
Only the face is pertinent to the project, so the program crops each mesh to the region between the chin and the top of the hairline vertically, and between the ears horizontally.
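The cropping step could look roughly like the sketch below, where the chin, hairline, and ear bounds are placeholders that would in practice come from picked landmarks rather than hard-coded numbers.

```python
# Sketch of isolating the facial region with an axis-aligned crop. The bounds
# (chin to hairline along y, ear to ear along x) would in practice come from
# manually or automatically picked landmarks; the argument values are placeholders.
import numpy as np

def crop_face(vertices, faces, y_min, y_max, x_min, x_max):
    """Keep only faces whose three vertices all fall inside the crop box."""
    inside = ((vertices[:, 1] >= y_min) & (vertices[:, 1] <= y_max) &
              (vertices[:, 0] >= x_min) & (vertices[:, 0] <= x_max))
    kept_faces = [f for f in faces if all(inside[i] for i in f)]
    return kept_faces

# face_faces = crop_face(scan_v, scan_f, y_min=-0.8, y_max=0.9, x_min=-0.6, x_max=0.6)
```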
The meshes can then be compared using three chosen metrics: height difference, orientation difference, and the Hausdorff distance, which serves as the primary metric.
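As an illustration of the primary metric, the sketch below approximates the symmetric Hausdorff distance from the two vertex sets using SciPy's directed_hausdorff; a stricter mesh-to-mesh version would use point-to-triangle distances rather than vertex-to-vertex distances.

```python
# Sketch of the symmetric Hausdorff distance between the two vertex sets,
# using SciPy's directed_hausdorff on the (normalized, cropped) vertices.
from scipy.spatial.distance import directed_hausdorff

def hausdorff(verts_a, verts_b):
    """Symmetric Hausdorff distance: the larger of the two directed distances."""
    d_ab = directed_hausdorff(verts_a, verts_b)[0]
    d_ba = directed_hausdorff(verts_b, verts_a)[0]
    return max(d_ab, d_ba)

# print(f"Hausdorff distance: {hausdorff(scan_v, avatar_v):.4f}")
```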
A colorized mesh is rendered for each metric as a visual output, along with a corresponding histogram; on the colorized mesh, blue indicates 5 mm of difference. The numerical data is also written to a plain-text file of statistical values for each metric, to better understand the data being presented.
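A rough sketch of this output stage is shown below, using per-vertex nearest-neighbor distances as the example metric; the file names, bin count, and choice of statistics are illustrative rather than the project's actual output format.

```python
# Sketch of the histogram and plain-text summary output for one metric, here
# per-vertex nearest-neighbor distances from the scan to the avatar mesh.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import cKDTree

def report_metric(scan_v, avatar_v, name='distance_metric'):
    # Distance from each scan vertex to its nearest avatar vertex.
    dists, _ = cKDTree(avatar_v).query(scan_v)

    # Histogram of the per-vertex distances.
    plt.hist(dists, bins=50)
    plt.xlabel('distance')
    plt.ylabel('vertex count')
    plt.savefig(f'{name}_histogram.png')
    plt.close()

    # Plain-text file with summary statistics for the metric.
    with open(f'{name}_stats.txt', 'w') as f:
        f.write(f'mean:   {dists.mean():.4f}\n')
        f.write(f'median: {np.median(dists):.4f}\n')
        f.write(f'max:    {dists.max():.4f}\n')
        f.write(f'std:    {dists.std():.4f}\n')
```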

Outcomes

Due to the present COVID-19 pandemic, the Pennsylvania State University decided not to allow any human-subjects testing for university-affiliated research. Because of this ruling, the team was only able to sample its own members, bringing the data pool down to 4 subjects. From the limited data collected, the team concludes that, when the facial region is isolated, the AvatarSDK software would likely capture the likeness of a human face with the same accuracy as a 3D scan using the iSense sensor. In the future, Kraft Laboratories will collect more data and be able to determine the full statistical significance of the results.