These studies capture, analyze, and classify the movements of objects with different orientations, with applications in diverse areas such as human movement analysis and fluid dynamics.
A Pilot Study for Developing Mobile App and Cloud Computing for Upper Extremities Motion Analysis
Shoulder and elbow range of motion (ROM) impairment significantly affects an individual’s quality of life, yet existing healthcare solutions for evaluating ROM are often limited by cost, time, and accessibility constraints, particularly in smaller cities. In response, this paper presents a case study on a novel mobile and cloud computing application that leverages advancements in Machine Learning and Artificial Intelligence to address these challenges. ROMs calculated by the mobile application are evaluated against a motion-capture reference standard (a 10-camera Qualisys system) for specific upper extremity movements. Results demonstrate that the proposed mobile and cloud application measures ROM accurately, with slight deviations at peak ROM compared to the Qualisys system. By providing a mobile, cost-effective solution, the proposed application aims to enhance diagnostic capabilities and address the critical need for automated motion assessment to support clinical and healthcare decision-making.
- P. S. Kumthekar et al., “A Pilot Study for Developing Mobile App and Cloud Computing for Upper Extremities Motion Analysis,” 2024 IEEE Cloud Summit, Washington, DC, USA, 2024, pp. 24-27, doi: 10.1109/Cloud-Summit61220.2024.00011.
- Keywords: Cloud Computing; Accuracy; Tracking; Shoulder; Medical Services; Motion Capture; Mobile Applications; Mobile Computing; Upper Extremity Movement; Range of Motion; Motion Tracking
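The paper does not publish its angle formulas, so as a rough illustration of the underlying idea, the sketch below derives a shoulder elevation angle from two pose-estimation landmarks. It assumes y-up camera coordinates and hypothetical landmark inputs; it is not the authors' implementation.

```python
import numpy as np

def shoulder_elevation_deg(shoulder, elbow):
    # Angle between the upper-arm vector (shoulder -> elbow) and the
    # downward vertical; a simple proxy for shoulder abduction/flexion.
    # Assumes y-up camera coordinates (hypothetical convention).
    arm = np.asarray(elbow, float) - np.asarray(shoulder, float)
    down = np.array([0.0, -1.0, 0.0])
    cosang = np.dot(arm, down) / np.linalg.norm(arm)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

Sampling such an angle over a recorded movement and taking its extrema would yield the ROM values that the study compares against the Qualisys reference.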
Recognition and Classification of Vortical Flows using Artificial Neural Networks and Graftieaux’s Identification Criteria
One of the main challenges of studying unsteady fluid mechanics is modeling and analyzing the nonlinear wing-vortex interaction. The topology of the wake structure can significantly alter the aerodynamic outcomes of the maneuvering object. Identifying and tracking vortical structures is difficult because existing methods require computationally expensive algorithms and human input. Applying machine learning techniques to classify these structures has the potential to streamline vortex identification. Instead of using computationally expensive algorithms that require fluid dynamic analysis to define large-scale geometric vortex properties, a neural network can learn from a predefined training dataset and identify vortical structures from images of a flowfield. The objective of this study is to develop a convolutional neural network model to recognize and classify vortical structures in an unsteady flow. Applying this network to a vorticity field yields a distribution of vortex probabilities at each location in the flowfield, which is then used to identify vortex centroids. Recording the identified vortex centroids across multiple time steps allows parameters such as vortex trajectory and velocity to be calculated for further analysis of the wake structure. The performance of this method is then compared to that of existing methods in terms of accuracy and computational efficiency.
Read more about this work at:
- O’Donoghue, Dylan T., Chang-Kwon Kang, and Truong Tran (2023). “Recognition and Classification of Vortical Flows using Artificial Neural Networks and Graftieaux’s Identification Criteria.” AIAA SCITECH 2023 Forum. https://doi.org/10.2514/6.2023-2633
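Graftieaux's Γ1 identification criterion itself is compact enough to sketch. The function below evaluates Γ1 at a point P from neighborhood velocity samples, following the standard definition (the mean sine of the angle between PM and the velocity at M, whose magnitude approaches 1 near a vortex center); this is a minimal NumPy sketch, not the paper's network or training pipeline.

```python
import numpy as np

def gamma1(px, py, xs, ys, us, vs):
    """Graftieaux's Gamma_1 at point P = (px, py).

    xs, ys: coordinates of the neighborhood sample points M
    us, vs: velocity components at those points
    """
    rx, ry = xs - px, ys - py
    cross = rx * vs - ry * us                 # z-component of PM x U_M
    norm = np.hypot(rx, ry) * np.hypot(us, vs)
    valid = norm > 1e-12                      # exclude P itself and stagnation points
    return float(np.mean(cross[valid] / norm[valid]))
```

For a solid-body rotation (u = -y, v = x), Γ1 at the origin evaluates to 1, the theoretical maximum; thresholding |Γ1| over a grid gives candidate vortex centroids that can then be tracked across time steps, as in the study.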
Autonomous Body Movements Tracking
Lower-Gait Tracking Application Using Smartphones and Tablets
The advent of Artificial Intelligence (AI) and Machine Learning (ML) has enabled smart devices such as smartphones and tablets to detect and track moving objects in three-dimensional (3D) space. AI/ML models such as the ML Kit Pose Detection API by Google and ARKit (Augmented Reality Kit) by Apple Inc. allow the mobile camera to capture a person’s motion and collect robust, repeatable data for functional tasks by accurately determining multidimensional kinematics across joints. With this technology, a cost-effective mobile gait analysis application was developed using a single camera on a commercial smart device. The Lower-Body Motion Tracking version 1.0.1 (LGait) application is designed to support clinical decision-making in quantifying mobility by calculating and analyzing the 3D kinematics of walking. The LGait app runs on Apple (Apple Inc., USA) iOS mobile devices (iPhone and iPad). To capture body motion kinematics, the application uses Apple ARKit 3, powered by machine learning models running on the Apple Neural Engine, and was built with the Xcode 11 IDE and the Swift programming language. The LGait application provides all the significant features needed to support lower-limb gait analysis; its two main features are real-time 3D motion capture and 3D gait joint angle calculation. Tests comparing kinematics acquired from a Vicon motion capture system with the LGait application show comparable measurements. Proposed applications of LGait include the classification of mobility for clinical diagnosis and patient monitoring.
Read more about this work at:
- Tran, T.X., Kang, Ck., Mathis, S.L. (2022). Lower-Gait Tracking Application Using Smartphones and Tablets. In: Comito, C., Forestiero, A., Zumpano, E. (eds) Integrating Artificial Intelligence and IoT for Advanced Health Informatics. Internet of Things. Springer, Cham. https://doi.org/10.1007/978-3-030-91181-2_1
- T. X. Tran, C.-K. Kang and S. L. Mathis, “Lower-Gait Tracking Mobile Application: A Case Study of Lower body Motion Capture Comparison Between Vicon T40 System and Apple Augmented Reality,” 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2020, pp. 2654-2656, doi: 10.1109/BIBM49941.2020.9313201.
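The LGait joint-angle computation is not published in detail; the sketch below illustrates how a per-frame knee angle and a walking-trial ROM could be computed from three 3D joint positions. The inputs are hypothetical stand-ins for ARKit joint transforms, and this is an illustrative sketch rather than the app's code.

```python
import numpy as np

def joint_angle_deg(a, b, c):
    # Interior angle at joint b (e.g., the knee) formed by the
    # segments b -> a (toward the hip) and b -> c (toward the ankle).
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def range_of_motion(angle_trace):
    # ROM over a trial: peak minus trough of the per-frame angle series.
    angle_trace = np.asarray(angle_trace, float)
    return float(angle_trace.max() - angle_trace.min())
```

Evaluating `joint_angle_deg` on each captured frame and passing the resulting trace to `range_of_motion` is one plausible way a gait app could summarize knee flexion over a walking trial.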
More showcase examples:
Motion Tracking and Machine Learning to Analyze Upper Extremity Functions