JOURNAL OF BIOMECHANICS, vol.144, 2022 (SCI-Expanded)
Deep learning biomechanical models perform optimally when trained with large datasets; however, these are challenging to collect in gait laboratories, and few augmentation techniques are available. This study presents a data augmentation approach based on generative adversarial networks to generate synthetic motion capture (mocap) datasets of marker trajectories and ground reaction forces (GRFs). The proposed architecture, an adversarial autoencoder, consists of an encoder compressing mocap data into a latent vector, a decoder reconstructing the mocap data from that latent vector, and a discriminator distinguishing random vectors from encoded latent vectors. Direct kinematics (DK) and inverse kinematics (IK) joint angles, GRFs, and inverse dynamics (ID) joint moments calculated for real and synthetic trials were compared using statistical parametric mapping to ensure realistic data generation and to select optimal architectural hyperparameters based on percentage average differences across the gait cycle. We observed negligible differences for DK-computed joint angles and GRFs, but not for the inverse methods (IK: 29.2%, ID: 35.5%). When the same architecture was also trained including the IK-computed joint angles, we found no significant differences in kinematics and GRFs, and improved joint moment estimation (ID: 25.7%). Finally, we showed that our data augmentation approach improved the accuracy of joint kinematics (up to 23%, 0.8 degrees) and vertical GRFs (11%) predicted by standard neural networks from a single simulated pelvic inertial measurement unit. These findings suggest that predictive deep learning models can benefit from synthetic datasets produced with the proposed technique.
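For illustration, the sketch below shows the adversarial-autoencoder idea described in the abstract in PyTorch: an encoder mapping a flattened mocap window to a latent vector, a decoder reconstructing it, and a discriminator separating prior samples from encoded vectors. The input/latent dimensions, layer sizes, losses, and training loop are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

import torch
import torch.nn as nn

INPUT_DIM = 101 * 60   # assumed: 101 gait-cycle samples x 60 channels (markers + GRFs)
LATENT_DIM = 32        # assumed latent vector size

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(INPUT_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, INPUT_DIM))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # Distinguishes latent vectors drawn from the prior from encoded ones.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

enc, dec, disc = Encoder(), Decoder(), Discriminator()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce, mse = nn.BCELoss(), nn.MSELoss()

def train_step(batch):
    # 1) Reconstruction: encoder + decoder minimise the mocap reconstruction error.
    z = enc(batch)
    recon_loss = mse(dec(z), batch)
    # 2) Regularisation: discriminator learns to tell prior samples from encodings.
    prior = torch.randn(batch.size(0), LATENT_DIM)
    opt_d.zero_grad()
    d_loss = bce(disc(prior), torch.ones(batch.size(0), 1)) + \
             bce(disc(z.detach()), torch.zeros(batch.size(0), 1))
    d_loss.backward()
    opt_d.step()
    # 3) Adversarial step: encoder tries to make its encodings look like prior samples.
    g_loss = bce(disc(enc(batch)), torch.ones(batch.size(0), 1))
    opt_ae.zero_grad()
    (recon_loss + g_loss).backward()
    opt_ae.step()

train_step(torch.randn(8, INPUT_DIM))  # dummy batch standing in for normalised mocap windows

# Synthetic trials: sample latent vectors from the prior and decode them.
with torch.no_grad():
    synthetic = dec(torch.randn(16, LATENT_DIM))

Once trained, decoding prior samples in this way yields synthetic marker-trajectory/GRF windows that can augment the training set of a downstream predictive network, as done in the study.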