APPLIED SCIENCES-BASEL, no. 13, 2024 (SCI-Expanded)
Throughout the evolution of machine learning, model sizes have steadily increased as researchers pursue higher accuracy by adding more layers. This growth in model complexity demands correspondingly more capable hardware. Today, state-of-the-art machine learning models have become so large that training them effectively requires substantial hardware resources, readily available to large companies but often out of reach for students and independent researchers. To make research on machine learning models more accessible, this study introduces a size-reduction technique that combines staged pyramid training with similarity comparison. We conducted experiments on classification, segmentation, and object detection tasks using various network configurations. Our results show that pyramid training can reduce model complexity by up to 70% while maintaining accuracy comparable to that of conventional full-sized models. These findings offer a scalable, resource-efficient solution for researchers and practitioners in hardware-constrained environments.
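The abstract does not spell out the procedure, but the core idea, growing a network in stages and pruning parts whose activations a similarity measure flags as redundant, can be illustrated with a minimal sketch. The PyTorch example below is a hypothetical illustration under stated assumptions, not the authors' published method: the width-preserving Block, the input-output cosine-similarity criterion io_similarity, and the SIM_THRESHOLD cutoff are all assumptions introduced here for concreteness.

    # Hypothetical sketch of staged ("pyramid") training with a
    # similarity-based reduction step. The stage schedule, the similarity
    # criterion, and the threshold are illustrative assumptions, not the
    # procedure from the paper.
    import torch
    import torch.nn as nn

    WIDTH = 64            # toy feature width (assumed)
    STAGES = 3            # number of pyramid growth stages (assumed)
    BLOCKS_PER_STAGE = 2  # blocks added per stage (assumed)
    SIM_THRESHOLD = 0.98  # similarity cutoff for dropping a block (assumed)

    class Block(nn.Module):
        """A width-preserving block, so its input and output are comparable."""
        def __init__(self, width):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU())
        def forward(self, x):
            return self.net(x)

    def io_similarity(block, x):
        """Mean cosine similarity between a block's input and its output."""
        with torch.no_grad():
            return nn.functional.cosine_similarity(block(x), x, dim=1).mean().item()

    def train_stage(blocks, head, data, targets, epochs=20):
        """Briefly fit the current stage (a stand-in for real task training)."""
        model = nn.Sequential(*blocks, head)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(data), targets).backward()
            opt.step()

    # Toy data standing in for a real classification task.
    x = torch.randn(256, WIDTH)
    y = torch.randint(0, 10, (256,))
    head = nn.Linear(WIDTH, 10)

    blocks = []
    for stage in range(STAGES):
        # Growth step: enlarge the pyramid, then train the bigger model.
        blocks += [Block(WIDTH) for _ in range(BLOCKS_PER_STAGE)]
        train_stage(blocks, head, x, y)
        total = len(blocks)
        # Reduction step: drop blocks whose output barely differs from their
        # input, i.e. blocks the similarity comparison marks as redundant.
        with torch.no_grad():
            kept, h = [], x
            for b in blocks:
                if io_similarity(b, h) < SIM_THRESHOLD:
                    kept.append(b)
                h = b(h)
        blocks = kept
        print(f"stage {stage}: kept {len(blocks)} of {total} blocks")

Dropping a block when its output is nearly identical to its input is only one plausible similarity criterion; comparing activations across stages or between sibling layers would fit the same staged framework, and the actual paper should be consulted for the criterion the authors use.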