TY - JOUR
AU - Smarts, Nyalalani
AU - Selvaraj, Rajalakshm
AU - Kuthadi, Venumadhav
PY - 2025
TI - A Conceptual Framework for Efficient and Sustainable Pruning Techniques in Deep Learning Models
JF - Journal of Computer Science
VL - 21
IS - 5
DO - 10.3844/jcssp.2025.1113.1128
UR - https://thescipub.com/abstract/jcssp.2025.1113.1128
AB - This research paper proposes a conceptual framework and an optimization algorithm for pruning techniques in deep learning models, focusing on key challenges such as model size, computational efficiency, inference speed, and sustainable technology development. The framework aims to support the transition from large neural networks to sparse, efficient models, highlighting the benefits of pruning for model scalability and the applicability of the pruned models. The proposed framework focuses on reducing model size, optimizing training schedules, and facilitating efficient deployment on real-world devices. The development of the framework involves four stages: reviewing critical research concepts, identifying relationships between concepts, and designing the pruning framework. Furthermore, this study introduces a new multi-objective optimization algorithm that formalizes the trade-offs between accuracy, computational cost, inference time, and energy consumption in the pruning process. Our experiments demonstrate the method's effectiveness in achieving notable model compression while preserving competitive performance on sentiment analysis and linguistic acceptability tasks using the Stanford Sentiment Treebank (SST-2) and Corpus of Linguistic Acceptability (CoLA) datasets. The results show that the BERT Base model, pruned to 25 million parameters, achieves an accuracy of 96.3% and an F1-score of 95.2% on the SST-2 dataset. On the CoLA dataset, the pruned model achieves an F1-score of 82.3% and a Matthews correlation coefficient of 56%. This framework, along with the algorithm, serves as a reference for researchers and practitioners, who can select a suitable approach based on the specific application requirements for pruning deep learning models.
ER -