TY - JOUR
AU - Phang, Maxson
AU - Liawatimena, Suryadiputra
PY - 2025
TI - Self-Supervised Contrastive Learning for Steering Angle Prediction in Autonomous Driving Simulations
JF - Journal of Computer Science
VL - 21
IS - 9
DO - 10.3844/jcssp.2025.2081.2087
UR - https://thescipub.com/abstract/jcssp.2025.2081.2087
AB - Predicting steering angles is a crucial task in autonomous driving, with end-to-end deep learning being a widely used approach. However, such models often suffer from overfitting to learned trajectories, limiting their generalization to new data. To address this, self-supervised contrastive learning is explored as an alternative, allowing models to learn meaningful representations without labeled data. The primary objective is to enhance model generalization, particularly in highway driving scenarios. This study compares the effectiveness of Triplet loss and NT-Xent loss using different encoder architectures. The results show that Triplet loss significantly improves training stability and generalization, while strong data augmentation further enhances performance. MobileNet outperforms ResNet50 in fine-tuning, achieving an MSE of 137.281, demonstrating the potential of lightweight models in contrastive learning tasks. However, compared to the end-to-end Vision Transformer model proposed by Sonata et al. (2023), which achieves an MSE of 2.991, the proposed method still falls short, highlighting the need for further optimization. A major limitation is dataset imbalance, with a predominance of straight-road samples leading to poor performance on sharp turns. Future work should focus on dataset balancing, leveraging deeper encoder architectures, and incorporating hybrid learning approaches to improve generalization. These findings contribute to the development of self-supervised learning for autonomous driving, balancing computational efficiency with predictive accuracy.
ER -