This paper addresses the sub-optimal performance of deep neural networks trained for image classification when the training and test distributions do not match. Spurious correlations, i.e., patterns in the training data that are correlated with the labels but irrelevant to the objects being classified, can cause accuracy to degrade at test time. We assess the robustness of pre-trained models to spurious correlations by evaluating them on datasets that contain such correlations. We compare the effectiveness of two training strategies: full fine-tuning and backbone freezing. Additionally, we explore the impact of robust training across our collection of models by applying the Deep Feature Reweighting (DFR) method to both frozen and fine-tuned backbones.
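To illustrate the distinction between the two training strategies compared above, the following minimal sketch contrasts full fine-tuning with backbone freezing for a pre-trained classifier. It is not the paper's exact setup; the choice of ResNet-50, the optimizer settings, and the number of classes are assumptions made for illustration only.

```python
# Illustrative sketch (assumed setup, not the paper's configuration):
# full fine-tuning vs. backbone freezing on a pre-trained ResNet-50.
import torch
import torch.nn as nn
from torchvision import models


def build_model(num_classes: int = 2) -> nn.Module:
    """Load an ImageNet-pre-trained backbone and attach a fresh classification head."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


# (a) Full fine-tuning: every parameter receives gradient updates.
finetune_model = build_model()
finetune_opt = torch.optim.SGD(finetune_model.parameters(), lr=1e-3, momentum=0.9)

# (b) Backbone freezing: only the new classification head is trained;
# all backbone parameters keep their pre-trained weights.
frozen_model = build_model()
for name, param in frozen_model.named_parameters():
    param.requires_grad = name.startswith("fc.")
frozen_opt = torch.optim.SGD(
    [p for p in frozen_model.parameters() if p.requires_grad],
    lr=1e-3,
    momentum=0.9,
)
```

Under this sketch, DFR-style robust training would correspond to re-training only the final linear layer (as in setup (b)) on a held-out set with balanced groups, on top of either the frozen or the fine-tuned backbone.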