Folding Fan Cropping and Splicing (FFCS) Data Augmentation


Image processing
Data augmentation




Convolutional neural networks (CNNs) are widely used to tackle complex tasks but are prone to overfitting when datasets are noisy. We therefore propose a folding fan cropping and splicing (FFCS) regularization strategy to enhance the representation ability of CNNs. In particular, considering the effect of the number of segments on classification results, we propose two methods: a random folding fan method and a fixed folding fan method. Experimental results showed that FFCS reduced classification error by 0.88% on the CIFAR-10 dataset and by 1.86% on the ImageNet dataset, and that FFCS consistently outperformed the Mixup and Random Erasing approaches on classification tasks. FFCS thus effectively prevents overfitting and reduces the impact of background noise on classification tasks.
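The abstract does not specify the exact FFCS geometry, so the following is only a minimal sketch of a folding-fan-style crop-and-splice operation: the image is partitioned into equal angular sectors around its center (like the ribs of a folding fan), and sector contents are spliced back in a random order. The function name `ffcs_augment` and all implementation details (equal sectors, nearest-neighbor resampling) are assumptions for illustration, not the authors' method.

```python
import numpy as np

def ffcs_augment(image, num_sectors=4, rng=None):
    """Hypothetical sketch of a folding-fan crop-and-splice augmentation.

    Partitions the image into ``num_sectors`` equal angular sectors around
    the center and splices the sector contents back in a random order,
    using nearest-neighbor resampling.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    # Polar coordinates of every pixel relative to the image center.
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)

    # Assign each destination pixel to a fan sector.
    width = 2 * np.pi / num_sectors
    sector = np.minimum((theta / width).astype(int), num_sectors - 1)

    # Splice: destination sector s is filled with the content of sector
    # perm[s], i.e. pixels are looked up at an angularly shifted position.
    perm = rng.permutation(num_sectors)
    src_theta = theta + (perm[sector] - sector) * width
    src_x = np.clip(np.rint(cx + r * np.cos(src_theta)), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + r * np.sin(src_theta)), 0, h - 1).astype(int)
    return image[src_y, src_x]
```

Under this reading, the fixed folding fan method would keep `num_sectors` constant, while the random variant would draw it anew for each image (e.g. from a small range such as 2–8).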


LeCun Y, Bottou L, Bengio Y, et al., 1998, Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11): 2278–2324.

Steiner A, Kolesnikov A, Zhai X, et al., 2021, How to Train Your ViT? Data, Augmentation, and Regularization in Vision Transformers. Transactions on Machine Learning Research.

Thanh DNH, Thanh LT, Nguyen NH, et al., 2020, Adaptive Total Variation L1 Regularization for Salt and Pepper Image Denoising. Optik, 208: 163677.

Lewkowycz A, Gur-Ari G, 2020, On the Training Dynamics of Deep Networks with L2 Regularization. 34th Conference on Neural Information Processing Systems, 4790–4799.

Morerio P, Cavazza J, Volpi R, et al., 2017, Curriculum Dropout. 2017 IEEE International Conference on Computer Vision (ICCV), 3564–3572.

Wan L, Zeiler M, Zhang S, et al., 2013, Regularization of Neural Networks using DropConnect. Proceedings of International Conference on Machine Learning, 1058–1066.

Murugan P, Shanmugasundaram D, 2017, Regularization and Optimization Strategies in Deep Convolutional Neural Network. ArXiv.

Zhang Z, Li X, Zhang H, et al., 2021, Triplet Deep Subspace Clustering via Self-Supervised Data Augmentation. Proceedings of 2021 IEEE International Conference on Data Mining (ICDM), 946–955.

Sivaraman A, Kim S, Kim M, 2021, Personalized Speech Enhancement Through Self-Supervised Data Augmentation and Purification. Proceedings of INTERSPEECH, 2676–2680.

Bari MS, Mohiuddin T, Joty S, 2021, UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 1978–1992.

Lowell D, Howard B, Lipton ZC, et al., 2021, Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 4992–5001.

Devries T, Taylor GW, 2017, Improved Regularization of Convolutional Neural Networks with Cutout. ArXiv.

Zhang H, Cisse M, Dauphin YN, et al., 2018, mixup: Beyond Empirical Risk Minimization. ArXiv.

Singh KK, Yu H, Sarmasi A, 2018, Hide-and-Seek: A Data Augmentation Technique for Weakly-Supervised Localization and Beyond. ArXiv.

Yun S, Han D, Oh SJ, et al., 2019, CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 6022–6031.

Zhong Z, Zheng L, Kang G, et al., 2020, Random Erasing Data Augmentation. Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 13001–13008.

Chen P, Liu S, Zhao H, et al., 2020, GridMask Data Augmentation. ArXiv, arXiv:2001.04086.

Li P, Li X, Long X, et al., 2020, FenceMask: A Data Augmentation Approach for Pre-extracted Image Features. ArXiv.

Hendrycks D, Mu N, Cubuk ED, et al., 2019, AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. ArXiv.

Gong C, Wang D, Li M, et al., 2021, KeepAugment: A Simple Information-Preserving Data Augmentation Approach. Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1055–1064.

Zhu J-Y, Park T, Isola P, et al., 2017, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), 2242–2251.

Mirza M, Osindero S, 2014, Conditional Generative Adversarial Nets. ArXiv, arXiv:1411.1784.

Cubuk ED, Zoph B, Mané D, et al., 2019, AutoAugment: Learning Augmentation Strategies from Data. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 113–123.

Lim S, Kim I, Kim T, et al., 2019, Fast AutoAugment. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 1–11.

Cubuk ED, Zoph B, Shlens J, et al., 2020, RandAugment: Practical Automated Data Augmentation with a Reduced Search Space. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 3008–3017.

Buslaev AV, Parinov A, Khvedchenya E, et al., 2018, Albumentations: Fast and Flexible Image Augmentations. ArXiv.

Takahashi R, Matsubara T, Uehara K, 2018, RICAP: Random Image Cropping and Patching Data Augmentation for Deep CNNs. Proceedings of the 10th Asian Conference on Machine Learning, 785–798.

Zagoruyko S, Komodakis N, 2016, Wide Residual Networks. ArXiv, arXiv:1605.07146.

He K, Zhang X, Ren S, et al., 2016, Deep Residual Learning for Image Recognition. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.

Lee C-Y, Xie S, Gallagher P, et al., 2014, Deeply-Supervised Nets. ArXiv.

Romero A, Ballas N, Kahou SE, et al., 2014, FitNets: Hints for Thin Deep Nets. ArXiv.

Springenberg JT, Dosovitskiy A, Brox T, et al., 2014, Striving for Simplicity: The All Convolutional Net. ArXiv.

He K, Zhang X, Ren S, et al., 2015, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), 1026–1034.

Russakovsky O, Deng J, Su H, et al., 2015, ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115: 211–252.