 TANG Kai,LIN Ning,LIN Zhenchao,et al.X-Ray Image Weld Defect Detection Algorithm Using Improved YOLOv9[J].Journal of Huaqiao University(Natural Science),2025,46(5):505-512.[doi:10.11830/ISSN.1000-5013.202508018]

X-Ray Image Weld Defect Detection Algorithm Using Improved YOLOv9

Journal of Huaqiao University (Natural Science) [ISSN: 1000-5013 / CN: 35-1079/N]

Volume:
Vol. 46
Issue:
No. 5, 2025
Pages:
505-512
Publication Date:
2025-09-20

Article Info

Title:
X-Ray Image Weld Defect Detection Algorithm Using Improved YOLOv9
Article ID:
1000-5013(2025)05-0505-08
Author(s):
TANG Kai1 LIN Ning2 LIN Zhenchao2 HUANG Kai2 ZHENG Lixin3
1. College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China; 2. Fujian Special Equipment Inspection and Research Institute, Huaqiao University, Quanzhou 362021, China; 3. College of Engineering, Huaqiao University, Quanzhou 362021, China
Keywords:
weld defect detection; tiny defect; YOLOv9 algorithm; deep learning
CLC Number:
TP391.41;TU229
DOI:
10.11830/ISSN.1000-5013.202508018
Document Code:
A
Abstract:
To address the low efficiency and strong subjectivity of traditional weld defect detection methods, as well as the shortcomings of existing deep learning models in recognizing tiny defects, resisting complex background interference, and adapting to multiple scales, an improved YOLOv9-based X-ray image weld defect detection algorithm, YOLOv9s-GMS, is proposed. The algorithm introduces the GhostConv module to enhance the extraction of tiny features, adds a single-head self-attention (SHSA) module to focus on defect regions while suppressing background interference, designs an MSDCA module to strengthen multi-scale weld feature representation, and employs the Shape-IoU loss function to improve the localization accuracy of irregular defects. Experimental results on the HWDXray dataset show that YOLOv9s-GMS achieves an mAP@0.5 of 0.954, an F1-score of 0.933, and a recall of 0.903, significantly outperforming algorithms such as YOLOv5, YOLOv9s, and Faster R-CNN. These results demonstrate that the proposed method effectively improves the accuracy, robustness, and multi-scale adaptability of weld defect detection.
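Note: As background to the modules named in the abstract, the following is a minimal, self-contained PyTorch sketch of a GhostConv-style block, added for illustration only. It is not the authors' implementation; the class name, channel split, and kernel sizes are assumptions based on the general GhostNet idea of producing part of the output channels with a cheap depthwise convolution.

import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """GhostConv-style block: a primary convolution produces half of the
    output channels; a cheap depthwise convolution then generates the
    remaining 'ghost' channels, reducing parameters and FLOPs compared
    with a full convolution of the same width."""

    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1):
        super().__init__()
        hidden = out_channels // 2  # channels from the primary convolution
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            # depthwise 5x5 convolution over the primary features (the "cheap operation")
            nn.Conv2d(hidden, hidden, 5, 1, 2, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)      # e.g. one backbone feature map
    print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])

In YOLO-family networks, blocks of this kind are typically used as drop-in replacements for standard convolutions, which is consistent with the abstract's stated goal of strengthening tiny-feature extraction without increasing computational cost.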


Memo:

Received: 2025-08-01
Corresponding author: ZHENG Lixin (1967-), male, professor, Ph.D.; his research focuses on image analysis, machine vision, and deep learning methods. E-mail: zlx@hqu.edu.cn.
Foundation items: Science and Technology Program of Fujian Province (No. 2020Y0039); Science and Technology Program of Quanzhou City, Fujian Province (No. 2020C0042R)
Last Update: 2025-09-20