References:
[1] World Health Organization.Global status report on road safety 2018: Summary[R].Geneva:WHO,2018.
[2] LI Xiaofei,FLOHR F,YANG Yue,et al.A new benchmark for vision-based cyclist detection[C]//Proceedings of the IEEE Intelligent Vehicles Symposium(IV).Gothenburg:IEEE Press,2016:1109-1114.
[3] LIN Hanhe,SIEBERT F W.HELMET dataset[EB/OL].(2020-03-06)[2022-10-23].https://osf.io/4pwj8/.
[4] CHEN Chuangchuang,HU Shaofang.Research on intelligent detection of helmet wearing in dense scenes[J].Intelligent Computer and Applications,2020,10(9):223-224.
[5] WANG Wei,GAO Song,SONG Renjie.A safety helmet detection method based on the combination of SSD and HSV color space[M]//KIM H.IT Convergence and Security.Singapore:Springer,2021:117-211.DOI:10.1007/978-981-15-9354-3_12.
[6] LIU Chen,WANG Jiangtao,WANG Mingyang.Application of an SSD network incorporating a visual mechanism to motorcycle helmet wearing detection[J].Journal of Electronic Measurement and Instrumentation,2021,35(3):144-151.DOI:10.13382/j.jemi.B2003332.
[7] RAN Xiansheng,CHEN Zhuo,ZHANG He.Road motorcycle helmet detection based on improved YOLOv2 algorithm[J].Electronic Measurement Technology,2021,44(24):105-115.DOI:10.19651/j.cnki.emt.2107718.
[8] RAN Xiansheng,ZHANG Zhiyun,CHEN Zhuo,et al.Motorcycle helmet wearing detection based on improved DeepSORT algorithm[EB/OL].(2022-07-27)[2022-10-27].http://kns.cnki.net/kcms/detail/11.2127.TP.20220726.1653.016.html.
[9] REN Shaoqing,HE Kaiming,GIRSHICK R,et al.Faster R-CNN: Towards real-time object detection with region proposal networks[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2016,39(6):1137-1149.DOI:10.1109/tpami.2016.2577031.
[10] LIU Wei,ANGUELOV D,ERHAN D,et al.SSD: Single shot multibox detector[C]//European Conference on Computer Vision.Amsterdam:Springer,2016:21-37.DOI:10.1007/978-3-319-46448-0_2.
[11] REDMON J,FARHADI A.YOLOv3: An incremental improvement[EB/OL].(2018-04-08)[2022-10-23].https://doi.org/10.48550/arXiv.1804.02767.
[12] BOCHKOVSKIY A,WANG C Y,LIAO H Y M.YOLOv4: Optimal speed and accuracy of object detection[EB/OL].(2020-04-23)[2022-10-23].https://doi.org/10.48550/arXiv.2004.10934.
[13] WANG C Y,BOCHKOVSKIY A,LIAO H Y M.YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[EB/OL].(2022-07-06)[2022-10-23].https://doi.org/10.48550/arXiv.2207.02696.
[14] ZHOU Xingyi,WANG Dequan,KRÄHENBÜHL P.Objects as points[EB/OL].(2019-04-16)[2022-10-23].https://doi.org/10.48550/arXiv.1904.07850.
[15] LIN T Y,GOYAL P,GIRSHICK R,et al.Focal loss for dense object detection[C]//Proceedings of the IEEE International Conference on Computer Vision.Venice:IEEE Press,2017:2980-2988.DOI:10.1109/iccv.2017.324.
[16] DOSOVITSKIY A,BEYER L,KOLESNIKOV A,et al.An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL].(2020-10-22)[2022-10-25].http://www.arxiv-vanity.com/papers/2010.11929/.
[17] TAN Mingxing,LE Q V.EfficientNet: Rethinking model scaling for convolutional neural networks[C]//International Conference on Machine Learning.Long Beach:PMLR,2019:6105-6114.DOI:10.48550/arXiv.1905.11946.
[18] LIU Changyuan,HE Xianping,BI Xiaojun.Vehicle type recognition based on EfficientNet fused with attention mechanism[J].Journal of Zhejiang University (Engineering Science),2022,56(4):775-782.DOI:10.3785/j.issn.1008-973X.2022.04.017.
[19] TAO Yingjie,ZHANG Weiwei,MA Xin,et al.Vehicle object detection method for UAV video analysis[J].Journal of Huaqiao University (Natural Science),2022,43(1):111-118.DOI:10.11830/ISSN.1000-5013.202011014.
[20] LIU Songtao,HUANG Di.Receptive field block net for accurate and fast object detection[C]//Proceedings of the European Conference on Computer Vision.Munich:Springer,2018:385-400.DOI:10.48550/arXiv.1711.07767.
[21] WANG Qilong,WU Banggu,ZHU Pengfei,et al.ECA-Net: Efficient channel attention for deep convolutional neural networks[C]//IEEE Conference on Computer Vision and Pattern Recognition.Seattle:IEEE Press,2020:13-19.DOI:10.48550/arXiv.1910.03151.
[22] YANG Lingxiao,ZHANG Ruyuan,LI Lida,et al.SimAM: A simple, parameter-free attention module for convolutional neural networks[C]//International Conference on Machine Learning.[S.l.]:PMLR,2021:11863-11874.
[23] WANG Niantao,WANG Shuqing,HUANG Jianfeng,et al.Insulator defect detection method based on improved YOLOv5 neural network[J].Laser Journal,2022,43(8):60-65.DOI:10.14016/j.cnki.jgzz.2022.08.060.
[24] LIU Ze,LIN Yutong,CAO Yue,et al.Swin transformer: Hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.Montreal:IEEE Press,2021:10012-10022.DOI:10.48550/arXiv.2103.14030.