LEI Haoxiang, ZHANG Fuju, SUN Hao, et al. Label-Free Weak-Edge Cell Instance Segmentation Network for Global Feature Information Sharing and Interaction[J]. Journal of Huaqiao University (Natural Science), 2025, 46(5): 513-527. [doi:10.11830/ISSN.1000-5013.202507008]

Label-Free Weak-Edge Cell Instance Segmentation Network for Global Feature Information Sharing and Interaction

Journal of Huaqiao University (Natural Science) [ISSN: 1000-5013 / CN: 35-1079/N]

Volume:
Vol. 46
Issue:
No. 5, 2025
Pages:
513-527
Publication Date:
2025-09-20

Article Info

Title:
Label-Free Weak-Edge Cell Instance Segmentation Network for Global Feature Information Sharing and Interaction
Article Number:
1000-5013(2025)05-0513-15
Author(s):
LEI Haoxiang (雷昊翔)1,2, ZHANG Fuju (张富举)1,2, SUN Hao (孙浩)1,2, HONG Lan (洪岚)1,2, FU Yuqing (傅玉青)1,2, DU Yongzhao (杜永兆)1,2
1. College of Engineering, Huaqiao University, Quanzhou 362021, China; 2. Internet of Things Industry College, Huaqiao University, Quanzhou 362021, China
Keywords:
label-free cells; instance segmentation; cross-stage multi-branch edge self-awareness module; multi-branch aggregated downsampling; loss function
CLC Number:
TP751
DOI:
10.11830/ISSN.1000-5013.202507008
Document Code:
A
Abstract:
To address the insufficient instance segmentation accuracy caused by low contrast, weak cell edges, and diverse, irregular cell morphologies in label-free microscopic images, a global feature information sharing and interaction network based on the Protonet concept is proposed. First, a cross-stage multi-branch edge self-awareness module is designed to enhance the feature representation of low-contrast images and improve adaptability to irregular cell geometries. Second, a multi-branch aggregated downsampling architecture is constructed to alleviate feature loss in both the backbone network and the feature pyramid. Finally, a feature information sharing decoupled segmentation head is proposed to achieve cross-level joint optimization of the classification and regression tasks, and the Shape-IoU loss function is introduced to improve cell contour localization accuracy. Experimental results show that the proposed method achieves mAP@0.50 values of 86.73%, 56.63% and 45.13% on the easy, medium and difficult segmentation tasks, respectively, demonstrating that it can effectively improve the segmentation accuracy of weak-edge cell instances in label-free microscopic images and confirming its effectiveness and practicality.
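For readers unfamiliar with the Protonet mechanism the abstract builds on, the sketch below illustrates the generic prototype-mask assembly step used in Protonet-style instance segmentation (as popularized by YOLACT): a prototype branch predicts k class-agnostic prototype masks for the whole image, each detected cell carries k mask coefficients, and an instance mask is the sigmoid of a linear combination of prototypes. This is an illustrative sketch under those assumptions, not the authors' implementation; the array shapes, k = 32 and the 0.5 threshold are placeholders.

# Minimal, illustrative sketch of Protonet-style mask assembly (not the paper's code).
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def assemble_instance_masks(prototypes: np.ndarray,
                            coefficients: np.ndarray,
                            threshold: float = 0.5) -> np.ndarray:
    """Combine prototype masks with per-instance coefficients.

    prototypes:   (H, W, k) prototype masks from the prototype branch.
    coefficients: (n, k) mask coefficients, one row per detected instance.
    Returns a boolean array of shape (n, H, W), one binary mask per instance.
    """
    h, w, k = prototypes.shape
    # Linear combination: (H*W, k) @ (k, n) -> (H*W, n)
    lin = prototypes.reshape(-1, k) @ coefficients.T
    masks = sigmoid(lin).reshape(h, w, -1).transpose(2, 0, 1)  # (n, H, W)
    return masks > threshold

# Toy usage with random arrays standing in for real network outputs.
rng = np.random.default_rng(0)
protos = rng.standard_normal((136, 136, 32))   # H, W, k prototypes
coeffs = rng.standard_normal((5, 32))          # 5 detected instances
inst_masks = assemble_instance_masks(protos, coeffs)
print(inst_masks.shape)  # (5, 136, 136)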

References:

[1] MOEN E,BANNO D,KUDO T,et al.Deep learning for cellular image analysis[J].Nature Methods,2019,16(12).DOI:10.1038/s41592-019-0403-1.
[2] LALIT M,TOMANCAK P,JUG F.EmbedSeg: Embedding-based instance segmentation for biomedical microscopy data[J].Medical Image Analysis,2022,81:102523.DOI:10.1016/j.media.2022.102523.
[3] WANG Yidong,DU Yongzhao,LI Ling,et al.Cell segmentation method for bright-field microscopic images based on nucleus guidance[J].Laser & Optoelectronics Progress,2023,60(14):145-156.DOI:10.3788/LOP222437.(in Chinese)
[4] CAICEDO J C,GOODMAN A,KARHOHS K W,et al.Nucleus segmentation across imaging experiments: The 2018 data science bowl[J].Nature Methods,2019,16(12):1247-1253.DOI:10.1038/s41592-019-0612-7.
[5] EDLUND C,JACKSON T R,KHALID N,et al.LIVECell: A large-scale dataset for label-free live cell segmentation[J].Nature Methods,2021,18(9):1038-1045.DOI:10.1038/s41592-021-01249-6.
[6] TONG Lei,WU Mei,LIU Zhuosheng.Research progress of necroptosis in viral infectious diseases[J].Journal of Huaqiao University(Natural Science),2025,46(3):241-247.DOI:10.11830/ISSN.1000-5013.202501026.(in Chinese)
[7] HÖRST F,REMPE M,HEINE L,et al.CellViT: Vision transformers for precise cell segmentation and classification[J].Medical Image Analysis,2024,94:103143.DOI:10.1016/j.media.2024.103143.
[8] LIN Hao,LIN Meimin,CHANG Weijie,et al.MSTA-YOLO: A novel retinal ganglion cell instance segmentation method using a task-aligned coupled head and efficient multi-scale attention for glaucoma analysis[J].Biomedical Signal Processing and Control,2025,106:107695.DOI:10.1016/j.bspc.2025.107695.
[9] LI Dongming.Research on medical microscopic cell image segmentation[D].Changchun:Changchun University of Science and Technology,2021.DOI:10.26977/d.cnki.gccgc.2021.000029.(in Chinese)
[10] SHAKED N T,BOPPART S A,WANG Lihong,et al.Label-free biomedical optical imaging[J].Nature Photonics,2023,17(12):1031-1041.DOI:10.1038/s41566-023-01299-6.
[11] DU Yongzhao,LIU Bo,CHEN Haixin,et al.Label-free microscopic cell images adaptive enhancement via weighted fusion of bright, dark, and weak structure features[J].Biomedical Signal Processing and Control,2024,91:105973.DOI:10.1016/j.bspc.2024.105973.
[12] YIN Zhaozheng,KANADE T,CHEN Mei.Understanding the phase contrast optics to restore artifact-free microscopy images for segmentation[J].Medical Image Analysis,2012,16(5):1047-1062.DOI:10.1016/j.media.2011.12.006.
[13] WANG Xinwei,WANG Hao,WANG Jinlu,et al.Single-shot isotropic differential interference contrast microscopy[J].Nature Communications,2023,14(1):2063.DOI:10.1038/s41467-023-37606-6.
[14] JIANG Wenchao,YIN Zhaozheng.Seeing the invisible in differential interference contrast microscopy images[J].Medical Image Analysis,2016,34:65-81.DOI:10.1016/j.media.2016.04.010.
[15] SCHWENDY M,UNGER R E,PAREKH S H.EVICAN: A balanced dataset for algorithm development in cell and nucleus segmentation[J].Bioinformatics,2020,36(12):3863-3870.DOI:10.1093/bioinformatics/btaa225.
[16] MAŠKA M,ULMAN V,DELGADO-RODRÍGUEZ P,et al.The cell tracking challenge: 10 years of objective benchmarking[J].Nature Methods,2023,20(7):1010-1020.DOI:10.1038/s41592-023-01879-y.
[17] ANTONELLI L,POLVERINO F,ALBU A,et al.ALFI: Cell cycle phenotype annotations of label-free time-lapse imaging data from cultured human cells[J].Scientific Data,2023,10(1):677.DOI:10.1038/s41597-023-02540-1.
[18] KHALID N,MUNIR M,EDLUND C,et al.DeepCeNS: An end-to-end pipeline for cell and nucleus segmentation in microscopic images[C]//International Joint Conference on Neural Networks.Shenzhen:IEEE Press,2021:1-8.DOI:10.1109/IJCNN52387.2021.9533624.
[19] ZHU Yanming,YIN Xuefei,MEIJERING E.A compound loss function with shape aware weight map for microscopy cell segmentation[J].IEEE Transactions on Medical Imaging,2022,42(5):1278-1288.DOI:10.1109/TMI.2022.3226226.
[20] WAN Zhijiang,LI Manyu,WANG Zihan,et al.CellT-Net: A composite transformer method for 2-D cell instance segmentation[J].IEEE Journal of Biomedical and Health Informatics,2023,28(2):730-741.DOI:10.1109/JBHI.2023.3265006.
[21] FINDER S E,AMOYAL R,TREISTER E,et al.Wavelet convolutions for large receptive fields[C]//European Conference on Computer Vision.Cham:Springer Nature,2024:363-380.
[22] MAO Anqi,MOHRI M,ZHONG Yutao.Cross-entropy loss functions: Theoretical analysis and applications[C]//International Conference on Machine Learning.Honolulu:PMLR,2023:23803-23828.
[23] LI Xiang,WANG Wenhai,WU Lijun,et al.Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection[J].Advances in Neural Information Processing Systems,2020,33:21002-21012.
[24] ZHANG Hao,ZHANG Shuaijie.Shape-IoU: More accurate metric considering bounding box shape and scale[EB/OL].(2024-01-02)[2025-07-02].https://doi.org/10.48550/arXiv.2312.17663.

Memo:

Received: 2025-07-19
Corresponding author: DU Yongzhao (b. 1985), male, professor, Ph.D.; his research focuses on optical imaging and image processing, industrial vision and intelligent algorithms, and polarization vision and intelligent image computing. E-mail: yongzhaodu@126.com.
Foundation Item: Fujian Provincial Financial Appropriation Project (5032501)
Last Update: 2025-09-20