WANG Menghao, FANG Huijuan, GONG Hengxiang, et al. Electro-Encephalogram Feature Extraction and Recognition Using Multi-Scale Hybrid Neural Network [J]. Journal of Huaqiao University (Natural Science), 2023, 44(5): 628-635. [doi:10.11830/ISSN.1000-5013.202306016]

Electro-Encephalogram Feature Extraction and Recognition Using Multi-Scale Hybrid Neural Network

Journal of Huaqiao University (Natural Science) [ISSN: 1000-5013 / CN: 35-1079/N]

Volume:
Vol. 44
Issue:
No. 5, 2023
Pages:
628-635
Section:
Publication Date:
2023-09-20

Article Info

Title:
Electro-Encephalogram Feature Extraction and Recognition Using Multi-Scale Hybrid Neural Network
Article ID:
1000-5013(2023)05-0628-08
Author(s):
WANG Menghao 1,2, FANG Huijuan 1,2, GONG Hengxiang 1,2, LUO Jiliang 1,2
1. College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China; 2. Fujian Engineering Technology Research Center of Motor Control and System Optimal Schedule, Huaqiao University, Xiamen 361021, China
Keywords:
convolutional neural network; channel attention mechanism; EEG recognition; feature extraction
CLC Number:
TP391
DOI:
10.11830/ISSN.1000-5013.202306016
Document Code:
A
Abstract:
To solve the problem of low classification accuracy caused by insufficient feature extraction from electro-encephalogram (EEG) signals, a novel hybrid neural network model (EEG-MSTNet) is proposed to achieve time-frequency-spatial domain feature extraction and recognition of EEG signals. Firstly, the EEG-MSTNet model adopts a multi-scale convolution suited to the characteristics of EEG signals: features from four sets of convolutional kernels of different sizes are extracted and concatenated, enhancing the time-frequency domain extraction ability on the raw EEG signals. Secondly, the spatial features and high-dimensional temporal features of the signals are further extracted through a channel attention mechanism and are ultimately used for EEG signal recognition. The EEG-MSTNet model is tested on the BCI Competition Ⅳ Dataset 2a. The results show that each module of the EEG-MSTNet model contributes to the improvement of classification accuracy, with a maximum classification accuracy of 95.83% and an average accuracy of 83.52%, which is significantly better than that of the other models.
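The abstract names two architectural ingredients: a multi-scale convolution whose four branches use different kernel sizes and are concatenated, and a channel attention mechanism applied afterward. As a rough illustration only, the following minimal PyTorch-style sketch shows how such a block could be wired together; the kernel sizes, channel counts, depthwise spatial convolution, and classifier head are hypothetical assumptions chosen for the example and are not taken from the published EEG-MSTNet design.

# Minimal sketch: multi-scale temporal convolution + squeeze-and-excitation channel
# attention. All hyperparameters below are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Four parallel temporal convolutions with different kernel lengths, concatenated."""
    def __init__(self, in_ch=1, branch_ch=8, kernel_sizes=(15, 31, 63, 125)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=(1, k), padding=(0, k // 2), bias=False)
            for k in kernel_sizes
        ])
        self.bn = nn.BatchNorm2d(branch_ch * len(kernel_sizes))

    def forward(self, x):                              # x: (batch, 1, electrodes, time)
        feats = [b(x) for b in self.branches]
        t = min(f.shape[-1] for f in feats)            # align lengths before concatenation
        feats = [f[..., :t] for f in feats]
        return self.bn(torch.cat(feats, dim=1))        # concatenate along the channel axis

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel reweighting."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                # squeeze: global average pooling
        return x * w.unsqueeze(-1).unsqueeze(-1)       # excite: rescale each feature map

class EEGClassifier(nn.Module):
    def __init__(self, n_electrodes=22, n_classes=4):
        super().__init__()
        self.msconv = MultiScaleTemporalConv()
        self.spatial = nn.Conv2d(32, 32, kernel_size=(n_electrodes, 1), groups=32, bias=False)
        self.attn = ChannelAttention(32)
        self.pool = nn.AdaptiveAvgPool2d((1, 8))
        self.head = nn.Linear(32 * 8, n_classes)

    def forward(self, x):
        x = self.attn(self.spatial(self.msconv(x)))
        return self.head(self.pool(x).flatten(1))

# Example: a batch of 4 trials, 22 electrodes, 1000 time samples (BCI IV 2a-like shape).
logits = EEGClassifier()(torch.randn(4, 1, 22, 1000))
print(logits.shape)  # torch.Size([4, 4]) -> one score per motor-imagery class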

Memo:
Received: 2023-06-16
Corresponding author: FANG Huijuan (1979-), female, lecturer, Ph.D.; her research interests include brain-computer interfaces, EEG data analysis, Petri nets, intelligent control, and robotics. E-mail: huijuan.fang@163.com.
Funding: National Natural Science Foundation of China (61973130). http://www.hdxb.hqu.edu.cn
Last Update: 2023-09-20