Abstract: Electroencephalograph (EEG)-driven brain-computer interaction can support daily life and rehabilitation training for physically disabled people. However, EEG suffers from a low signal-to-noise ratio and significant individual differences, which reduce the accuracy and efficiency of EEG feature extraction and classification. Aiming to reduce the number of electrodes while increasing the number of identified classes, this study proposes an approach to classifying motor imagery (MI) EEG signals based on a convolutional neural network (CNN). Firstly, building on existing approaches, experiments were conducted and a CNN was constructed with three convolution layers, three pooling layers, and two fully connected layers. Secondly, an MI experiment was conducted in which subjects imagined left-hand movement, right-hand movement, foot movement, and a resting state, and the MI EEG data were collected simultaneously. Thirdly, the MI EEG data set was used to train the CNN-based classification model; the experimental results indicate an average classification accuracy of 82.81%, which is higher than that of related classification algorithms. Finally, the classification model was applied to online classification of MI EEG, and a BCI prototype system was designed and implemented to drive real-time human-robot interaction. The prototype system helps users control the motion states of a humanoid robot, such as raising its hands and moving forward. Furthermore, the experimental results show that the average accuracy of robot control reaches 80.31%, which verifies that the proposed approach can not only classify MI EEG data with high accuracy in real time but also promote applications of human-robot interaction with BCI.
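
The abstract specifies the network layout (three convolution layers, three pooling layers, and two fully connected layers) but not its kernel sizes, filter counts, or input dimensions. The following is a minimal PyTorch sketch of such a layout; the channel count (14), window length (256 samples), filter sizes, and hidden width are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MICNN(nn.Module):
    """Illustrative 3-conv / 3-pool / 2-FC network for 4-class MI EEG.

    Each trial is treated as a 1 x channels x samples "image"; the exact
    hyperparameters below are assumed, since the abstract does not give them.
    """
    def __init__(self, n_channels=14, n_samples=256, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 5), padding=(0, 2)),  # temporal filtering
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(16, 32, kernel_size=(3, 3), padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(32, 64, kernel_size=(3, 3), padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_flat, 128),        # first fully connected layer
            nn.ReLU(),
            nn.Linear(128, n_classes),     # second fully connected (output) layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 8 single-trial EEG "images" yields 4 class logits each
# (left hand, right hand, foot, resting state).
logits = MICNN()(torch.randn(8, 1, 14, 256))
print(logits.shape)  # torch.Size([8, 4])
```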