Biophysically Detailed Model for Brain-like Perceptual Learning
Author:
CLC number: TP18
Funding: National Natural Science Foundation of China (61425025); National Key R&D Program of China (2020AAA0130400)

    Abstract:

    How the brain realizes learning and perception is an essential question for both the artificial intelligence and neuroscience communities. Since existing artificial neural networks (ANNs) differ from the real brain in both structure and computing mechanisms, they cannot be used directly to explore how the real brain learns and handles perceptual tasks. The dendritic neuron model is a computational model that simulates the information processing of neuronal dendrites and is therefore closer to biological reality than ANNs. Using dendritic neural networks to learn perceptual tasks is thus an important route to understanding the learning process of the real brain. However, current learning models based on dendritic neural networks mostly rely on simplified dendritic models and cannot capture the full signal-processing mechanisms of dendrites. To address this problem, this study proposes a learning model built on a biophysically detailed neural network of medium spiny neurons (MSNs), which can fulfill the corresponding perceptual tasks through learning. Experimental results show that the proposed model achieves high performance on a classical image classification task. In addition, the network is strongly robust to noise interference. Further analysis of the network reveals that, after learning, its neurons exhibit stimulus selectivity, a classical phenomenon in neuroscience. This indicates that the proposed model is biologically plausible and suggests that stimulus selectivity may be an essential property by which the brain fulfills perceptual tasks through learning.
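    To make the core idea concrete, the following is a minimal toy sketch of the "dendritic neuron as a multi-subunit network" concept the abstract builds on: each dendritic branch applies its own nonlinearity to its synaptic inputs, and the soma integrates the branch outputs. This is NOT the paper's biophysically detailed MSN model or its learning rule — the class name, gradient-descent update, and XOR task below are all illustrative assumptions chosen to show why dendritic nonlinearities give a single neuron more computational power than a linear unit.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DendriticNeuron:
    """One unit with n_branches nonlinear dendritic subunits (toy model)."""
    def __init__(self, n_inputs, n_branches, lr=0.5):
        self.U = rng.normal(0.0, 1.0, (n_branches, n_inputs))  # synapses -> branches
        self.w = rng.normal(0.0, 1.0, n_branches)              # branches -> soma
        self.lr = lr

    def forward(self, x):
        self.b = np.tanh(self.U @ x)      # per-branch dendritic nonlinearity
        return sigmoid(self.w @ self.b)   # somatic integration

    def update(self, x, target):
        # One gradient step on squared error -- a stand-in learning rule,
        # not the plasticity mechanism used in the paper.
        y = self.forward(x)
        dsoma = (y - target) * y * (1.0 - y)
        self.U -= self.lr * np.outer(dsoma * self.w * (1.0 - self.b ** 2), x)
        self.w -= self.lr * dsoma * self.b

# XOR: unsolvable by a single linear threshold unit, but learnable once
# dendritic branches act as nonlinear subunits.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([0.0, 1.0, 1.0, 0.0])

neuron = DendriticNeuron(n_inputs=2, n_branches=8)
mse_before = np.mean([(neuron.forward(x) - t) ** 2 for x, t in zip(X, T)])
for _ in range(2000):
    for x, t in zip(X, T):
        neuron.update(x, t)
mse_after = np.mean([(neuron.forward(x) - t) ** 2 for x, t in zip(X, T)])
preds = np.array([neuron.forward(x) for x in X])
print(mse_before, "->", mse_after)
```

    The single-neuron-as-two-layer-network abstraction follows the spirit of reference works on dendritic computation (e.g., Poirazi et al.); the paper itself replaces these algebraic subunits with Hodgkin-Huxley-style compartmental MSN models.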

Cite this article: Zhang YC, He G, Du K, Huang TJ. Biophysically detailed model for brain-like perceptual learning. Ruan Jian Xue Bao/Journal of Software, 2024, 35(3): 1403-1417 (in Chinese with English abstract).
History
  • Received: 2022-06-29
  • Revised: 2022-09-09
  • Online: 2023-05-10
  • Published: 2024-03-06