Prototype Learning in Machine Learning: A Literature Review
Author: Zhang Xingxing, Zhu Zhenfeng, Zhao Yawei, Zhao Yao
Affiliation:

Fund Project: Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0102101); National Natural Science Foundation of China (U1936212, 61976018)

    Abstract:

    With the deep penetration of information technology into every field, the real world now produces data in abundance, which allows data-driven machine learning algorithms to extract valuable knowledge. At the same time, these large and complex data are inherently high-dimensional, highly redundant, and noisy. Prototype learning was developed to eliminate redundancy, discover the underlying data structure, and improve data quality: by finding a prototype set that represents the target set, it reduces the data in the sample space and thereby improves both the efficiency and the effectiveness of machine learning algorithms. Its feasibility has been demonstrated in many applications, and prototype learning has accordingly become one of the hot research topics in machine learning in recent years. This survey first introduces the research background and application value of prototype learning, together with an overview of related methods, the quality evaluation of prototypes, and typical applications. It then presents the research progress of prototype learning with respect to supervision mode and model design: the former covers unsupervised, semi-supervised, and fully supervised settings, while the latter compares four families of prototype learning methods, based respectively on similarity, determinantal point processes, data reconstruction, and low-rank approximation. Finally, future directions for prototype learning are discussed.
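    To make two of the four model-design families concrete, the following is a minimal, self-contained Python sketch; it is not code from the survey or any paper it covers, and all function and variable names are our own illustration. Part (a) greedily maximizes a facility-location objective, a standard similarity-based surrogate for exemplar selection; part (b) runs greedy MAP inference for a determinantal point process (DPP), which favors diverse prototypes via the log-determinant of a kernel submatrix.

```python
import numpy as np

def greedy_facility_location(X, k):
    """(a) Similarity-based sketch: greedily pick k rows of X maximizing
    the facility-location objective sum_i max_{j in S} sim(i, j)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                       # cosine similarity between samples
    n = X.shape[0]
    selected = []
    best = np.full(n, -np.inf)          # similarity to the nearest prototype so far
    for _ in range(k):
        # Objective value if candidate j were added to the prototype set.
        totals = np.maximum(S, best[:, None]).sum(axis=0)
        totals[selected] = -np.inf      # never pick the same prototype twice
        j = int(np.argmax(totals))
        selected.append(j)
        best = np.maximum(best, S[:, j])
    return selected

def greedy_kdpp_map(L, k):
    """(b) DPP-based sketch: greedy MAP inference, adding at each step the
    item that maximizes log det of the selected principal submatrix of L."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        gains = np.full(n, -np.inf)
        for j in range(n):
            if j not in selected:
                idx = selected + [j]
                sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
                if sign > 0:
                    gains[j] = logdet
        selected.append(int(np.argmax(gains)))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))
    print("facility location:", greedy_facility_location(X, 4))
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    L = Xn @ Xn.T + 1e-6 * np.eye(len(X))   # PSD kernel with a small jitter
    print("k-DPP MAP:", greedy_kdpp_map(L, 4))
```

    The two objectives behave in complementary ways: facility location rewards prototypes that are close to many samples (representativeness), whereas the DPP log-determinant penalizes prototypes that are similar to one another (diversity).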

Get Citation

Zhang XX, Zhu ZF, Zhao YW, Zhao Y. Prototype learning in machine learning: A literature review. Journal of Software, 2022, 33(10): 3732-3753 (in Chinese).

History
  • Received: August 26, 2020
  • Revised: January 22, 2021
  • Online: May 21, 2021
  • Published: October 6, 2022