Edge Sampling Based Network Embedding Model

CLC Number: TP18

Fund Project: National Natural Science Foundation of China (61572376); 111 Project (B07037)

    Abstract:

    With the development of online social networks such as Weibo, WeChat, and Facebook, network representation learning has drawn widespread research interest from both academia and industry. Traditional network embedding models exploit the spectral properties of matrix representations of graphs, and they suffer from both computation and performance bottlenecks when applied to real-world networks. Recently, many neural-network-based embedding models have been presented in the literature. They are computationally efficient and preserve network structure information well. However, the vertices in a network are connected by various types of relations, which convey rich information, and this important information is neglected by all existing models. This paper proposes NEES, an unsupervised network embedding model that encodes such relations. It first performs edge sampling to obtain edge vectors that reflect the relation types of the edges. It then uses the edge vectors to learn a low-dimensional representation for each node in the graph. Extensive experiments are conducted on several social networks and one citation network. The results show that NEES outperforms the state-of-the-art methods in multi-label classification and link prediction tasks. NEES also scales to large real-world networks.
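    The abstract outlines a two-step pipeline: sample edges to obtain edge-level information, then learn node embeddings from those samples. The sketch below is a deliberately simplified, hypothetical illustration of that general idea (edge sampling plus negative-sampling updates on node vectors); all function names, hyper-parameters, and the update rule are assumptions for illustration, not the authors' actual NEES model.

```python
import random
import numpy as np

def train_embeddings(edges, num_nodes, dim=16, epochs=200, lr=0.025, neg=5, seed=0):
    """Toy edge-sampling embedding trainer (illustrative only).

    Repeatedly samples an edge, then nudges its endpoint vectors together
    while pushing randomly drawn "negative" node pairs apart, in the style
    of logistic negative sampling.
    """
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(num_nodes, dim))
    nodes = list(range(num_nodes))
    random.seed(seed)
    for _ in range(epochs):
        u, v = random.choice(edges)  # the edge-sampling step
        pairs = [(u, v, 1.0)] + [(u, random.choice(nodes), 0.0) for _ in range(neg)]
        for a, b, label in pairs:
            score = 1.0 / (1.0 + np.exp(-emb[a] @ emb[b]))  # sigmoid of dot product
            grad = label - score                             # logistic-loss gradient
            ga, gb = grad * emb[b], grad * emb[a]            # use pre-update values
            emb[a] += lr * ga
            emb[b] += lr * gb
    return emb

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
emb = train_embeddings(edges, num_nodes=6)
print(emb.shape)  # one dim-dimensional vector per node
```

    A real model of this kind would additionally encode edge (relation) types into the edge vectors, which is the part this sketch omits.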

Get Citation

Chen L, Zhu PS, Qian TY, Zhu H, Zhou J. Edge sampling based network embedding model. Ruan Jian Xue Bao/Journal of Software, 2018,29(3):756-771 (in Chinese with English abstract)
History
  • Received: July 10, 2017
  • Revised: September 05, 2017
  • Online: December 05, 2017