ZHU Di, ZHANG Bo-Wen, CHENG Ya-Qi, LIU Xin-Yue, WU Wen-Long, WANG Tie-Xin, WEN Hao, LI Bo-Han
2023, 34(10):4439-4462. DOI: 10.13328/j.cnki.jos.006884
Abstract: The development of information systems is at the critical stage of moving from perceptual intelligence to cognitive intelligence. Traditional information systems can hardly meet these development requirements, and digital transformation is imperative. The digital thread is a full-life-cycle data processing framework that maps and analyzes the physical world and digital space by connecting data from different life-cycle stages. A knowledge graph is a structured semantic knowledge base that describes concepts and relationships in the physical world in symbolic form, supporting a systematic, knowledge-driven construction and reasoning process. Both are of great significance to research on knowledge-enabled information systems. This study reviews the current research status, development, and challenges of the new generation of knowledge-enabled information systems. First, starting from the digital thread system, the concept and development of the digital thread, its six-dimensional data structure, and its six data processing stages are introduced. Then, the generally accepted definition and the development of knowledge graph systems are given, and the structure and methods of knowledge graphs are summarized. Finally, directions for combining the digital thread with knowledge graphs are analyzed and explored, the benefits of KG4DT (knowledge graph for digital thread) and DT4KG (digital thread for knowledge graph) are listed, and open questions are raised about the new generation of knowledge-enabled information systems.
ZHAO Ying-Fu, JIN Fu-Sheng, LI Rong-Hua, QIN Hong-Chao, CUI Peng, WANG Guo-Ren
2023, 34(10):4463-4476. DOI: 10.13328/j.cnki.jos.006881
Abstract: Recently, graph convolutional networks (GCNs) have attracted much attention by generalizing convolutional neural networks to graph data, which includes redefining the convolution and pooling operations on graphs. However, since ordinary graphs can only encode dyadic relations, they often fall short in practice. In contrast, a hypergraph can capture high-order data interactions and, with its flexible hyperedges, easily handles complex data representations. Nevertheless, existing hypergraph convolutional network methods are still immature, and there is currently no effective operation for hypergraph pooling. Therefore, a hypergraph pooling network with a self-attention mechanism is proposed. Modeling data with a hypergraph structure, the model learns hidden node features carrying high-order information through a hyper-convolution operation that incorporates self-attention, selects nodes important in both structure and content through a hyper-pooling operation, and thus obtains a more accurate hypergraph representation. Experiments on text classification, dish classification, and protein classification tasks show that the proposed method outperforms recent state-of-the-art methods.
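The hyper-convolution and hyper-pooling steps can be pictured with a small numpy sketch. The normalized incidence-matrix convolution below is the standard hypergraph convolution formulation, and the pooling follows generic SAGPool-style self-attention gating; the paper's exact attention design may differ, and all names and shapes here are illustrative.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hyper-convolution layer: ReLU(Dv^-1/2 H De^-1 H^T Dv^-1/2 X Theta),
    where H is the n x m node-hyperedge incidence matrix."""
    dv = H.sum(axis=1)                                  # node degrees
    de = H.sum(axis=0)                                  # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De = np.diag(1.0 / np.maximum(de, 1e-12))
    return np.maximum(Dv @ H @ De @ H.T @ Dv @ X @ Theta, 0.0)

def attention_pool(X, H, w, k):
    """Self-attention pooling in the spirit of SAGPool: score every node,
    keep the top-k, and gate their features by the scores."""
    scores = np.tanh(X @ w)                             # one attention score per node
    keep = np.argsort(-scores)[:k]                      # k most important nodes
    return X[keep] * scores[keep, None], H[keep]

# Toy usage: 5 nodes, 3 hyperedges, 4-d features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
H = rng.integers(0, 2, size=(5, 3)).astype(float)
Xp = hypergraph_conv(X, H, rng.normal(size=(4, 4)))
Xk, Hk = attention_pool(Xp, H, rng.normal(size=4), k=3)
```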
BING Rui, YUAN Guan, MENG Fan-Rong, WANG Sen-Zhang, QIAO Shao-Jie, WANG Zhi-Xiao
2023, 34(10):4477-4500. DOI: 10.13328/j.cnki.jos.006883
Abstract: As a heterogeneous graph representation learning method, heterogeneous graph neural networks can effectively extract complex structural and semantic information from heterogeneous graphs and have achieved excellent performance in node classification and link prediction tasks, providing strong support for the representation and analysis of knowledge graphs. Since heterogeneous graphs contain noisy or missing interactions, heterogeneous graph neural networks incorporate erroneous neighbor features when aggregating and updating nodes, which degrades the overall performance of the model. To solve this problem, this study proposes a heterogeneous graph structure learning model enhanced by multi-view contrastive learning. First, meta-paths are used to preserve the semantic information in the heterogeneous graph, and a similarity graph is generated by computing the feature similarity between nodes under each meta-path; it is fused with the meta-path graph to optimize the graph structure. By contrasting the similarity graph and the meta-path graph as different views, the graph structure is optimized without supervision information, eliminating the dependence on supervision signals. Finally, since the learning ability of the model is insufficient at the initial stage of training and the generated graph structure often contains erroneous interactions, this study designs a progressive graph structure fusion method. Through incremental weighted addition of the meta-path graph and the similarity graph, the weight of the similarity graph is changed during fusion: erroneous interactions are prevented from being introduced at the initial stage of training, and the interactions in the similarity graph are used to suppress noisy interactions or complete missing ones, thus optimizing the structure of the heterogeneous graph. Node classification and node clustering are selected as the verification tasks of graph structure learning. The experimental results on four real heterogeneous graph datasets show that the proposed method is feasible and effective; compared with the best baseline model, its performance is significantly improved under both evaluation metrics.
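The progressive fusion step reduces to a time-varying convex combination of the two adjacency matrices. A minimal sketch, assuming a linear weight schedule and a maximum similarity weight (both illustrative; the paper's schedule may differ):

```python
import numpy as np

def progressive_fusion(A_mp, A_sim, epoch, total_epochs, lam_max=0.5):
    """Incremental weighted addition of the meta-path graph A_mp and the
    similarity graph A_sim: the similarity graph's weight grows as training
    proceeds, so early (unreliable) similarity edges are kept out at first."""
    lam = lam_max * min(1.0, epoch / max(total_epochs, 1))
    return (1.0 - lam) * A_mp + lam * A_sim
```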
SUN Ze-Qun, CUI Yuan-Ning, HU Wei
2023, 34(10):4501-4517. DOI: 10.13328/j.cnki.jos.006887
Abstract: Knowledge graphs (KGs) store a great amount of structured knowledge and semantic information and have been widely used by many knowledge-powered intelligent applications. With the rapid development of these applications, their requirements for knowledge also change. A single KG usually suffers from incompleteness and is therefore unable to meet these requirements, which suggests an urgent demand for supporting new data sources and fusing multi-sourced knowledge. The conventional paradigm for KG representation learning and application considers only a single KG and ignores knowledge transfer between different sources. Joint representation learning on multi-sourced KGs can bring performance improvement but cannot support extended representation learning of new KGs. To resolve these issues, this study presents a new paradigm: lifelong representation learning on multi-sourced KGs. Given a sequence of multi-sourced KGs, lifelong representation learning aims at benefiting from the previously learned KGs and embedding models when learning a new KG. To this end, the study proposes a lifelong learning framework based on linked entity replay. First, it designs a Transformer-based KG embedding model that leverages relation correlations for link prediction between entities. Second, it proposes a linked subgraph generation method that leverages the entity alignment between different sources to build subgraphs and replays the linked entities to enable lifelong learning and knowledge transfer. Finally, it uses a dynamic model structure, with model parameters and embeddings stored for each KG, to avoid catastrophic forgetting. Experiments on benchmarks show that the proposed KG embedding model achieves state-of-the-art performance in link prediction, and the lifelong representation learning framework is effective and efficient in multi-sourced knowledge transfer compared with baselines.
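The linked entity replay idea can be pictured as a buffer that stores triples from previously learned KGs and, when a new KG arrives, replays the triples around aligned entities. A toy sketch under an assumed data layout and sampling rate (not the authors' implementation):

```python
import random
from collections import defaultdict

class LinkedEntityReplay:
    """Replay buffer: triples from earlier KGs are indexed by entity, and
    triples around entities aligned with the new KG are replayed during
    training so old knowledge transfers without catastrophic forgetting."""
    def __init__(self):
        self.adj = defaultdict(list)              # entity -> incident triples

    def store(self, triples):
        for h, r, t in triples:
            self.adj[h].append((h, r, t))
            self.adj[t].append((h, r, t))

    def replay(self, alignment, n_per_entity=8):
        """alignment: (old_entity, new_entity) pairs between the KGs."""
        batch = []
        for old_e, _ in alignment:
            nbrs = self.adj.get(old_e, [])
            batch += random.sample(nbrs, min(n_per_entity, len(nbrs)))
        return batch
```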
LIU Kang-Zheng, ZHAO Feng, JIN Hai
2023, 34(10):4518-4532. DOI: 10.13328/j.cnki.jos.006885
Abstract: Temporal knowledge graph (TKG) reasoning has attracted significant attention from researchers. Existing TKG reasoning methods have made great progress by modeling historical information. However, the time-variability problem and the unseen entity (relation) problem remain two major challenges hindering further progress in this field. Moreover, since the structural information and temporal dependencies of the historical subgraph sequence have to be modeled, traditional embedding-based methods often incur high time costs in training and prediction, which greatly limits the application of such reasoning models in real-world scenarios. To address these issues, this study proposes a frequency statistical network for TKG reasoning, namely FS-Net. On the one hand, FS-Net continuously generates time-varying scores for predictions at changing timestamps based on the latest short-term historical fact frequency statistics. On the other hand, based on the fact frequency statistics at the current timestamp, FS-Net supplements historically unseen entities (relations) for the predictions. Notably, FS-Net needs no training and has very high time efficiency. Experiments on two TKG benchmark datasets demonstrate that FS-Net achieves a great improvement over the baseline models.
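Because FS-Net is training-free, its core can be pictured as plain counting: score candidate objects for a query (s, r, ?) by how frequently each fact occurred in a recent window. A minimal sketch; the window size and normalization are illustrative assumptions:

```python
from collections import Counter

def frequency_scores(history, query, window, t_now, candidates):
    """history: iterable of (s, r, o, t) quadruples; query: (s, r).
    Returns a normalized frequency score for each candidate object,
    counting only facts inside the most recent `window` timestamps."""
    s, r = query
    counts = Counter(o for (hs, hr, o, ht) in history
                     if hs == s and hr == r and t_now - window <= ht < t_now)
    total = sum(counts.values()) or 1          # avoid division by zero
    return {o: counts[o] / total for o in candidates}
```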
CHEN Zi-Rui, WANG Xin, WANG Chen-Xu, ZHANG Shao-Wei, YAN Hao-Yu
2023, 34(10):4533-4547. DOI: 10.13328/j.cnki.jos.006888
Abstract: A knowledge hypergraph is a form of heterogeneous graph that represents the real world through n-ary relations. However, in both general and specific domains, existing knowledge hypergraphs are often incomplete. It is therefore a challenging task to infer missing links from the existing links in a knowledge hypergraph. Currently, most research employs knowledge representation learning methods based on n-ary relations for link prediction in knowledge hypergraphs. However, these methods learn embedding vectors of entities and relations only from hyperedges without temporal information, neglecting the impact of temporal factors on the dynamic evolution of facts, which results in poor predictive performance in dynamic environments. This study first defines the temporal knowledge hypergraph and, based on this definition, proposes a link prediction model for temporal knowledge hypergraphs. The model learns static and dynamic representations of entities from their roles, positions, and timestamps in temporal hyperedges and merges them in a certain proportion into final entity embedding vectors for the link prediction task, thereby fully exploiting the temporal information of hyperedges. It is also theoretically proved that the proposed model is fully expressive and has linear space complexity. In addition, a temporal knowledge hypergraph dataset, CB67, is constructed from the public business data of listed companies, and extensive experimental evaluations are conducted on this dataset. The experimental results show that the proposed model can effectively perform the link prediction task on the temporal knowledge hypergraph dataset.
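The merge of static and dynamic entity representations can be sketched in the spirit of diachronic embeddings, where part of the vector is activated by the timestamp; the sinusoidal parameterization and the merge ratio below are assumptions, not the paper's exact design:

```python
import numpy as np

def entity_embedding(e_static, a, w, b, t, alpha=0.3):
    """Time-aware entity vector: a static part plus a timestamp-activated
    dynamic part, merged in a fixed proportion alpha (illustrative value).
    a, w, b are learned per-entity parameter vectors; t is the timestamp."""
    e_dynamic = a * np.sin(w * t + b)          # features activated by time t
    return alpha * e_static + (1.0 - alpha) * e_dynamic
```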
FANG Yang, TAN Zhen, CHEN Zi-Yang, XIAO Wei-Dong, ZHANG Ling-Ling, TIAN Feng
2023, 34(10):4548-4564. DOI: 10.13328/j.cnki.jos.006886
Abstract: In recommendation systems, the cold-start issue is challenging due to the lack of interactions between new users or new items. The issue can be alleviated via data-level and model-level strategies. Traditional data-level methods employ side information, such as feature information, to enhance the learning of user and item embeddings. Recently, heterogeneous information networks (HINs) have been incorporated into recommendation systems, as they provide richer auxiliary information and meaningful semantics. However, these models cannot capture structural and semantic information comprehensively and neglect the unlabeled information of HINs during training. Model-level methods apply the meta-learning framework, which naturally fits the cold-start issue, as it learns prior knowledge from similar tasks and adapts to new tasks quickly with few labeled samples. Therefore, a contrastive meta-learning framework on HINs named CM-HIN is proposed, which addresses the cold-start issue at both the data level and the model level. Specifically, metapath and network schema views are explored to describe the higher-order and local structural information of HINs. Within the metapath and network schema views, contrastive learning is adopted to mine the unlabeled information of HINs, and the two views are incorporated. Extensive experiments on three benchmark datasets demonstrate that CM-HIN outperforms all state-of-the-art baselines in three cold-start scenarios.
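The contrastive step between the metapath view and the network schema view can be illustrated with a standard InfoNCE objective, where the two views of the same node form a positive pair; CM-HIN's actual loss and temperature are assumptions here:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Contrastive loss between two view embeddings of the same nodes:
    row i of z1 (metapath view) and row i of z2 (schema view) are positives,
    all other rows are negatives. tau is the temperature."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                          # pairwise cosine / tau
    sim = np.exp(sim - sim.max(axis=1, keepdims=True))  # stable softmax
    pos = np.diag(sim)                             # matched views
    return float(-np.log(pos / sim.sum(axis=1)).mean())
```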
BI Xin, NIE Hao-Jie, ZHAO Xiang-Guo, YUAN Ye, WANG Guo-Ren
2023, 34(10):4565-4583. DOI: 10.13328/j.cnki.jos.006889
Abstract: Knowledge graph based question answering (KGQA) analyzes natural language questions, performs reasoning over knowledge graphs, and ultimately returns accurate answers. It has been widely used in intelligent information services, such as modern search engines and personalized recommendation. Considering the high cost of manually labeling reasoning steps as supervision in relation-supervised learning methods, scholars have begun to explore weakly supervised learning methods, such as reinforcement learning, to design KGQA models. Nevertheless, for complex questions with constraints, existing reinforcement learning-based KGQA methods face two major challenges: (1) multi-hop long-path reasoning leads to sparse and delayed rewards; (2) existing methods cannot handle reasoning path branches carrying constraint information. To address these challenges in constrained question answering tasks, a reward shaping strategy with constraint information is designed to solve the sparse and delayed rewards, and a reinforcement learning based constrained path reasoning model named COPAR is proposed. COPAR consists of an action determination strategy based on an attention mechanism and an entity determination strategy based on constraint information. It is capable of selecting the correct relations and entities according to the constraint information of the question, reducing the search space of reasoning, and thereby solving the reasoning path branching problem. Moreover, an ambiguity constraint processing strategy is proposed to effectively solve the ambiguity problem of reasoning paths. The performance of COPAR is verified and compared on benchmark datasets of the KGQA task. The experimental results indicate that, compared with existing methods, the performance on multi-hop question datasets is relatively improved by 2%-7%, and the performance on constrained question datasets is higher than that of the rival models, with accuracy improved by at least 7.8%.
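The reward shaping idea can be illustrated with the standard potential-based form, which densifies a sparse terminal reward without changing the optimal policy; how COPAR defines the potential from constraint information follows the paper, so the sketch below is generic:

```python
def shaped_reward(r, phi_s, phi_s_next, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma*phi(s') - phi(s).
    Here phi could score how well a partial reasoning path matches the
    question's constraint information (an illustrative assumption), giving
    the agent feedback at every hop instead of only at the final answer."""
    return r + gamma * phi_s_next - phi_s
```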
QIAO Shao-Jie, YANG Guo-Ping, YU Yong, HAN Nan, QIN Xiao, QU Lu-Lu, RAN Li-Qiong, LI He
2023, 34(10):4584-4600. DOI: 10.13328/j.cnki.jos.006882
Abstract: Question-answering systems based on knowledge graphs can analyze user questions and have become an effective way to retrieve relevant knowledge and automatically answer given questions. Such systems usually use a neural program induction model to convert a natural language question into a logical form, and the answer is obtained by executing the logical form on the knowledge graph. However, knowledge question answering using pre-trained language models and knowledge graphs involves two challenges: (1) given the QA (question-answering) context, relevant knowledge needs to be identified from a large KG (knowledge graph); (2) joint reasoning over the QA context and the KG needs to be performed. Based on these challenges, a language model-driven knowledge graph question-answering model named QA-KGNet is proposed, which connects the QA context and the KG to form a joint graph, uses a language model to calculate the relevance between QA context nodes and KG nodes, and employs a multi-head graph attention network to update the node representations. Extensive experiments on the CommonsenseQA, OpenBookQA, and MedQA-USMLE real-world datasets are conducted to evaluate the performance of QA-KGNet, and the experimental results show that QA-KGNet outperforms existing benchmark models and exhibits excellent structured reasoning capability.
LI Ge, PENG Xin, WANG Qian-Xiang, XIE Tao, JIN Zhi, WANG Ji, MA Xiao-Xing, LI Xuan-Dong
2023, 34(10):4601-4606. DOI: 10.13328/j.cnki.jos.007008
Abstract: Generative pretrained Transformer-based large language models (LLMs) are setting off a wave in the field of artificial intelligence and continue to extend their influence into more fields. LLMs such as ChatGPT have demonstrated the ability and potential to assist people, through natural language-based interaction, in many software engineering tasks, and they are developing into natural language-based human-machine collaborative tools for software development and evolution. From the perspective of human-machine collaborative software development and evolution, LLMs as software tools present two major features. The first is natural language-based human-machine interaction, which greatly expands the human-machine collaboration workspace and improves the efficiency and flexibility of human-machine collaboration. The second is the generation of predictive content for a given software development and evolution task, based on accumulated knowledge of software development and evolution, which can provide a certain degree of support and assistance for the task. However, since LLMs are essentially mathematical models built on probabilistic and statistical principles and training data, and are thus inexplicable and uncertain, the content they generate is predictive and carries no judgment of trustworthiness. In contrast, the tasks that humans need to perform in software development and evolution are typically decision-making tasks with trustworthiness guarantees. Hence LLMs, as software tools, not only assist people in human-machine collaborative software development and evolution but also bring many challenges. This study analyzes and clarifies these challenges, such as how to construct LLMs that are more helpful for software development and evolution, how to guide LLMs to generate predictive content that is more helpful for software development and evolution, and how to develop and evolve high-quality software systems based on the predictive content generated by LLMs.
SHI Jing, ZHANG Ao, BAI Xiao-Ying, CAI Hua-Qian, LIU Xuan-Zhe
2023, 34(10):4607-4635. DOI: 10.13328/j.cnki.jos.006677
Abstract: Distributed ledger (DL), as a distributed data management architecture, maintains data records (the ledgers) across distributed nodes based on consensus mechanisms and protocols. It can comprehensively record all information of data ownership, transmission, and trading chains in distributed ledgers. Additionally, data can be neither tampered with nor denied throughout the life cycle of data production and transactions, providing an endorsement for data rights confirmation, protection, and auditing. Blockchain is a typical implementation of DL systems. With emerging digital economy applications such as digital currency and data asset trading, DL technologies are receiving increasingly widespread attention. However, system performance is one of the key technical bottlenecks for the large-scale application of DL systems, and ledger performance optimization has become a focus of academia and industry. This study investigates the methods, technologies, and typical solutions of DL performance optimization from the four perspectives of system architecture, ledger data structure, consensus mechanism, and message communication.
LI Shuo, LIU Jie, WANG Shuai, TIAN Hao-Xiang, YE Dan
2023, 34(10):4636-4660. DOI: 10.13328/j.cnki.jos.006666
Abstract: During software development, developers use third-party libraries extensively to achieve code reuse. Due to the dependencies among third-party libraries, incompatibilities among them lead to errors during the installation, loading, or invocation of those libraries and ultimately result in system anomalies. Such a problem is called a dependency conflict (DC, also referred to as conflict dependency or CD) issue of third-party libraries. The root cause of such issues is that the loaded third-party libraries fail to cover the required features (e.g., methods) referenced by the software. DC issues often occur during the download and installation of third-party libraries, project compilation, and running, and they are difficult to locate. Fixing DC issues requires developers to know accurately the differences among the versions of the third-party libraries they use, and the complex dependencies among third-party libraries increase the difficulty of this work. To identify DC issues in software before running and to deal with the system anomalies caused by such issues at runtime, researchers around the world have conducted various studies. This study presents a systematic review of this research topic from four aspects: the empirical analysis of third-party library usage, the cause analysis of DC issues, the detection methods for such issues, and the common fixing approaches. Finally, potential research opportunities in this field are discussed.
SHEN Si-Jie, CHEN Rong, CHEN Hai-Bo, ZANG Bin-Yu
2023, 34(10):4661-4680. DOI: 10.13328/j.cnki.jos.006665
Abstract: As the scale of business data increases, distributed online analytical processing (OLAP) is widely performed in business intelligence (BI), enterprise resource planning (ERP), user behavior analysis, and other application scenarios to support large-scale data analysis. Distributed OLAP overcomes the limitations of single-machine storage and stores data in memory to improve performance. However, once in-memory distributed OLAP eliminates disk input/output (I/O), the join operation becomes one of its new performance bottlenecks. As a common operation in OLAP, the join involves a huge amount of data access and computation. By analyzing existing join methods, this study presents a graph structure indexing method that can accelerate the join operation and, based on it, a new join method called LinkJoin. Graph structure indexing stores the in-memory positions of data in the form of a graph structure according to the join relationships specified by users. The join method based on graph structure indexing reduces the amount of data access and computation with a low complexity equivalent to that of a hash join. This study extends the state-of-the-art open-source in-memory OLAP system MonetDB from a single-machine system to a distributed one and designs and implements a join method based on graph structure indexing on it. A series of designs and optimizations are conducted on the graph indexing structure, columnar storage, and distributed execution engine to improve the distributed OLAP performance of the system. The test results show that in the TPC-H benchmark, the join operation based on graph structure indexing improves the performance of queries with join operations by 1.64 times on average and 4.1 times at most. For the join part of these queries, it enhances the performance by 9.8-22.1 times.
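The gist of graph structure indexing can be pictured as precomputing, per join key, the in-memory positions of matching rows, so that a query-time join follows pointers instead of building and probing a hash table. A toy sketch with an illustrative schema (not MonetDB's or LinkJoin's actual implementation):

```python
from collections import defaultdict

def build_graph_index(orders):
    """Precompute, for each join key, the in-memory positions of matching
    order rows: an edge from the key to each row position."""
    index = defaultdict(list)
    for pos, row in enumerate(orders):
        index[row["custkey"]].append(pos)
    return index

def link_join(customers, orders, index):
    """At query time, follow the precomputed position edges instead of
    building a hash table for every query."""
    for c in customers:
        for pos in index.get(c["custkey"], []):
            yield c, orders[pos]

# Toy usage with an illustrative TPC-H-like schema.
customers = [{"custkey": 1, "name": "alice"}, {"custkey": 2, "name": "bob"}]
orders = [{"custkey": 1, "total": 10}, {"custkey": 1, "total": 7},
          {"custkey": 2, "total": 3}]
index = build_graph_index(orders)          # built once, reused by later queries
results = list(link_join(customers, orders, index))
```

The index is built once and amortized over repeated queries, which is where the advantage over a per-query hash build comes from.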
CHEN Sheng, FANG Ming-Zhe, JIANG Bu-Yun, LI Chun-Xiao, ZUO Chun, LI Yu-Cheng, LIANG Geng
2023, 34(10):4681-4704. DOI: 10.13328/j.cnki.jos.006664
Abstract: Smart contracts running on a blockchain can hardly be modified after deployment, and their invocation and execution rely on a consensus procedure. Consequently, existing debugging methods that require modifying the smart contract code or interrupting its execution cannot be directly applied to smart contracts. Since the running of a smart contract consists of the ordered execution of blockchain transactions, tracing the execution of transactions is an effective approach to making smart contracts more debuggable. The major goal of tracing blockchain transaction execution is to unveil how a blockchain transaction produces its result during execution. The execution of a blockchain transaction relies on the internal state of the blockchain, and this state is determined by the execution results of previous transactions, which leads to transitive dependencies. Such dependencies, together with the characteristics of the execution environment the blockchain provides, pose challenges to tracing. Tracing blockchain transaction execution mainly faces three challenges: how to obtain enough information for tracing from the production environment in which the smart contract is deployed, how to obtain the dependencies among blockchain transactions, and how to ensure consistency between the tracing result and the real online execution. This study proposes a tracing method for blockchain transaction execution based on recording and replay. By building a recording and replay mechanism into the contract container, the proposed method records state reading and writing operations during transaction execution without modifying the contract code or interrupting the running of the smart contract. A transaction dependency analysis method based on state reading and writing is proposed to support the on-demand retracing of previous transactions linked by dependencies. Moreover, a verification mechanism for the recorded reading and writing operations is designed so that the consistency between the replayed execution and the real online execution can be verified. The proposed method can trace the execution of blockchain transactions that call smart contracts and can be used in debugging smart contracts; when a loss is caused by the failure of a smart contract, the tracing result can serve as evidence. Experiments compare the performance of storing the recorded reading and writing operations on chain and off chain. The advantages and effectiveness of the proposed method in tracing blockchain transaction execution are demonstrated by a case study.
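The dependency analysis based on state reading and writing can be sketched as read/write-set tracking: a transaction depends on the latest earlier transaction that wrote any state key it reads. A minimal sketch with an assumed record format (the paper's recorded operations are richer than this):

```python
def find_dependencies(transactions):
    """transactions: list of (tx_id, reads, writes) in execution order,
    where reads/writes are sets of state keys. Returns, for each tx,
    the set of earlier transactions it directly depends on."""
    last_writer = {}                      # state key -> last tx that wrote it
    deps = {}
    for tx_id, reads, writes in transactions:
        deps[tx_id] = {last_writer[k] for k in reads if k in last_writer}
        for k in writes:
            last_writer[k] = tx_id
    return deps

# deps of a failing transaction give the transactions to retrace first;
# following them transitively reconstructs the chain of state it relied on.
```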
WANG Min, PAN Xing-Lu, ZOU Yan-Zhen, XIE Bing
2023, 34(10):4705-4723. DOI: 10.13328/j.cnki.jos.006674
Abstract: Code review is an important mechanism in the distributed development of modern software. In code review, providing the context information of the current changes can help code reviewers quickly understand the evolution of the source code, thereby enhancing the efficiency and quality of code review. Existing studies have provided commit history tracking methods and corresponding tools, but these methods cannot further extract auxiliary information relevant to code review from historical data. Therefore, this study proposes a novel code change tracking approach for code review named C2Tracker. Given a fine-grained code change at the method (function) level, C2Tracker automatically tracks the historical commits related to the code change; furthermore, frequently co-occurring changed code elements and relevant code changes are mined to help reviewers understand the current code change and make decisions. Experimental verification is conducted on ten well-known open-source projects. The results show that the accuracy of C2Tracker in tracking historical commits, mining frequently co-occurring code elements, and tracking related code change fragments is 97%, 95%, and 97%, respectively. Compared with existing review methods, C2Tracker greatly improves code review efficiency and quality in specific cases, and reviewers acknowledge that it can play a significant role in improving the efficiency and quality of most review cases.
BIAN Pan, LIANG Bin, HUANG Jian-Jun, YOU Wei, SHI Wen-Chang, ZHANG Jian
2023, 34(10):4724-4742. DOI: 10.13328/j.cnki.jos.006676
Abstract: Reference counts are widely employed in large-scale low-level systems, including the Linux kernel, to manage shared resources, and they should be consistent with the number of objects referring to the resources. Otherwise, improper updates of reference counts may occur, and resources may never be released or may be released prematurely. To detect improper updates of reference counts, available static detection methods need to know which functions increase reference counts and which decrease them. However, manually collecting such prior knowledge of reference counts is too time-consuming and may be incomplete. Although mining-based methods can reduce the dependency on prior knowledge, it is difficult for them to effectively detect path-sensitive bugs involving improper updates of reference counts. To this end, this study proposes RTDMiner, a method that deeply integrates data mining into static analysis to detect improper updates of reference counts. First, according to the general principles of reference counting, data mining is leveraged to identify functions that increase or decrease reference counts. Then, a path-sensitive static analysis method is employed to detect defective paths that increase reference counts without decreasing them. To reduce false positives, the study adopts data mining to identify exceptional patterns during detection. Experimental results on the Linux kernel demonstrate that the proposed method can automatically identify functions that increase or decrease reference counts with a precision of nearly 90%. Moreover, 24 of the top 50 suspicious bugs detected by RTDMiner have been confirmed as real bugs by kernel maintainers.
GAO Wei-Feng, LIU Ling-Ling, WANG Zhen-Kun, GONG Mao-Guo
2023, 34(10):4743-4771. DOI: 10.13328/j.cnki.jos.006672
Abstract: The basic idea of the multiobjective evolutionary algorithm based on decomposition (MOEA/D) is to transform a multiobjective optimization problem into a set of subproblems (single-objective or multiobjective) and optimize them jointly. Since MOEA/D was proposed in 2007, it has attracted extensive attention from scholars in China and abroad and become one of the most representative multiobjective evolutionary algorithms. This study summarizes the research progress on MOEA/D in the past thirteen years, including algorithmic improvements of MOEA/D, research on MOEA/D for many-objective optimization and constrained optimization, and applications of MOEA/D to practical problems. Then, several representative improved MOEA/D algorithms are compared through experiments. Finally, the study presents several potential research topics on MOEA/D for the future.
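The decomposition idea is commonly instantiated with the Tchebycheff scalarization: each weight vector defines one single-objective subproblem, and neighboring subproblems are optimized cooperatively. A minimal sketch of the scalarization itself (a standard formulation, not any particular MOEA/D variant):

```python
import numpy as np

def tchebycheff(fx, lam, z_star):
    """Tchebycheff subproblem: g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|,
    where fx are the objective values of x and z* is the ideal point.
    Minimizing g pushes x toward the part of the Pareto front that lam aims at."""
    return float(np.max(lam * np.abs(fx - z_star)))

# A set of weight vectors turns one bi-objective problem into 11 scalar
# subproblems; each is optimized with help from its neighboring subproblems.
weights = np.array([[i / 10, 1 - i / 10] for i in range(11)])
```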
ZHANG Li-Hua, LIU Quan, HUANG Zhi-Gang, ZHU Fei
2023, 34(10):4772-4803. DOI: 10.13328/j.cnki.jos.006671
Abstract: Inverse reinforcement learning (IRL), also known as inverse optimal control (IOC), is an important research branch of reinforcement learning and imitation learning. IRL solves for a reward function from expert samples, from which the optimal policy is then solved to imitate the expert policy. In recent years, IRL has yielded fruitful achievements in imitation learning, with widespread applications in vehicle navigation, path recommendation, and robotic optimal control. First, this study presents the theoretical basis of IRL. Then, from the perspective of reward function construction, IRL algorithms based on linear and non-linear reward functions are analyzed, including maximum margin IRL, maximum entropy IRL, maximum entropy deep IRL, and generative adversarial imitation learning. In addition, frontier research directions of IRL are reviewed to compare and analyze representative algorithms, including IRL with incomplete expert demonstrations, multi-agent IRL, IRL with sub-optimal expert demonstrations, and guiding IRL. Finally, the primary challenges of IRL and future developments of its theory and applications are summarized.
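For the linear-reward case, the maximum entropy IRL objective surveyed above has a compact form: trajectories are weighted exponentially by their reward, and the gradient matches expert feature expectations against the model's. A sketch of the standard formulation (with expert demonstrations D and trajectory feature counts mu):

```latex
% Linear reward R_theta(tau) = theta^T mu(tau), where mu(tau) sums the
% feature vectors of the states visited along trajectory tau.
\mathcal{L}(\theta) = \frac{1}{|\mathcal{D}|}\sum_{\tau \in \mathcal{D}} \log P(\tau \mid \theta),
\qquad
P(\tau \mid \theta) = \frac{\exp\bigl(\theta^{\top}\mu(\tau)\bigr)}{Z(\theta)},
\qquad
\nabla_{\theta}\mathcal{L}
  = \frac{1}{|\mathcal{D}|}\sum_{\tau \in \mathcal{D}} \mu(\tau)
  \; - \; \mathbb{E}_{\tau \sim P(\cdot \mid \theta)}\bigl[\mu(\tau)\bigr]
```

At the optimum, the learner's expected feature counts equal the expert's, which is exactly the feature-expectation matching principle the maximum entropy formulation makes well-posed.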
DU Si, QI Zhi-Wei, YUE Kun, DUAN Liang, WANG Jia-Hui
2023, 34(10):4804-4820. DOI: 10.13328/j.cnki.jos.006670
Abstract: Bayesian networks (BNs), as a fundamental framework for representing and inferring uncertain knowledge, are widely used in social networks, knowledge graphs, medical diagnosis, etc. The central computing task of BN-based analysis, diagnosis, and decision support in specific fields comprises multiple probabilistic inferences. However, traditional inference methods incur high time complexity on the same BN, since the intermediate results of probability calculations cannot be shared and reused among different inferences. Therefore, to improve the overall efficiency of multiple inferences on the same BN, this study proposes a BN embedding method and corresponding probabilistic inference methods. First, incorporating the idea of graph embedding, the study proposes a BN embedding method based on an autoencoder and an attention mechanism that transforms the BN into a pointwise mutual information matrix, preserving the directed acyclic graph and the conditional probability parameters simultaneously. Specifically, each coding layer of the autoencoder generates node embeddings by using the correlation between a node and its neighbors (parent and child nodes) to preserve the probabilistic dependencies. Then, a method for probabilistic inference that measures the joint probability by the distance between embedding vectors is proposed. Experimental results show that the proposed method outperforms other state-of-the-art methods in efficiency while achieving accurate results of probabilistic inference.
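The first step, turning the BN into a pointwise mutual information matrix, can be sketched directly; here the joint and marginal probabilities are given as toy inputs, whereas the paper derives them from the BN's structure and conditional probability tables:

```python
import numpy as np

def pmi_matrix(P_joint, P_marginal):
    """Pointwise mutual information between BN nodes:
    PMI(i, j) = log p(i, j) / (p(i) p(j)), clipped at 0 (positive PMI),
    so strongly dependent node pairs get large entries."""
    eps = 1e-12
    pmi = np.log((P_joint + eps) / (np.outer(P_marginal, P_marginal) + eps))
    return np.maximum(pmi, 0.0)
```

Each row of this matrix is then fed to the autoencoder, whose bottleneck activations serve as the node embedding used for distance-based inference.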
MA Heng-Zhao, YAN Yue, LI Jian-Zhong
2023, 34(10):4821-4829. DOI: 10.13328/j.cnki.jos.006649
Abstract: A published study investigates the problem of solving ε-NN via Turing reduction: given a query point q, a point set P, and an approximation factor ε, return an approximate nearest neighbor of q in P with an approximation ratio of at most 1+ε. A Turing reduction algorithm with O(log n) query time complexity is proposed there, where the query time is the number of times the oracle is invoked, and this O(log n) query time is the lowest among all existing algorithms. However, the disadvantage of that algorithm is a factor of O((d/ε)^d) in the preprocessing time complexity and space complexity; when the number of dimensions d is high or the approximation factor ε is small, this factor becomes unacceptable. Therefore, this study revises the reduction algorithm and analyzes the expected time and space complexity of the algorithm when the input point set follows a Poisson point process. As a result, the expected preprocessing time complexity is reduced to O(n log n) and the expected space complexity to O(n log n), while the expected query time complexity remains O(log n). In this sense, the future work raised in the published study is completed.
WANG Jia-Long, YANG Jie, ZHOU Li-Hua, WANG Li-Zhen, WANG Rui-Kang
2023, 34(10):4830-4850. DOI: 10.13328/j.cnki.jos.006654
Abstract: Community is an important attribute of information networks. Community search, an important task in information network analysis, aims to find a set of nodes that satisfies conditions specified by the user. As heterogeneous information networks contain more comprehensive and richer structural and semantic information, community search in such networks has received extensive attention in recent years. However, existing community search methods for heterogeneous information networks cannot be directly applied when the search conditions are complex. For this reason, this study defines community search under complex conditions and proposes search algorithms that handle asymmetric meta-paths, constrained meta-paths, and prohibited-node constraints. The three algorithms search communities using a meta-path completion strategy, a label-based batch search adjustment strategy, and a division of complex search conditions, respectively. Moreover, two optimization algorithms, based on a pruning strategy and an approximation strategy respectively, are designed to improve the efficiency of the search algorithm with prohibited-node constraints. Extensive experiments are performed on real datasets, and the results verify the effectiveness and efficiency of the proposed algorithms.
LI Shao-Ying, MENG Dan, KONG Chao, ZHANG Li-Ping, XU Chen
2023, 34(10):4851-4869. DOI: 10.13328/j.cnki.jos.006662
Abstract: Recent studies on social recommendation have focused on the joint modeling of explicit and implicit relations in social networks while overlooking the phenomenon that high-order implicit relations are not equally important to each user: their importance to users with plenty of neighbors differs greatly from their importance to users with few neighbors. In addition, due to the randomness of social relation construction, explicit relations are not always available. This study proposes a novel adaptive high-order implicit relations modeling (AHIRM) method, which consists of three components. First, unreliable relations are filtered and potentially reliable relations are identified, which mitigates the adverse effects of unreliable relations and alleviates the data sparsity issue. Then, an adaptive random walk algorithm is designed that captures neighbors at different orders for each user according to normalized node centrality, constructs high-order implicit relations among users, and thereby reconstructs the social network. Finally, a graph convolutional network (GCN) is employed to aggregate information from neighbor nodes, updating user embeddings to model the high-order implicit relations and further alleviate the data sparsity issue. The influences of social structure and personal preference are both considered during modeling, and the process of social influence propagation is simulated and preserved. Comparative experiments with existing algorithms on the LastFM, Douban, and Gowalla datasets verify the effectiveness and rationality of the proposed AHIRM model.
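The adaptive random walk can be sketched as a walk whose depth depends on normalized node centrality, so users with few neighbors (low centrality) reach further for higher-order implicit relations; the inverse-centrality schedule below is an illustrative assumption, not AHIRM's exact rule:

```python
import random

def adaptive_walk(graph, start, centrality, max_hops=5):
    """graph: dict node -> list of neighbors; centrality: dict node -> [0, 1].
    Low-centrality users walk deeper to collect higher-order neighbors;
    high-centrality users stay shallow since direct relations suffice."""
    hops = max(1, round(max_hops * (1.0 - centrality[start])))
    node, visited = start, [start]
    for _ in range(hops):
        nbrs = graph.get(node, [])
        if not nbrs:
            break
        node = random.choice(nbrs)
        visited.append(node)
    return visited
```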
MA Zi-Bo, MI Yue, ZHANG Bo, ZHANG Zheng, WU Jing-Yun, HUANG Hai-Wen, WANG Wen-Dong
2023, 34(10):4870-4915. DOI: 10.13328/j.cnki.jos.006680
Abstract: In recent years, deep learning technology has made remarkable progress in many computer vision tasks, and more and more researchers have tried to apply it to medical image processing, such as the segmentation of anatomical structures in high-throughput medical images (CT, MRI), which can improve the efficiency of image reading for doctors. Deep learning models for medical image processing need a large amount of labeled data for training, and data from a single medical institution cannot meet this requirement. Moreover, due to differences in medical equipment and acquisition protocols, data from different medical institutions are largely heterogeneous, which makes it difficult for a model trained on data from other medical institutions to obtain reliable results on the data of a given institution. In addition, the distribution of medical data across patients' disease stages is uneven, which may also reduce the reliability of the model. Technologies including domain adaptation and multi-site learning have emerged to reduce the impact of data heterogeneity and improve the generalization ability of models. As a research hotspot in transfer learning, domain adaptation is intended to transfer knowledge learned from the source domain to data of an unlabeled target domain; multi-site learning and federated learning with non-independent and identically distributed data aim to improve the robustness of a model by learning a common representation over multiple datasets. This study investigates, analyzes, and summarizes recent work on domain adaptation, multi-site learning, and federated learning with non-independent and identically distributed datasets, providing references for related research.
FENG Xun, YANG Jian, ZHOU Tao, GONG Chen
2023, 34(10):4916-4929. DOI: 10.13328/j.cnki.jos.006675
Abstract: Weakly supervised object localization aims to train object locators with only image-level labels instead of precise location annotations. Some existing methods can only identify the most discriminative region of the target object and fail to cover the complete object, or they are easily misled by irrelevant background information, leading to inaccurate object locations. Therefore, this study proposes a weakly supervised object localization algorithm based on an attention mechanism and a categorical hierarchy. The proposed method extracts a more complete object region by performing mean segmentation on the attention map of the convolutional neural network. In addition, a category hierarchy network is utilized to weaken the attention caused by background regions, which achieves more accurate object localization results. Extensive experimental results on multiple public datasets show that the proposed method yields better localization results than other weakly supervised object localization methods under various evaluation metrics.
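The mean segmentation step is simple enough to sketch: threshold the attention map at its own mean and take the bounding box of the surviving region. The category-hierarchy branch that suppresses background attention is omitted here, so this is only the localization half of the method:

```python
import numpy as np

def localize(cam):
    """Binarize a class activation / attention map at its mean value and
    return the bounding box (x1, y1, x2, y2) of all above-mean pixels.
    A full implementation would keep only the largest connected component."""
    mask = cam >= cam.mean()           # mean-threshold segmentation
    ys, xs = np.where(mask)            # never empty: the max is >= the mean
    return xs.min(), ys.min(), xs.max(), ys.max()
```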
LUO Chao-Ran, JIN Xin, ZHANG Ying, CAI Hua-Qian, LIU Yi, JING Xiang, HUANG Gang
2023, 34(10):4930-4940. DOI: 10.13328/j.cnki.jos.006663
Abstract: Distributed hash tables (DHTs) are widely used in distributed storage because of their efficient data addressing. Nevertheless, traditional DHT-based storage has to place data on specified nodes to achieve efficient data addressing, which severely restricts the application scope of DHT technology. Take heterogeneous storage networks as an example: the storage space, bandwidth, and stability of nodes vary greatly, and choosing appropriate storage nodes according to data features and the performance differences among nodes can greatly improve data access efficiency. However, the tight coupling between data and storage location makes traditional DHT-based storage unsuitable for heterogeneous storage networks. Therefore, this study proposes the vRoute algorithm to decouple data identifiers from storage locations in DHT-based storage. By building a distributed data index based on Bloom filters, vRoute allows data to be stored on any node of the network without reducing the efficiency of data addressing. vRoute is implemented by extending the Kademlia algorithm, and its validity is verified theoretically. Finally, simulation experiments show that vRoute achieves a data addressing efficiency close to that of traditional DHT algorithms with low storage and network overhead.
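The distributed data index rests on Bloom filters, which let a node answer "this key is possibly reachable via me" in constant space without storing keys verbatim, at the cost of occasional false positives. A minimal sketch with illustrative sizes (not vRoute's exact layout):

```python
import hashlib

class BloomIndex:
    """Minimal Bloom filter of the kind such an index could aggregate along
    DHT routes. m bits, k hash functions; both values are illustrative."""
    def __init__(self, m=8192, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, key):
        for i in range(self.k):                    # k independent hash slots
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, key):
        """False means definitely absent; True means possibly present."""
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))
```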