• Volume 34, Issue 9, 2023 Table of Contents
    • Special Issue's Articles
    • Quality Attributes and Practices of Trustworthy Artificial Intelligence Systems: A Tertiary Study

      2023, 34(9):3941-3965. DOI: 10.13328/j.cnki.jos.006875


      Abstract:Artificial intelligence systems are widely used to solve various challenges in the real world in an unprecedented way, and they have become a core driving force for the development of human society. With the rapid adoption of artificial intelligence systems in all walks of life, their trustworthiness has become an increasing concern, mainly because the notion of trustworthiness for traditional software systems is not sufficient to fully describe that of artificial intelligence systems. Research on the trustworthiness of artificial intelligence systems is therefore urgently needed. A large number of relevant studies already exist, each focusing on different aspects, but they lack a holistic and systematic synthesis. This study is a tertiary study that takes existing secondary studies as its research object. It aims to reveal the state of research on quality attributes and practices related to the trustworthiness of artificial intelligence systems and to establish a more comprehensive quality attribute framework for trustworthy artificial intelligence systems. The study collects, organizes, and analyzes 34 secondary studies published up to March 2022 and identifies 21 quality attributes related to trustworthiness, as well as measurement methods and assurance practices for trustworthiness. The study finds that existing research mainly focuses on security and privacy, while extensive and in-depth research on other quality attributes remains scarce. Furthermore, two research directions requiring interdisciplinary collaboration deserve more attention in future research. On the one hand, an artificial intelligence system is essentially a software system, and its trustworthiness as a software system is worthy of collaborative research by artificial intelligence and software engineering experts. On the other hand, artificial intelligence is, in essence, humanity's exploration of machine anthropomorphism, and how to ensure the trustworthiness of machines in the social environment at the system level, such as how to satisfy humanistic requirements, is worthy of collaborative research by artificial intelligence and social science experts.

    • Just-in-time Defect Prediction for Intelligent Computing Frameworks

      2023, 34(9):3966-3980. DOI: 10.13328/j.cnki.jos.006874


      Abstract:In recent years, intelligent computing frameworks have been widely applied as implementation tools in artificial intelligence (AI) engineering, and the reliability of these frameworks is key to the effectiveness of AI implementation. However, assuring the reliability of intelligent computing frameworks is challenging. On the one hand, framework code iterates quickly, which makes code testing difficult. On the other hand, unlike traditional software, intelligent computing frameworks involve a large number of tensor calculations, and their code specifications lack the guidance of software engineering theory. To this end, existing research mostly employs fuzz testing to localize defects. However, such methods can only accurately discover specific fault types, and they hardly guide developers toward a focus on software quality during development. Therefore, this study takes popular intelligent computing frameworks (TensorFlow, Baidu PaddlePaddle, etc.) as the research object, selects multiple change features to build datasets, and performs just-in-time defect prediction on intelligent computing frameworks at the code submission level. Additionally, LDA is employed to mine code and code submission messages for new features, and a random forest is then adopted for prediction. Results show that the average AUC-ROC is 0.77 and that semantic information can slightly improve prediction performance. Finally, this study leverages SHAP, an explainable machine learning method, to analyze the influence of each feature on the model's predictions. The findings are as follows. (1) The influence of basic features on the model conforms to traditional software development laws. (2) Code and semantic features in submission messages are important to the model's predictions. (3) The contribution of different features to the prediction model's output varies considerably across systems.
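
      As a rough illustration of the pipeline described above, the sketch below trains a random forest on change-level features and attributes its predictions with SHAP. The file name commits.csv, the column names, and the feature set are hypothetical placeholders, not the paper's dataset.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import shap

# Hypothetical change-level data: one row per code submission, with change
# features (lines added/deleted, files touched, ...) plus LDA topic features.
df = pd.read_csv("commits.csv")
X = df.drop(columns=["is_defective"])
y = df["is_defective"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# SHAP attributes each prediction to individual features, as in the
# paper's feature-influence analysis.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
```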

    • Scenario Description Language of Autonomous Driving Embedded with Road Network Graph Model

      2023, 34(9):3981-4002. DOI: 10.13328/j.cnki.jos.006877


      Abstract:The development of deep learning technology has driven rapid progress in autonomous driving. While the accuracy of perception models based on deep learning has gradually improved, hidden dangers related to robustness and reliability still exist, so thorough testing under diverse scenes is required to ensure acceptable security levels. Scene-based simulation testing is therefore crucial in autonomous driving technology, and one key challenge is to describe and generate diversified simulation testing scenes. Scenario description languages can describe autonomous driving scenes and instantiate them in virtual environments to obtain simulation data. However, most existing scenario description languages cannot provide a high-level abstraction and description of the road structure of a scene. This study presents a road network property graph to represent the abstracted entities and their relations within a road network. It also introduces SceneRoad, a language specifically designed to provide concise and expressive descriptions of the road structure in a scene. SceneRoad can build a road network feature query graph from the described road structure features of a scene. The problem of searching for road structures in the road network is thus abstracted as a subgraph matching problem on the property graph, which can be solved by the VF2 algorithm. Additionally, SceneRoad is incorporated as an extension into the Scenic scenario description language. With this extended language, a diverse set of static scenes is generated to build a simulation dataset, and statistical analysis of the dataset confirms the wide variety of the generated scenes. The results of training and testing various perception models on both real and simulated datasets show that a model's performance on the two datasets is positively correlated, which indicates that its evaluation on the simulated dataset aligns with its performance in real scenes. This is significant for perception model evaluation and for research on improving model robustness and safety.
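
      The abstract reduces road-structure search to subgraph matching solved by VF2. The sketch below illustrates that reduction with networkx, whose graph matchers implement VF2; the property names (kind, rel) and the tiny graphs are invented for illustration.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical road-network property graph: nodes are roads and junctions.
road_net = nx.DiGraph()
road_net.add_node("r1", kind="road", lanes=2)
road_net.add_node("j1", kind="junction", signal=True)
road_net.add_node("r2", kind="road", lanes=3)
road_net.add_edge("r1", "j1", rel="enters")
road_net.add_edge("j1", "r2", rel="exits")

# Query graph describing the desired scene structure: a road entering a junction.
query = nx.DiGraph()
query.add_node("a", kind="road")
query.add_node("b", kind="junction")
query.add_edge("a", "b", rel="enters")

nm = isomorphism.categorical_node_match("kind", None)
em = isomorphism.categorical_edge_match("rel", None)
gm = isomorphism.DiGraphMatcher(road_net, query, node_match=nm, edge_match=em)
for mapping in gm.subgraph_isomorphisms_iter():   # VF2 under the hood
    print(mapping)   # candidate road structures matching the query
```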

    • Adversarial Example Generation Method Based on Sparse Perturbation

      2023, 34(9):4003-4017. DOI: 10.13328/j.cnki.jos.006878


      Abstract:In recent years, deep neural networks (DNNs) have made great progress in the image field. However, studies show that DNNs are susceptible to the interference of adversarial examples and exhibit poor robustness. By generating adversarial examples to attack a DNN, its robustness can be evaluated, and corresponding defense methods can then be adopted to improve it. Existing adversarial example generation methods still have defects, such as insufficient sparsity of the generated perturbations and excessive perturbation magnitude. This study proposes sparse perturbation based adversarial example generation (SparseAG), a method that generates relatively sparse and small-magnitude perturbations for image examples. Specifically, SparseAG first selects perturbation points iteratively based on the gradient of the loss function with respect to the input image to generate an initial adversarial example. In each iteration, the candidate set of new perturbation points is determined in descending order of gradient magnitude, and the perturbation that decreases the loss function the most is added to the image. Secondly, a perturbation optimization strategy refines the initial perturbation scheme to improve the sparsity and authenticity of the adversarial example: perturbations are adjusted according to the importance of each perturbed point so as to jump out of local optima, and redundant perturbation points and redundant perturbation magnitude are further reduced. The method is evaluated on the CIFAR-10 and ImageNet datasets in both targeted and non-targeted attack scenarios. The experimental results show that SparseAG achieves a 100% attack success rate across different datasets and attack scenarios, and both the sparsity and the overall magnitude of the generated perturbations are better than those of the comparison methods.
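
      A minimal PyTorch sketch of one gradient-guided perturbation-point selection step in the spirit of SparseAG (not the authors' implementation): rank not-yet-perturbed pixels by gradient magnitude and keep the candidate that lowers the attack loss most.

```python
import torch
import torch.nn.functional as F

def sparse_step(model, x, target, pert, k=10, eps=0.1):
    """One simplified iteration: rank unperturbed pixels by gradient
    magnitude, then keep the single candidate perturbation that lowers
    the (targeted) attack loss the most."""
    x_adv = (x + pert).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)   # loss toward `target`
    loss.backward()
    grad = x_adv.grad.flatten()
    mag = grad.abs().clone()
    mag[pert.flatten() != 0] = -1.0                # consider new positions only
    candidates = torch.topk(mag, k).indices
    best_loss, best_pert = loss.item(), pert
    for idx in candidates:
        trial = pert.flatten().clone()
        trial[idx] = -eps * torch.sign(grad[idx])  # step that reduces the loss
        trial = trial.view_as(pert)
        with torch.no_grad():
            trial_loss = F.cross_entropy(model(x + trial), target).item()
        if trial_loss < best_loss:
            best_loss, best_pert = trial_loss, trial
    return best_pert
```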

    • Robustness Verification Method for Artificial Intelligence Systems Based on Source Code Processing

      2023, 34(9):4018-4036. DOI: 10.13328/j.cnki.jos.006879


      Abstract:The development of artificial intelligence (AI) technology provides strong support for AI systems based on source code processing. Compared with natural language, source code occupies a special semantic space, and machine learning tasks related to source code processing usually employ abstract syntax trees, data dependency graphs, and control flow graphs to obtain the structured information of code and extract features. Existing studies can obtain excellent results in experimental settings through in-depth analysis of source code structures and flexible application of classifiers. However, in real application scenarios where source code structures are more complex, most AI systems related to source code processing perform poorly and are difficult to deploy in industry, which prompts practitioners to question the robustness of such AI systems. As AI-based systems are generally data-driven black boxes, their robustness is difficult to measure directly. Drawing on emerging adversarial attack techniques, some scholars in natural language processing have designed adversarial attacks for different tasks to verify model robustness and have conducted large-scale empirical studies. To address the instability of AI systems based on source code processing in complex code scenarios, this study proposes robustness verification by Metropolis-Hastings attack method (RVMHM). Firstly, a code preprocessing tool based on abstract syntax trees extracts the variable pool of the model; the MHM source code attack algorithm then perturbs variables and observes the resulting changes in the model's predictions. The robustness of AI systems is measured by observing the change in the robustness verification metric before and after the attack interferes with the data and model interaction process. Taking vulnerability prediction as a typical binary classification scenario of source code processing, this study verifies the robustness of 12 groups of AI vulnerability prediction models on three open source project datasets to illustrate the effectiveness of RVMHM for robustness verification of AI systems based on source code processing.
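
      The following toy loop sketches the Metropolis-Hastings flavor of the MHM attack that RVMHM builds on; score and rename are hypothetical callbacks (the real method operates on abstract syntax trees), and the step count and temperature are illustrative.

```python
import math
import random

def mhm_attack(score, program, variable_pool, name_pool, rename,
               steps=200, t=0.01):
    """Metropolis-Hastings identifier renaming (simplified sketch).
    `score(p)` is the model's confidence in the correct label for program p;
    `rename(p, old, new)` is a hypothetical AST-level renaming helper."""
    current, cur = program, score(program)
    for _ in range(steps):
        old = random.choice(variable_pool)
        new = random.choice(name_pool)
        proposal = rename(current, old, new)
        prop = score(proposal)
        # Always accept confidence drops; otherwise accept with the
        # Metropolis probability exp((cur - prop) / t).
        if random.random() < math.exp(min(0.0, (cur - prop) / t)):
            current, cur = proposal, prop
    return current   # adversarial program; the attack succeeds if the label flips
```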

    • Research on Fairness in Deep Learning Models

      2023, 34(9):4037-4055. DOI: 10.13328/j.cnki.jos.006872


      Abstract:In recent years, deep neural networks have been widely employed in real decision-making systems. Unfairness in such systems exacerbates social inequality and harms society. Researchers have therefore carried out extensive studies on the fairness of deep learning systems, whereas most focus on group fairness and cannot guarantee fairness within groups. To this end, this study defines two individual fairness measures. The first is the individual fairness rate IFRb, based on output labels: the probability that two similar samples receive the same predicted label. The second is the individual fairness rate IFRp, based on output distributions: the probability that two similar samples receive similar predicted output distributions; the latter is a stricter notion of individual fairness. In addition, this study proposes an algorithm, IIFR, to improve the individual fairness of deep learning models. The algorithm employs cosine similarity to measure the similarity between samples and selects similar sample pairs via a similarity threshold chosen per application. During training, the output difference of similar sample pairs is added to the objective function as an individual fairness loss term, which penalizes similar training samples with large differences in model output. The experimental results show that IIFR outperforms state-of-the-art methods on individual fairness improvement and can maintain group fairness while improving individual fairness.
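
      A minimal PyTorch sketch of the IIFR idea as described: cosine similarity selects similar pairs within a batch, and their output difference enters the objective as a fairness penalty. The threshold and weight are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def iifr_loss(model, x, y, sim_threshold=0.95, lam=0.5):
    """Task loss plus an individual-fairness penalty (simplified sketch):
    batch samples whose cosine similarity exceeds the threshold are pushed
    to produce similar output distributions."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    feats = F.normalize(x.flatten(1), dim=1)
    sim = feats @ feats.T                                   # pairwise cosine
    pairs = torch.triu((sim > sim_threshold).float(), diagonal=1)

    probs = F.softmax(logits, dim=1)
    dist = torch.cdist(probs, probs, p=1)                   # output differences
    fair_loss = (dist * pairs).sum() / pairs.sum().clamp(min=1)
    return task_loss + lam * fair_loss
```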

    • Empirical Study on Pull-request Revisions in Open Source Software Community of TensorFlow

      2023, 34(9):4056-4068. DOI: 10.13328/j.cnki.jos.006873


      Abstract:The recent boom in artificial intelligence (AI) benefits from the open collaboration of the open source software (OSS) community. An increasing number of OSS developers contribute to AI projects by submitting pull requests (PRs). However, the quality of PRs submitted by external contributors varies, and AI project management teams have to review PRs and ask contributors to revise them if necessary. Since revisions directly affect the review efficiency and acceptance of PRs, a better understanding of PR revisions is important. This study conducts an empirical study based on a set of PRs and their revision histories collected from the TensorFlow project. It first manually analyzes a sample of commit messages and PR review comments and constructs a taxonomy of revision types. Then, according to the defined taxonomy, a large set of PR revisions is manually labeled. Based on this dataset, the frequency and order of each type of revision are explored, along with the frequency distribution, order distribution, and correlations between different types of revisions. The empirical findings show that there are 11 different types of revisions, which fall into three categories. Evolvability revisions occur more frequently than other revision types, and functional revisions are more likely to occur in early PR updates than evolvability revisions and other types of revisions. Structure-related revisions have a high chance of co-occurring or occurring adjacently with other revisions. Configuration-related revisions and rebasing revisions are more likely to appear in succession. These results can help AI open source practitioners and researchers better understand the PR revision process, guide the review and revision of PRs, and improve the collaboration efficiency of open source communities.

    • AIOps in Practice: Status Quo and Standardization

      2023, 34(9):4069-4095. DOI: 10.13328/j.cnki.jos.006876


      Abstract:The operation of IT systems faces many challenges, including rapid expansion of IT scale, increasingly complex system architectures, and a growing demand for autonomy. By employing big data and machine learning technologies to analyze massive operation data, artificial intelligence for IT operations (AIOps) can help IT operators run and maintain IT systems more efficiently. However, enterprises often encounter various difficulties when putting AIOps into practice, so AIOps standards are required to guide enterprises in building AIOps capabilities. To promote such standardization, this study surveys enterprises practicing AIOps across various industries to analyze the current state of AIOps practice. Existing standards on operations, artificial intelligence, and AIOps are studied to establish the current progress of AIOps standardization. Based on these findings, the study proposes an AIOps capability standard framework, AIOps-OSA, which lists the critical points of organization, scenarios, and abilities from the perspective of building enterprise AIOps capabilities. During actual standard preparation, a guiding AIOps standard for enterprises can be formed by applying detailed requirements to AIOps-OSA.

    • Many-objective Evolutionary Algorithm Based on Curvature Estimation of Pareto Front

      2023, 34(9):4096-4113. DOI: 10.13328/j.cnki.jos.006648


      Abstract:The decomposition-based many-objective evolutionary algorithm is the main approach to solving many-objective optimization problems, but its performance largely depends on how well the adopted reference vectors match the real Pareto front (PF). Existing decomposition-based many-objective evolutionary algorithms can hardly handle all kinds of many-objective optimization problems with different PFs at the same time. To address this issue, this study proposes a many-objective evolutionary algorithm based on curvature estimation of the PF (MaOEA-CE). The core of the proposed algorithm has two aspects. Firstly, based on the estimated PF curvature, different reference vectors are generated in each iteration to gradually match the real PFs of different kinds of problems. Secondly, with the estimated curvature, an appropriate aggregation function is used to select elite solutions and dynamically adjust the generated reference vectors during environmental selection, which improves convergence while maintaining population diversity. Moreover, MaOEA-CE is compared with seven advanced many-objective algorithms on three mainstream test suites, DTLZ, WFG, and MaF, to verify its effectiveness. The experimental results demonstrate that MaOEA-CE is highly competitive.
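
      One way to make the curvature-estimation idea concrete: if the normalized front is assumed to satisfy sum_i f_i^p = 1 (the family used by common benchmark suites), the exponent p can be recovered by root finding. This is an illustrative sketch, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import brentq

def estimate_curvature(front):
    """Estimate p such that the normalized front approximately satisfies
    sum_i f_i^p = 1; p < 1 suggests a convex front, p > 1 a concave one."""
    def residual(p):
        return np.mean(np.sum(front ** p, axis=1)) - 1.0
    return brentq(residual, 0.05, 20.0)   # sign change brackets the root

# Points on a spherical (p = 2) front in three objectives.
rng = np.random.default_rng(0)
theta = rng.random((200, 2)) * np.pi / 2
front = np.stack([np.sin(theta[:, 0]) * np.cos(theta[:, 1]),
                  np.sin(theta[:, 0]) * np.sin(theta[:, 1]),
                  np.cos(theta[:, 0])], axis=1)
print(round(estimate_curvature(front), 2))   # close to 2.0
```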

    • Code Comment Generation Based on Concept Propagation for Software Projects

      2023, 34(9):4114-4131. DOI: 10.13328/j.cnki.jos.006637


      Abstract:Comment generation for software code has been an important research task in software engineering in the past few years. Several research efforts have achieved impressive results on open-source datasets that contain copious <code snippet, comment> pairs. In the practice of software enterprises, however, the code to be commented usually belongs to a software project, and it must first be decided for which code lines comment generation will perform well; moreover, the code snippets to be commented differ in length and granularity. A code comment generation method is therefore required that integrates commenting decisions with comment generation and is resistant to noise. To this end, CoComment, a software project-oriented code comment generation approach, is proposed in this study. The approach automatically extracts domain-specific basic concepts from software project documents and then uses code parsing and text matching to propagate and expand these concepts. On this basis, automatic code commenting decisions are made by locating code lines or segments related to these concepts, and corresponding natural language comments with high readability are generated by fusing concepts and contexts with templates. Comparative experiments are conducted on three enterprise software projects containing more than 46,000 manually annotated code comments. The experimental results demonstrate that the proposed approach can effectively make code commenting decisions and generate more helpful code comments than existing methods, providing an integrated solution to commenting decisions and comment generation for software projects.
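
      The commenting-decision step can be pictured as concept-to-code matching. The sketch below is a deliberately simplified stand-in: the concept set and matching rule are invented, whereas the paper propagates concepts via code parsing and text matching before deciding.

```python
import re

# Hypothetical domain concepts mined from project documents.
concepts = {"order", "payment", "inventory"}

def commenting_decision(code_lines):
    """Flag lines that mention domain concepts as candidates for
    comment generation (toy version of the concept-propagation idea)."""
    decisions = []
    for line in code_lines:
        tokens = set(re.findall(r"[a-z]+", line.lower()))
        hits = concepts & tokens
        if hits:
            decisions.append((line.strip(), hits))
    return decisions

src = ["def cancel_order(order_id):",
       "    stock = inventory.restock(order_id)"]
for line, hits in commenting_decision(src):
    print(f"{line!r}  -> concepts {sorted(hits)}")
```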

    • Inductive SQL Synthesis with Positive and Negative Tuples

      2023, 34(9):4132-4152. DOI: 10.13328/j.cnki.jos.006646


      Abstract:SQL is a programming language widely used to operate relational databases. Many users (such as data analysts and junior programmers) encounter various difficulties when writing SQL queries due to a lack of programming experience and knowledge of SQL syntax. Research on automatically synthesizing SQL queries from input-output (I/O) example tables has therefore attracted more and more attention. The inductive SQL synthesis with positive and negative tuples (ISST) method proposed in this study automatically synthesizes SQL queries that meet users' expectations from user-edited I/O example tables containing a small number of tuples. The ISST method comprises five main stages: constructing SQL query sketches, expanding the worksheet data, dividing the sets of positive and negative examples, inductively synthesizing selection predicates, and verifying and ranking. The candidate SQL queries are verified on the online database PostgreSQL and then scored and ranked according to the principle of Occam's razor. The ISST method is implemented in Java and evaluated on a test set containing 28 samples; it correctly synthesizes 24 of them, taking an average of 2 seconds.
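
      The predicate-synthesis stage can be illustrated by brute-force enumeration against positive and negative tuples, as in the toy sketch below; the real ISST method is considerably more elaborate and verifies candidates on PostgreSQL. The table and column names are invented.

```python
from itertools import product

# Toy I/O example: rows the user wants (positive) and rejects (negative).
positive = [{"age": 34, "dept": "R&D"}, {"age": 41, "dept": "R&D"}]
negative = [{"age": 29, "dept": "Sales"}, {"age": 52, "dept": "HR"}]

OPS = {"=": lambda a, b: a == b, ">": lambda a, b: a > b,
       "<": lambda a, b: a < b}

def consistent_predicates(pos, neg):
    """Yield predicates accepting every positive tuple and no negative one."""
    constants = {(c, r[c]) for r in pos + neg for c in r}
    for (col, const), (name, op) in product(constants, OPS.items()):
        try:
            if all(op(r[col], const) for r in pos) and \
               not any(op(r[col], const) for r in neg):
                yield f"{col} {name} {const!r}"
        except TypeError:
            continue   # operator not applicable to this column type

# Shorter predicates would rank first under Occam's razor.
for pred in consistent_predicates(positive, negative):
    print("SELECT * FROM t WHERE", pred)   # -> dept = 'R&D'
```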

    • Efficient Parallel Propagation Algorithm for FDE

      2023, 34(9):4153-4166. DOI: 10.13328/j.cnki.jos.006644


      Abstract:Constraint programming (CP) is one of the classical paradigms for representing and solving combinatorial problems. Extensional constraints, also called table constraints, are the most common type of constraint in CP, and most CP problems can be expressed with them. During problem solving, consistency algorithms are used to reduce the search space, and simple table reduction (STR) algorithms, including the Compact-Table (CT) and STRbit algorithms, are the most efficient consistency algorithms for table constraints; both maintain generalized arc consistency (GAC) during search. Full pairwise consistency (fPWC) has stronger pruning capability than GAC, and the most efficient fPWC maintenance algorithm is PW-CT. Over the years, many consistency algorithms for table constraints have been proposed to improve pruning capability and efficiency. Factor-decomposition encoding (FDE) re-encodes a problem, enlarging the problem model to some extent; as a result, maintaining the relatively weak GAC on the new model is equivalent to maintaining the strong fPWC on the original model. Currently, the appropriate STR algorithms for FDE are STRFDE and STR2 rather than CT, as CT may cause memory overflow. In maintaining consistency, each constraint must be called iteratively to execute its consistency algorithm and filter the search space; this process is called constraint propagation. The dynamic submission scheme is a parallel constraint propagation scheme that schedules constraints to execute their propagation algorithms in parallel, and it is particularly effective on large-scale problems. This study therefore proposes PSTRFDE for FDE by improving the STRFDE algorithm and the dynamic submission propagation scheme. PSTRFDE can be embedded into the dynamic submission scheme to further improve the efficiency of constraint solving. Extensive experiments indicate that PSTRFDE uses less memory than CT and STRbit and is more efficient than STRFDE and STR2. The research demonstrates that PSTRFDE is currently the most efficient filtering algorithm for FDE.
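
      For intuition, the sketch below shows the basic simple-table-reduction step that the STR family refines: unsupported tuples are dropped, and domains are shrunk to the values that survive. Real STR2/CT/STRbit add sophisticated data structures (sparse sets, bitsets) on top of this core loop.

```python
def simple_table_reduction(table, domains):
    """One STR pass (sketch): drop tuples containing values outside the
    current domains, then shrink each variable's domain to the values
    that still appear in some surviving tuple (this enforces GAC)."""
    table = [t for t in table
             if all(v in domains[i] for i, v in enumerate(t))]
    new_domains = [set(t[i] for t in table) for i in range(len(domains))]
    return table, new_domains

table = [(1, 2), (1, 3), (2, 3)]
domains = [{1, 2}, {2}]
print(simple_table_reduction(table, domains))   # -> ([(1, 2)], [{1}, {2}])
```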

    • Review Articles
    • Survey on English and Chinese Discourse Structure Analysis

      2023, 34(9):4167-4194. DOI: 10.13328/j.cnki.jos.006650


      Abstract:Discourse structure analysis aims to understand the overall structure of an article and the semantic relationships between its parts. As a research hotspot in natural language processing, it has developed rapidly in recent years. This study first summarizes the mainstream discourse structure analysis theories for English and Chinese and then introduces research on the popular English and Chinese discourse corpora as well as the corresponding computational models. On this basis, the study surveys the current research landscape of discourse structure analysis in Chinese and English and constructs a research framework. Current research trends and focuses are then summarized, and the application of discourse structure in downstream tasks is briefly introduced. Finally, the study points out the issues and challenges in current Chinese discourse structure analysis to provide guidance and help for future research.

    • Generative Adversarial Network Based on Bidirectional Constraints

      2023, 34(9):4195-4209. DOI: 10.13328/j.cnki.jos.006655


      Abstract:Improving the quality and diversity of generated samples has always been one of the main challenges in the field of generative adversarial networks (GANs). To this end, a bidirectional constraint GAN (BCGAN) is proposed. Compared with traditional GAN variants, this network adds a second generator module to the architecture, and the two generators approach the data distribution of real samples from two different directions. According to the BCGAN architecture, this study then designs a new loss function and analyzes and proves it theoretically. During BCGAN training, the diversity of generated samples is enriched by increasing the distance between the two generated data distributions, and the discriminator's difference between the two generated data distributions is reduced to stabilize training and thereby improve the quality of generated samples. Finally, experiments are carried out on one synthetic dataset and three open challenge datasets. These experiments show that, compared with other generative methods, the proposed method fits the real data distribution better and effectively improves both the quality and diversity of generated samples. In addition, its training process is smoother and more stable.

    • Classifier Chains Method Based on Association Rules and Topological Sequences

      2023, 34(9):4210-4224. DOI: 10.13328/j.cnki.jos.006659


      Abstract:The order of label learning is crucial to a classifier chains method. Therefore, this study proposes a classifier chains method based on association rules and topological sequences (TSECC). Specifically, a measurement strategy for label dependencies based on strong association rules is designed by leveraging frequent patterns. A directed acyclic graph is then constructed from the dependency relationships among the labels, and all vertices in the graph are topologically sorted. Finally, the resulting topological sequence is used as the order of label learning to iteratively update each label's classifier in turn. In particular, to reduce the impact of "lonely" labels, i.e., those with no or weak dependencies on other labels, on the prediction performance of the remaining labels, TSECC excludes them from the topological sequence and trains them separately with a binary relevance model. Experimental results on a variety of public multi-label datasets show that TSECC effectively improves classification performance.
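
      A minimal sketch of the chain-ordering step with hypothetical labels and rules: mined dependencies form a DAG, a topological sort fixes the chain order, and dependency-free labels are left to binary relevance.

```python
import networkx as nx

# Hypothetical strong association rules mined from the training labels:
# an edge (a, b, conf) means label a is a good predictor of label b.
rules = [("sports", "outdoor", 0.9), ("beach", "outdoor", 0.8),
         ("urban", "indoor", 0.7)]
all_labels = {"sports", "outdoor", "beach", "urban", "indoor", "night"}

dag = nx.DiGraph()
dag.add_weighted_edges_from(rules)
assert nx.is_directed_acyclic_graph(dag)   # cycle-forming rules would be dropped

chain_order = list(nx.topological_sort(dag))
print("chain order:", chain_order)

# "Lonely" labels stay out of the chain and get plain binary-relevance models.
lonely = sorted(all_labels - set(chain_order))
print("lonely labels:", lonely)            # -> ['night']
```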

    • Distributed Link Scheduling in Dynamic Wireless Networks under SINR Model

      2023, 34(9):4225-4238. DOI: 10.13328/j.cnki.jos.006634


      Abstract:Interference among wireless signals hinders concurrent transmission and reduces the throughput of wireless networks. Link scheduling is an effective way to improve throughput and decrease transmission delay in wireless networks, and the signal-to-interference-plus-noise ratio (SINR) model accurately describes the inherent characteristics of wireless signal propagation and truly reflects the interference among wireless signals. This study therefore proposes an online distributed link scheduling (OLD_LS) algorithm with a constant approximation factor under the SINR model for dynamic wireless networks. Online means that nodes can join and leave the network at any time; this arbitrary behavior reflects the dynamic characteristics of wireless networks. The OLD_LS algorithm partitions the network region into hexagons to localize the global interference of the SINR model. In addition, a leader election (LE) subroutine for dynamic networks is designed. It is shown that as long as the dynamic rate of nodes is less than $1/\varepsilon$, LE elects a leader with high probability in time $O(\log n + \log R)$, where $\varepsilon$ is a constant satisfying $\varepsilon \leqslant 5(1 - 2^{1 - \alpha/2})/6$, $\alpha$ is the path loss exponent, $n$ is the number of senders, and $R$ is the longest link length. To the best of our knowledge, the algorithm proposed in this study is the first online distributed link scheduling algorithm for dynamic wireless networks.
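
      The SINR feasibility test at the heart of such scheduling can be written in a few lines; the sketch below uses the standard path-loss form with illustrative parameters (the power, path loss exponent, threshold, and coordinates are made up).

```python
import math

def sinr(links, i, power=1.0, alpha=3.0, noise=1e-9):
    """SINR at the receiver of link i given concurrent transmissions.
    links: list of (sender_xy, receiver_xy) pairs; path loss ~ d^(-alpha)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    rx = links[i][1]
    signal = power * dist(*links[i]) ** (-alpha)
    interference = sum(power * dist(s, rx) ** (-alpha)
                       for j, (s, _) in enumerate(links) if j != i)
    return signal / (interference + noise)

# A set of links may transmit concurrently only if every link's SINR
# clears the decoding threshold beta (here beta = 2.0).
links = [((0, 0), (1, 0)), ((10, 0), (11, 0)), ((5, 5), (5, 6))]
print([sinr(links, i) > 2.0 for i in range(len(links))])
```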

    • New Key Recovery Attack Based on Periodic Property

      2023, 34(9):4239-4255. DOI: 10.13328/j.cnki.jos.006636


      Abstract:This study proposes a new classical key recovery attack against schemes such as Feistel, Misty, and Type-1/2 generalized Feistel schemes (GFS), which creatively combines the birthday attack with the periodic property exploited by Simon's algorithm. Although Simon's algorithm can recover the periodic value in polynomial time, this study requires the birthday bound to recover the corresponding periodic value in the classical setting. By this new attack, the key to a 5-round Feistel-F scheme can be recovered with a time complexity of $O(2^{3n/4})$ using $O(2^{n/4})$ chosen plaintexts and ciphertexts, with a memory complexity of $O(2^{n/4})$. Compared with the results of Isobe and Shibutani, this result not only covers one more round but also requires lower memory complexity. For the Feistel-FK scheme, a 7-round key recovery attack is constructed. In addition, the above approach is applied to construct key recovery attacks against Misty schemes and Type-1/2 GFS. Specifically, key recovery attacks against the 5-round Misty L-F and Misty R-F schemes and against the 6-round Misty L-KF/FK and Misty R-KF/FK schemes are given; for the $d$-branch Type-1 GFS, a $d^2$-round key recovery attack is presented, and when $d \geqslant 6$, the number of attacked rounds exceeds that of existing key recovery attacks.
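
      The classical replacement of Simon's algorithm alluded to above is a birthday-bound collision search for the period. The toy sketch below finds s with f(x) = f(x XOR s) for a contrived f; real attacks embed such a periodic function in the cipher structure.

```python
import hashlib
import os

def find_period_classically(f, n_bits, samples):
    """Birthday-style search for a period s with f(x) = f(x XOR s).
    Simon's algorithm finds s in polynomial time on a quantum computer;
    classically we look for a collision among ~2^(n/2) random inputs."""
    seen = {}
    for _ in range(samples):
        x = int.from_bytes(os.urandom((n_bits + 7) // 8), "big") % (1 << n_bits)
        y = f(x)
        if y in seen and seen[y] != x:
            return seen[y] ^ x   # colliding inputs differ by the period
        seen[y] = x
    return None

# Toy periodic function: constant on the cosets {x, x XOR s}.
s = 0b1011
f = lambda x: hashlib.sha256(str(min(x, x ^ s)).encode()).hexdigest()
print(bin(find_period_classically(f, n_bits=4, samples=200)))   # -> 0b1011
```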

    • Secure Data Sharing Solution for Portable Health Clinics

      2023, 34(9):4256-4274. DOI: 10.13328/j.cnki.jos.006638


      Abstract:With the rapid development of technologies such as the Internet of Things (IoT) and cloud computing, portable health clinics (PHCs) have been realized and widely used in telemedicine. Relying on the significant advantages of 5G communications, China has actively promoted the construction of smart healthcare and built a multi-function, high-quality telemedicine information service platform. The realization of telemedicine represented by PHCs is inseparable from the technical support of remote data-sharing systems. At present, remote data-sharing systems combining IoT and the cloud server (CS) have attracted wide attention for their flexibility and efficiency, but their privacy and security issues are rarely studied. Considering the sensitivity of medical data, this paper studies the security and privacy issues in the PHC data-sharing system. In the PHC system, this study achieves secure uploading of IoT awareness data, normalization of personalized ciphertexts, dynamic multi-user fine-grained access control, and efficient decryption operations, and it also presents formal security verification. The specific innovations of this study are as follows. (1) The classical proxy re-encryption (PRE) and attribute-based encryption algorithms are improved, and an IPRE-TO-FAME combined encryption mechanism is proposed to ensure data-sharing security in the PHC system with cloud-edge collaboration. (2) To address the challenge of key updates caused by many highly distributed IoT terminals, this paper uses the idea of PRE to realize key updates through unilateral transformation, without changing the keys held by IoT terminals. Meanwhile, the re-encryption entities can be regarded as fully trusted in the application scenarios of this study, unlike in the conventional PRE mechanism, where the re-encryption entities are usually untrusted third-party servers. The conventional PRE algorithm is therefore improved into an efficient improved PRE (IPRE) algorithm adapted to the scenarios of this study. (3) The classical fast attribute-based message encryption (FAME) mechanism is improved to enable dynamic multi-user fine-grained access control, so that users can easily use portable intelligent devices to access data anytime and anywhere. The security proofs, theoretical analysis, and experimental results reveal that the proposed solution is highly secure and practical, making it an effective way to ensure secure PHC data sharing.

    • Computation Offloading Scheme Based on Two-layer Stackelberg Game for Multi-access Edge Computing

      2023, 34(9):4275-4293. DOI: 10.13328/j.cnki.jos.006642


      Abstract:The computation offloading problem in multi-access edge computing (MEC) has become a research focus. Current computation offloading schemes consider only the cloud, edge, and end structures and do not take into account the distinct attributes of public and private clouds. This study proposes a novel computation offloading scheme that considers the relationship between the public cloud and the private cloud in edge computing and regards the public cloud as a supplement to private cloud resources, alleviating the insufficient computing power caused by limited private cloud resources. Moreover, a two-layer Stackelberg game is established to solve the computation offloading problem. The optimal strategies of each player are obtained by analyzing the strategies and profits of the public cloud, the private cloud, and users, and the existence and uniqueness of the Nash equilibrium solution of the two-layer game are proved. Simulation results and analysis verify the feasibility of the computation offloading scheme based on the two-layer Stackelberg game. Compared with a computation offloading scheme based on a single-layer Stackelberg game, the proposed scheme is more efficient and better suited to edge computing environments.
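
      A toy flavor of the leader-follower structure: the lower layer's best response is folded into the upper layer's profit, which the leader then optimizes. The linear demand and cost below are invented for illustration, not the paper's model.

```python
def best_response_iteration(price0=1.0, rounds=50, lr=0.2):
    """Backward-induction sketch of a two-layer pricing game: the leader
    (cloud) sets a price, the followers (users) choose how much to offload,
    and the leader adjusts the price toward higher profit."""
    def user_demand(price):              # followers offload less as price rises
        return max(0.0, 10.0 - 2.0 * price)

    def cloud_profit(price):             # 0.5 = hypothetical unit cost
        return (price - 0.5) * user_demand(price)

    price = price0
    for _ in range(rounds):
        # Numerical gradient ascent on the leader's profit.
        grad = (cloud_profit(price + 1e-4) - cloud_profit(price - 1e-4)) / 2e-4
        price += lr * grad
    return price, user_demand(price)

# Converges to the Stackelberg equilibrium price 2.75 with demand 4.5.
print(best_response_iteration())
```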

    • Throughput Model of Starlike Sharding Structure for Blockchains and Its Applications

      2023, 34(9):4294-4309. DOI: 10.13328/j.cnki.jos.006651


      Abstract:Parallelization is one of the most effective blockchain scalability solutions, and existing parallelization schemes can be classified into two categories according to the network structure: starlike structures and parallel structures. However, current research lacks analysis of the factors that determine the performance boundary and the performance bottleneck of starlike sharding structures. To address this problem, this study abstracts a general starlike sharding structure for blockchains from schemes adopting different starlike sharding structures and quantitatively models the transaction process in this general structure to derive the relationship between throughput and the number of shards. According to the constructed model, a performance limit exists in the starlike sharding structure, along with an optimal number of shards that maximizes system throughput, and the maximal throughput is an explicit function of the functional complexity of the mainchain. With the proposed throughput model, blockchain systems can balance the number of shards against the functional complexity of the mainchain to reach the theoretical upper limit of system throughput under their specific designs. The work of this study therefore has significant guiding value for the design of schemes adopting starlike parallelization.
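
      Purely as an illustration of the trade-off the model captures, the sketch below assumes a hypothetical functional form: per-shard capacity grows linearly with the shard count k, while the mainchain's coordination cost also grows with k, so the minimum of the two curves peaks at an optimal shard count. The constants and exponents are invented, not the paper's.

```python
import numpy as np

def system_throughput(k, shard_tps=500, mainchain_capacity=120_000, beta=1.5):
    """Hypothetical instantiation: shards add capacity linearly, while the
    mainchain caps throughput at mainchain_capacity / k^(beta - 1)."""
    return np.minimum(k * shard_tps, mainchain_capacity / k ** (beta - 1))

ks = np.arange(1, 129)
tps = system_throughput(ks)
best = ks[np.argmax(tps)]
print(f"optimal shard count ~ {best}, peak throughput ~ {int(tps.max())} TPS")
```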

    • Review Articles
    • Progress of Lattice-based Cryptanalysis of RSA and Its Variant Algorithms

      2023, 34(9):4310-4335. DOI: 10.13328/j.cnki.jos.006657


      Abstract:Lattice-based cryptanalysis, an analysis method that uses algorithms for hard lattice problems to analyze the security of public-key cryptosystems, has become one of the powerful mathematical tools for studying the security of Rivest-Shamir-Adleman (RSA)-type cryptographic algorithms. The key point of this method is the construction of the lattice basis. A general strategy for lattice basis construction exists, but it fails to fully and flexibly utilize the algebraic structure of the RSA algorithm and its variants. In recent years, lattice-based cryptanalysis of RSA-type algorithms has mostly focused on introducing special lattice basis construction techniques on top of the general strategy. This study starts by outlining lattice-based cryptanalysis and the general strategy for lattice basis construction and summarizing several commonly used lattice basis construction techniques. Subsequently, the main achievements in lattice-based cryptanalysis of the standard RSA algorithm are reviewed, covering factoring with known bits, small private exponent attacks, and partial key exposure attacks. The special algebraic structures of several mainstream RSA variants and the lattice basis construction techniques applicable to them are then summarized. Finally, the available work on lattice-based cryptanalysis of the RSA algorithm and its variants is classified and summed up, and prospects for the research and development of lattice-based cryptanalysis are presented.

    • Blockchain-based Validation Method for Inter-domain Routing Policy Compliance

      2023, 34(9):4336-4350. DOI: 10.13328/j.cnki.jos.006660


      Abstract:Various business relationships and routing policies exist among the autonomous systems (ASes) in an inter-domain routing system. Routing propagation violating the export policy agreements among the ASes is likely to cause route leaks, ultimately leading to serious consequences such as network interruption, traffic eavesdropping, and link overload. Verifying routing policy compliance is thus essential for ensuring the security and stability of the inter-domain routing system. However, the dual requirements of ASes for the autonomous configuration and privacy protection of local routing policies increase the difficulty in verifying routing policy compliance and consequently pose a hard problem that remains to be settled properly in the field of inter-domain routing security. This study proposes a blockchain-based verification method for inter-domain routing policy compliance. With blockchain and the cryptographic technology as trust endorsements, this method enables ASes to publish, interact, verify, and execute routing policy expectations in a safe and private manner. The authenticity of the routing propagation process is ensured by generating route attestations corresponding to routing updates. Thus, the verification of routing policy compliance is completed by multi-domain cooperation. A prototype system is implemented, and experiments and analyses are carried out on real routing data. The results show that the proposed method offers traceable verification of export policy compliance of routing propagation without leaking the business relationships and local routing policies among ASes, suppresses policy-violating routing propagation effectively with reasonable overhead, and maintains a remarkable ability to suppress policy-violating routing even in partial deployment scenarios.

    • Novel Imperceptible Watermarking Attack Method Based on Residual Learning

      2023, 34(9):4351-4361. DOI: 10.13328/j.cnki.jos.006661


      Abstract:Although traditional watermarking attack methods can obstruct the correct extraction of watermark information, they greatly reduce the visual quality of watermarked images. Therefore, a novel imperceptible watermarking attack method based on residual learning is proposed. Specifically, a watermarking attack model based on a convolutional neural network is constructed for end-to-end nonlinear learning between watermarked and unwatermarked images, accomplishing a mapping from the watermarked image to the unwatermarked one and thereby achieving the watermarking attack. A suitable number of feature extraction blocks is then selected according to the embedding region of the watermark information to extract a feature map containing the watermark. Because the difference between the two images is small, the learning ability of the attack model is limited during training, making it difficult for the model to converge. A residual learning mechanism is therefore introduced to improve the convergence speed and learning ability of the attack model: the imperceptibility of the attacked image is improved by reducing the difference between the residual image (the watermarked image minus the extracted feature map) and the unwatermarked one. In addition, a dataset for training the attack model is constructed from the super-resolution dataset DIV2K2017, with a robust color image watermarking algorithm based on quaternion exponent moments serving as the attack target. The experimental results show that the proposed watermarking attack model can attack a robust watermarking algorithm with a high bit error rate (BER) without compromising the visual quality of watermarked images.
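
      The residual-learning idea can be sketched as a DnCNN-style network that predicts the watermark component and subtracts it from the input; the depth, width, and training loss below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualAttackNet(nn.Module):
    """Sketch of a residual-learning attack network: the CNN predicts the
    watermark feature map, which is subtracted from the watermarked image."""
    def __init__(self, channels=3, width=64, blocks=8):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(blocks):                 # feature extraction blocks
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, watermarked):
        residual = self.body(watermarked)       # estimated watermark component
        return watermarked - residual           # attacked (de-watermarked) image

# Training pairs would come from applying a watermarking algorithm to
# DIV2K images; e.g., loss = ||net(watermarked) - original||_1.
```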

    • Probabilistic Memory Auto-encoding Network for Abnormal Behavior Detection in Surveillance Videos

      2023, 34(9):4362-4377. DOI: 10.13328/j.cnki.jos.006641


      Abstract:Abnormal behavior detection is an important function of intelligent surveillance systems and plays an active role in ensuring public security. To improve the detection rate of abnormal behavior in surveillance videos, this study designs a semi-supervised abnormal behavior detection network based on a probabilistic memory model from the perspective of learning the distribution of normal behavior, addressing the great imbalance between normal and abnormal behavior data. The network takes an auto-encoding network as the backbone and uses the gap between the predicted future frame and the real frame to measure the intensity of the anomaly. When extracting spatiotemporal features, the backbone employs three-dimensional causal convolution layers and temporally shared fully connected layers to avoid future information leakage and to preserve the temporal order of information. As auxiliary modules, a probabilistic model and a memory module are designed from the perspectives of probability entropy and the diverse patterns of normal behavior data to improve the quality of video frame reconstruction in the backbone. Specifically, the probabilistic model uses an autoregressive process to fit the input data distribution, which drives the model to converge to a low-entropy state of the normal distribution; the memory module stores prototypical features of normal behavior in the historical data to realize the coexistence of multi-modal data and to prevent the backbone from reconstructing abnormal video frames through excessive generalization. Finally, ablation experiments and comparisons with classic algorithms are carried out on public datasets to examine the effectiveness of the proposed algorithm.

    • Feature Constrained Restricted Distillation Learning for Visual Anomaly Detection

      2023, 34(9):4378-4391. DOI: 10.13328/j.cnki.jos.006643


      Abstract:This study proposes a new feature-constrained distillation learning method for visual anomaly detection, which makes full use of the features of the teacher model to instruct the student model in efficiently identifying abnormal images. Specifically, the vision transformer (ViT) model is introduced as the backbone network of the anomaly detection task, and a central feature strategy is put forward to constrain the output features of the student network. Considering the strong expressiveness of the teacher network's features, the central feature strategy dynamically generates feature representation centers of normal samples for the student network from the teacher network. In this way, the student network's ability to describe the feature output of normal data is improved, and the feature difference between the student and teacher networks on abnormal data is widened. In addition, to minimize the difference between the student and teacher networks in the feature representation of normal images, the proposed method leverages a Gram loss function to constrain the relationships among the coding layers of the student network. Experiments are conducted on three general anomaly detection datasets and one real-world industrial anomaly detection dataset, and the experimental results demonstrate that the proposed method significantly improves visual anomaly detection performance compared with state-of-the-art methods.
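
      A minimal sketch of a Gram-style constraint across the student's coding layers, assuming token-aligned (batch, tokens, dim) ViT features; the exact layer pairing and weighting in the paper may differ.

```python
import torch

def gram_matrix(feat):
    """Gram matrix of a (batch, tokens, dim) feature map, per sample."""
    return feat @ feat.transpose(1, 2) / feat.shape[-1]

def gram_loss(student_feats, lam=1.0):
    """Push consecutive coding layers of the student to keep consistent
    token-to-token structure (illustrative pairing of adjacent layers;
    all layers are assumed to share the same token count)."""
    loss = 0.0
    for f1, f2 in zip(student_feats[:-1], student_feats[1:]):
        loss = loss + torch.mean((gram_matrix(f1) - gram_matrix(f2)) ** 2)
    return lam * loss

# Example: three coding-layer outputs for a batch of 4 images.
feats = [torch.randn(4, 197, 768) for _ in range(3)]
print(gram_loss(feats))
```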

    • Longitudinal Prediction of Lung Tumor Based on Conditional Adversarial Spatiotemporal Encoder

      2023, 34(9):4392-4406. DOI: 10.13328/j.cnki.jos.006656


      Abstract:Observing tumor location and growth is an important part of formulating tumor treatment plans. Medical-image-based methods can be employed to observe the status of a patient's tumor non-invasively, predict tumor growth, and ultimately help physicians develop a patient-specific treatment plan. This study proposes a new deep network model, the conditional adversarial spatiotemporal encoder model, to predict tumor growth. The model consists of three main parts: a tumor prediction generator, a similarity score discriminator, and a condition formed from the patient's personal information. The tumor prediction generator predicts the tumor in the next period from the tumor images of two previous periods. The similarity score discriminator calculates the similarity between the predicted tumor and the real one. In addition, the patient's personal information is added as a condition to the tumor growth prediction process. The proposed model is experimentally verified on two collected medical datasets and achieves a recall of 76.10%, an accuracy of 91.70%, and a Dice coefficient of 82.4%, indicating that it can accurately predict tumor images for the next period.

    • Graph Neural Network Training Acceleration for Multi-GPUs

      2023, 34(9):4407-4420. DOI: 10.13328/j.cnki.jos.006647


      Abstract:In recent years, graph neural networks (GNNs) have attracted wide attention due to their powerful and flexible representation ability. Considering the increasing scale of graph data and the limited video memory capacity, training GNNs with traditional general-purpose deep learning systems becomes more challenging and cannot fully exploit the performance of GPU devices. Achieving efficient use of GPU hardware for GNN training is therefore an important research issue in this field. Traditional approaches employ sparse matrix multiplication in the GNN calculation process and, when the video memory capacity of GPU devices is limited, distribute the computation across devices by distributed matrix multiplication. Their shortcomings are mainly as follows: (1) sparse matrix multiplication ignores the sparse distribution of the graph data, which results in low computation efficiency; (2) these methods ignore the computation and memory access characteristics of GPUs and fail to utilize the hardware resources. To improve training efficiency, some studies propose to reduce the cost of each iteration and the storage requirements through graph sampling techniques, which also support flexible distributed scaling; due to their stochasticity and variance, however, these methods often affect model accuracy. Therefore, this study proposes a high-performance GNN training framework for multiple GPUs. Different GNN partition strategies for multiple GPUs are explored, and the influence of different graph ordering patterns on GPU performance during GNN calculation is investigated while preserving model accuracy. Moreover, block-sparsity-aware optimization methods for GPU memory access are put forward. A prototype system is implemented in C++ and cuDNN. Experiments on four large-scale GNN datasets demonstrate that (1) the graph re-ordering method improves the GPU cache hit rate by around 40% and doubles the computation speed, and (2) compared with the existing system DGL, the proposed system achieves a total speedup of 5.8x.
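
      As one concrete example of a graph-ordering pattern, the sketch below applies reverse Cuthill-McKee reordering (one of several possible orderings; the paper investigates its own patterns) to cluster non-zeros near the diagonal, which improves spatial locality during sparse aggregation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Random sparse adjacency standing in for a graph dataset.
adj = sp.random(10_000, 10_000, density=1e-3, format="csr", random_state=0)
adj = (adj + adj.T).tocsr()                 # symmetrize

perm = reverse_cuthill_mckee(adj)           # bandwidth-reducing vertex order
adj_reordered = adj[perm][:, perm]

# The reordered matrix clusters non-zeros near the diagonal, so feature rows
# fetched during aggregation (A_hat @ H @ W) exhibit better cache behavior.
rows, cols = adj.nonzero()
r2, c2 = adj_reordered.nonzero()
print("bandwidth before/after:",
      np.abs(rows - cols).max(), np.abs(r2 - c2).max())
```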

    • Many-core Optimization of Level 1 and Level 2 BLAS Routines on SW26010-Pro

      2023, 34(9):4421-4436. DOI: 10.13328/j.cnki.jos.006527


      Abstract:BLAS (basic linear algebra subprograms) is an important module of high-performance extended math libraries and is widely used in scientific and engineering computing. Level 1 BLAS provides vector-vector operations, and Level 2 BLAS provides matrix-vector operations. This study designs and implements highly optimized Level 1 and Level 2 BLAS routines for SW26010-Pro, a Chinese many-core processor. A reduction strategy among CPEs is designed based on the RMA communication mechanism, which improves the reduction efficiency of many Level 1 and Level 2 BLAS routines. For TRSV, TPSV, and other routines with data dependencies, a series of efficient parallel algorithms is proposed. The algorithms maintain data dependencies through point-to-point synchronization and adopt an efficient task mapping mechanism suitable for triangular matrices, which effectively reduces the number of point-to-point synchronizations and improves execution efficiency. Adaptive optimization, vector compression, data reuse, and other techniques further improve the memory access bandwidth utilization of the Level 1 and Level 2 BLAS routines. The experimental results show that the memory access bandwidth utilization of the Level 1 BLAS routines reaches up to 95%, with an average of more than 90%, and that of the Level 2 BLAS routines reaches up to 98%, with an average of more than 80%. Compared with the widely used open-source linear algebra library GotoBLAS, the proposed implementations of Level 1 and Level 2 BLAS routines achieve average speedups of 18.78x and 25.96x, respectively. With the optimized Level 1 and Level 2 BLAS routines, LQ decomposition, QR decomposition, and eigenvalue problems achieve an average speedup of 10.99x.

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: CNY 70
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing. Postal code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
Technical support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063