• Volume 35, Issue 7, 2024 Table of Contents
    • Preface

      2024, 35(7):3069-3070. DOI: 10.13328/j.cnki.jos.007114


    • Empirical Study on Data Leakage Problem in Neural Program Repair

      2024, 35(7):3071-3092. DOI: 10.13328/j.cnki.jos.007110

      Abstract: Repairing software defects is an inevitable and significant problem in software engineering, and automated program repair (APR) techniques aim to alleviate it by repairing defective programs automatically, accurately, and efficiently. In recent years, with the rapid development of deep learning, a class of methods has emerged that uses deep neural networks to automatically capture the relationship between defective programs and their patches, known as neural program repair (NPR). Measured by the number of defects correctly repaired on benchmarks, NPR tools significantly outperform non-deep-learning APR tools. However, a recent study found that the performance improvement of NPR systems may be due to the presence of test data in the training data, i.e., data leakage. Motivated by this, to further investigate the causes and effects of data leakage in NPR systems and to evaluate existing systems more fairly, this study: (1) systematically categorizes and summarizes existing NPR systems, defines data leakage for NPR systems based on this classification, and designs a data leakage detection method for each category of system; (2) conducts large-scale testing of existing models with these detection methods and investigates how data leakage affects the realism of evaluation results and the models themselves; (3) analyzes the collection and filtering strategies of existing NPR datasets, improves and supplements them, constructs a clean large-scale NPR training dataset from the improved strategies and existing popular datasets, and verifies the effectiveness of this dataset in preventing data leakage. The experiments show that all ten NPR systems studied suffer from data leakage on the evaluation dataset; among them, the NPR system RewardRepair has the most serious problem, with 24 data leaks on the Defects4J (v1.2.0) benchmark, a leakage ratio as high as 53.33%. In addition, data leakage affects robustness: all five NPR systems investigated on this point show reduced robustness due to data leakage. Data leakage is thus a very common problem that can lead to unfair performance evaluation results and affect the robustness of NPR systems on benchmarks. When training NPR models, researchers should avoid data leakage as much as possible and take its impact on performance evaluation into account, so as to evaluate NPR systems as fairly as possible.
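
      A minimal Python sketch of the kind of overlap check that underlies leakage detection for learned repair models: a benchmark (buggy, fixed) pair counts as leaked if its normalized fingerprint also appears in the training data. The normalization and data interfaces here are illustrative assumptions, not the paper's implementation.

          import hashlib
          import re

          def normalize(code):
              """Strip comments and collapse whitespace so trivially
              reformatted copies of the same patch still collide."""
              code = re.sub(r"//.*?$|/\*.*?\*/", "", code, flags=re.S | re.M)
              return " ".join(code.split())

          def fingerprint(buggy, fixed):
              """Hash a normalized (buggy, fixed) pair."""
              key = normalize(buggy) + "=>" + normalize(fixed)
              return hashlib.sha256(key.encode()).hexdigest()

          def detect_leakage(train_pairs, benchmark_pairs):
              """Return benchmark pairs whose fingerprint occurs in training data."""
              seen = {fingerprint(b, f) for b, f in train_pairs}
              return [(b, f) for b, f in benchmark_pairs if fingerprint(b, f) in seen]

          # A leakage ratio like the one reported above would then be
          # len(detect_leakage(train, bench)) / len(bench).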

    • Comparison Research on Rule-based and Learning-based Mutation Techniques

      2024, 35(7):3093-3114. DOI: 10.13328/j.cnki.jos.007113

      Abstract: Mutation testing is an effective software testing technique. It helps improve the defect detection capability of an existing test suite by generating mutants that simulate software defects, and the quality of those mutants has a significant impact on the effectiveness of mutation testing. Traditional mutation testing typically employs manually designed, syntactic-rule-based mutation operators to generate mutants and has achieved some academic success. In recent years, many studies have begun to incorporate deep learning techniques, generating mutants by learning from the historical code of open-source projects; this new approach has achieved preliminary progress in mutant generation. A comprehensive comparison of the two mutation techniques, i.e., rule-based and learning-based, which have different mechanisms but share the goal of improving the defect detection capability of the test suite, is crucial for mutation testing and its downstream tasks. To address this, the study designs and implements an empirical comparison of rule-based and learning-based mutation techniques, aiming to understand how techniques with different mechanisms perform on the mutation testing task and how the generated mutants differ in program semantics. Specifically, the study uses the Defects4J v1.2.0 dataset to compare the syntactic-rule-based mutation techniques represented by MAJOR and PIT with the deep-learning-based mutation techniques represented by DeepMutation, μBERT, and LEAM. The experimental results show that both rule-based and learning-based mutation techniques can effectively support mutation testing practice, but MAJOR has the best testing performance and is able to detect 85.4% of real defects. In terms of semantic representation, MAJOR is the strongest: the test suite it constructs kills more than 95% of the mutants generated by the other mutation techniques. In terms of defect representation, each type of technique has its own unique strengths.
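
      For reference, the metric on which such comparisons rest can be sketched in a few lines of Python: a mutant is killed when some test distinguishes it from the original program, and the mutation score is the killed fraction. The test-runner interface below is a hypothetical stand-in.

          def failing_tests(program, tests):
              """Set of tests that fail on `program` (hypothetical runner interface)."""
              return {t.name for t in tests if not t.passes(program)}

          def mutation_score(original, mutants, tests):
              """A mutant is killed when its failing-test set differs from the
              original program's; the score is the fraction of killed mutants."""
              baseline = failing_tests(original, tests)
              killed = sum(1 for m in mutants if failing_tests(m, tests) != baseline)
              return killed / len(mutants) if mutants else 0.0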

    • AmazeMap: Microservices Fault Localization Method Based on Multi-level Impact Graph

      2024, 35(7):3115-3140. DOI: 10.13328/j.cnki.jos.007104

      Abstract: Because microservice systems contain large numbers of complex service dependencies and componentized modules, a failure in one service often causes one or more related services to fail, making it increasingly difficult to locate the cause of a failure. How to detect system faults effectively and locate their root causes quickly and accurately is therefore a focus of current microservices research. Existing research generally builds a failure relationship model by analyzing the relationships among failures, services, and metrics, but it suffers from insufficient use of operation and maintenance data, incomplete modeling of fault information, and coarse-grained root cause localization. This study therefore proposes AmazeMap, which comprises a multi-level fault impact graph modeling method and a microservice fault localization method designed on top of that graph. Specifically, the modeling method comprehensively captures fault information by mining the temporal metric data and trace data collected while the system is running and by considering the interrelationships between different levels; the localization method narrows the scope of fault impact, identifies root causes at the level of service instances and metrics, and finally outputs the most probable root causes and metric sequences. Based on an open-source benchmark microservice system and the AIOps contest dataset, this study designs experiments to validate AmazeMap and compares it with existing methods. The results confirm AmazeMap's effectiveness, accuracy, and efficiency.

    • Clustering Analysis-driven Seed Scheduling Approach for Fuzzing

      2024, 35(7):3141-3161. DOI: 10.13328/j.cnki.jos.007105

      Abstract: Fuzz testing is a widely used automated software testing technique whose primary goal is to explore as many code regions of the program under test as possible, thereby achieving higher coverage and detecting more bugs or errors. Most existing fuzzing methods schedule seeds based on their historical mutation data, which is simple to implement but ignores the distribution of the program space explored by the seeds; as a result, testing may become trapped in a single region of the program under test and waste testing resources. This study proposes Cluzz, a fuzzing approach that drives seed scheduling with clustering analysis. First, Cluzz analyzes the differences between seeds in the feature space based on the distribution of their execution path coverage and uses cluster analysis to partition the distribution of seed executions in the program space. Cluzz then prioritizes seeds according to the path coverage patterns of the different seed clusters and the clustering results, exploring rare code regions and favoring seeds with higher evaluation scores. Energy is allocated to seeds according to their evaluation scores, and the interesting inputs obtained from mutation are retained and categorized to update the cluster information. Cluzz re-evaluates seeds based on the updated clusters to keep seed assessments valid throughout testing, thereby exploring more unknown code regions in limited time and improving coverage of the program under test. Finally, Cluzz is implemented on top of three mainstream fuzzers and evaluated extensively on eight popular real-world programs. The results show that Cluzz detects on average 1.7 times as many unique crashes as a regular fuzzer and outperforms the baseline fuzzers by an average of 22.15% in the number of new edges found. Overall, compared with existing seed scheduling methods, Cluzz's comprehensive performance is better than that of the benchmark fuzzers.
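
      A minimal sketch, assuming scikit-learn, of the clustering step such seed scheduling builds on: seeds are grouped by their coverage bitmaps, and seeds in rare (small) clusters are ranked first. The scoring is a simplified stand-in for Cluzz's evaluation score, not its actual formula.

          import numpy as np
          from sklearn.cluster import KMeans

          def schedule_seeds(coverage_bitmaps, seed_ids, k=8):
              """Cluster seeds by edge-coverage bitmaps (one row per seed) and
              rank seeds from under-explored program regions first."""
              X = np.asarray(coverage_bitmaps, dtype=float)
              labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
              sizes = np.bincount(labels, minlength=k)
              # Rarer cluster -> higher priority score.
              score = {sid: 1.0 / sizes[lab] for sid, lab in zip(seed_ids, labels)}
              return sorted(seed_ids, key=lambda s: score[s], reverse=True)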

    • GUI Fuzzing Framework for Mobile Apps Based on Multi-modal Representation

      2024, 35(7):3162-3179. DOI: 10.13328/j.cnki.jos.007106

      Abstract: GUI fuzzing plays a crucial role in enhancing the reliability and compatibility of mobile apps. However, most existing GUI fuzzing methods are inefficient, mainly because they are coarse-grained and rely solely on single-modal features to understand GUI pages holistically; this excessive abstraction of app states neglects many details and leads to an insufficient understanding of GUI states and widgets. To address this issue, GUIFuzzer, a GUI fuzzing framework for mobile apps based on multi-modal representation, is proposed. The framework leverages multi-modal features, including visual features, layout context features, and fine-grained meta-attribute features, to jointly infer the semantics of GUI widgets. It then trains a multi-level reward-driven deep reinforcement learning model to optimize the GUI event selection strategy, improving the efficiency of fuzz testing. The framework is evaluated on a large number of real apps. Experimental results show that GUIFuzzer significantly improves fuzzing coverage compared with existing competitive baselines. A case study on customized search for specific targets, namely sensitive API triggering, further demonstrates the practicality of GUIFuzzer.

    • APP Software Defect Tracking and Analysis Method Oriented to Version Evolution

      2024, 35(7):3180-3203. DOI: 10.13328/j.cnki.jos.007107

      Abstract: The mobile application (APP) software market is evolving ever faster. Effective analysis of software defects can help developers understand and repair them in time. However, existing research analyzes too narrow a range of artifacts, yielding isolated, fragmented defect information of poor quality; moreover, because data verification and version mismatch issues are insufficiently considered, the analysis results contain errors that lead to ineffective software evolution. To provide more effective defect analysis results, an APP software defect tracking and analysis method oriented to version evolution (ASD-TAOVE) is proposed. First, the content of APP software defects is extracted from multi-source, heterogeneous APP software data, and the causal relationships of defect events are discovered. Then, a verification method for APP software defect content is designed: it computes a defect suspiciousness formula based on information entropy combined with text and structural features, both for verification and for constructing a heterogeneous graph of APP software defect content. To account for the impact of version evolution, a defect tracking analysis method is designed to analyze how defects evolve across versions; the evolution relationships can be transformed into defect/evolution meta-paths useful for defect analysis. Finally, this study designs a deep-learning-based heterogeneous information network to complete APP software defect analysis. The experimental results for four research questions (RQs) confirm the effectiveness of ASD-TAOVE's defect content verification and tracking analysis in version-oriented evolution, with defect identification accuracy improved by about 9.9% and 5% respectively (7.5% on average). Compared with baseline methods, ASD-TAOVE can analyze more APP software data and provide effective defect information.

    • Fine-grained JVM Test Program Reduction Method Based on Program Constraints

      2024, 35(7):3204-3226. DOI: 10.13328/j.cnki.jos.007108

      Abstract: To test the Java virtual machine (JVM), developers often need to design test programs manually or use test generation tools to create complex ones that expose potential JVM defects. However, complex test programs impose high costs on developers when locating and fixing defects. Test program reduction techniques aim to minimize the amount of code unrelated to defect detection while preserving the program's defect detection ability. Existing research based on Delta debugging has achieved sound reduction results on C programs and XML inputs, but in the JVM testing scenario, Java test programs with complex syntactic and semantic dependencies still suffer from coarse reduction granularity and poor reduction effects, leaving high comprehension costs even after reduction. Therefore, this study proposes JavaPruner, a fine-grained test program reduction method based on program constraints, for Java test programs with complex dependencies. JavaPruner first designs a fine-grained code measurement method at the statement block level and then introduces dependency constraints between statement blocks into Delta debugging to reduce the test program. Targeting Java bytecode test programs, this work selects 50 test programs with complex dependencies from existing JVM test generation tools as the benchmark dataset to evaluate JavaPruner. Experimental results show that JavaPruner can effectively reduce redundant code in Java bytecode test programs: compared with existing methods, its reduction capability improves by 37.7% on average across all benchmarks, and it can shrink Java bytecode test programs to as little as 1.09% of their original size while preserving program validity and defect detection ability, effectively reducing the cost of analyzing and understanding test programs.
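
      The reduction loop can be pictured as classic delta debugging over statement blocks, with each candidate closed under a dependency relation so it stays well-formed. Below is a minimal Python sketch; the dependency map and failure oracle are hypothetical inputs, not JavaPruner's actual interfaces.

          def closure(kept_blocks, deps):
              """Add every block that the kept blocks depend on
              (deps: block -> set of prerequisite blocks)."""
              kept = set(kept_blocks)
              changed = True
              while changed:
                  changed = False
                  for b in list(kept):
                      missing = deps.get(b, set()) - kept
                      if missing:
                          kept |= missing
                          changed = True
              return kept

          def reduce_blocks(blocks, deps, still_fails):
              """ddmin-style loop: try dropping a chunk, restore dependencies,
              and keep the candidate only if it still triggers the defect."""
              n = 2
              while len(blocks) >= 2:
                  chunk = max(len(blocks) // n, 1)
                  for i in range(0, len(blocks), chunk):
                      kept = closure(blocks[:i] + blocks[i + chunk:], deps)
                      candidate = [b for b in blocks if b in kept]  # original order
                      if len(candidate) < len(blocks) and still_fails(candidate):
                          blocks, n = candidate, 2
                          break
                  else:
                      if n >= len(blocks):
                          break
                      n = min(n * 2, len(blocks))
              return blocks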

    • Defect Category Prediction Method Based on Multi-source Domain Adaptation

      2024, 35(7):3227-3244. DOI: 10.13328/j.cnki.jos.007109

      Abstract: As software systems rapidly grow in scale and complexity, defects inevitably exist within them. In recent years, defect prediction techniques based on deep learning have become a prominent research topic in software engineering: they can identify potential defects without executing the code and have attracted significant attention from both industry and academia. Nevertheless, existing approaches mostly determine only whether method-level code is defective and cannot precisely classify specific defect categories, which undermines developers' efficiency in locating and rectifying defects. Furthermore, in practical software development, new projects often lack sufficient defect data to train high-accuracy deep learning models, and models trained on historical data from existing projects frequently generalize poorly to new projects. Hence, this study first reformulates the traditional binary defect prediction task as a multi-label classification problem, using the defect categories described in the common weakness enumeration (CWE) as fine-grained predictive labels. To enhance performance in cross-project scenarios, it then proposes a multi-source domain adaptation framework that integrates adversarial training and an attention mechanism. Specifically, the framework employs adversarial training to mitigate domain (i.e., software project) discrepancies and uses the resulting domain-invariant features to capture feature correlations between each source domain and the target domain. At the same time, it employs a weighted maximum mean discrepancy as an attention mechanism to minimize the representation distance between source and target domain features, helping the model learn more domain-independent features. Experiments on a dataset of eight real-world open-source projects constructed in this study show that the proposed approach achieves significant performance improvements over state-of-the-art baselines.
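
      The discrepancy term named in the abstract has a compact form. A minimal PyTorch sketch follows, with an RBF kernel and externally supplied per-source weights standing in for the paper's learned attention weights.

          import torch

          def rbf_kernel(x, y, sigma=1.0):
              """Gaussian kernel matrix between two batches of features."""
              return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))

          def mmd2(source, target, sigma=1.0):
              """Squared maximum mean discrepancy (biased V-statistic estimate)."""
              return (rbf_kernel(source, source, sigma).mean()
                      + rbf_kernel(target, target, sigma).mean()
                      - 2 * rbf_kernel(source, target, sigma).mean())

          def weighted_mmd_loss(source_feats, target_feats, weights):
              """Each source domain's discrepancy to the target, scaled by its weight."""
              return sum(w * mmd2(s, target_feats)
                         for w, s in zip(weights, source_feats))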

    • Software Bug Location Method Combining Information Retrieval and Deep Model Features

      2024, 35(7):3245-3264. DOI: 10.13328/j.cnki.jos.007111

      Abstract: Automated bug localization methods use bug reports to accelerate the process by which programmers locate defects in complex software systems. Early researchers treated bug localization as a retrieval task, constructing defect features by analyzing bug reports and related code and applying information retrieval techniques. With the development of deep learning, bug localization methods using deep model features have also achieved good results. Nevertheless, because deep model training is costly in time and resources, existing deep-learning-based studies use an experimental search space that does not match real-world scenarios: during testing they do not treat all files in the project as the search space but only search code related to labeled defects, as in the DNNLOC, DreamLoc, and DeepLocator methods. This is inconsistent with the actual setting in which programmers localize real bugs. To simulate the real-world scenario of bug localization, this study proposes TosLoc, a method that combines information retrieval and deep model features. TosLoc first retrieves over all source code of a real project via information retrieval, ensuring that existing features are fully utilized, and then uses deep models to extract the semantics of source code and bug reports; through this two-stage retrieval, it rapidly localizes bugs across all code of a single project. Experiments on four popular Java projects show that TosLoc outperforms existing benchmark methods in retrieval speed and accuracy. Compared with the best-performing baseline, DreamLoc, TosLoc improves MRR by 2.5% and MAP by 6.0% on average while requiring only 35% of DreamLoc's retrieval time.
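
      A minimal sketch of a two-stage pipeline in this spirit, assuming scikit-learn and a hypothetical `encode` function for the deep model: lexical retrieval over all project files first, then semantic re-ranking of the top candidates. This illustrates the idea, not TosLoc's implementation.

          import numpy as np
          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.metrics.pairwise import cosine_similarity

          def two_stage_locate(report, file_texts, encode, k=100):
              """Stage 1: TF-IDF retrieval over *all* source files keeps the
              search space realistic. Stage 2: re-rank the top-k candidates
              with vectors from a deep encoder (`encode` returns a 1-D array)."""
              vec = TfidfVectorizer().fit(file_texts + [report])
              lexical = cosine_similarity(vec.transform([report]),
                                          vec.transform(file_texts))[0]
              top_k = np.argsort(-lexical)[:k]
              q = encode(report)
              deep = {i: float(np.dot(q, encode(file_texts[i]))) for i in top_k}
              return sorted(top_k, key=deep.get, reverse=True)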

    • Survey on Code Review Automation Research

      2024, 35(7):3265-3290. DOI: 10.13328/j.cnki.jos.007112

      Abstract: Collaborative development has become the mainstream of large-scale software development, and code review is one of its most important workflows. However, manual code review suffers from problems such as reviewer mismatch and reviewers' knowledge limitations, so review quality and efficiency can be poor, and post-review code repair costs developers additional time and effort. Improving and automating the code review process is therefore urgently needed. This study provides a systematic survey of research on code review automation, focusing on four main directions: reviewer recommendation, automated code quality estimation, review comment generation, and automated code refinement. 148 high-quality publications on the topic are collected, and a technical classification and analysis of the field are carried out. The evaluation methods for each task are then briefly summarized, and benchmarks and open-source tools are listed. Finally, key challenges and insights for future research are proposed.

    • Identification of Memory Copy Function via Hybrid Static and Dynamic Analysis

      2024, 35(7):3291-3313. DOI: 10.13328/j.cnki.jos.006919

      Abstract: Memory error vulnerabilities (e.g., buffer overflows) are often caused by improper use of memory copy functions, so identifying memory copy functions in binary programs helps find such vulnerabilities. However, current identification methods mainly rely on static analysis to extract functions' features, control flow, data flow, and other information, and they exhibit high false positive and false negative rates. This study proposes CPSeeker, a technique based on hybrid static and dynamic analysis, to improve the identification of memory copy functions. CPSeeker combines the advantages of static and dynamic analysis: it collects the global static information and local execution information of functions in stages and fuses the extracted information to identify memory copy functions in binary programs. Experimental results show that, despite higher runtime cost, CPSeeker outperforms the state-of-the-art BootStomp, SaTC, CPYFinder, and Gemini in identifying memory copy functions, reaching an F1 score of 0.96. Furthermore, CPSeeker is unaffected by the build environment (compiler version, compiler type, and optimization level) and performs better in tests on real firmware.

    • SMTLOC: Bug Localization for SMT Solver Based on Multi-source Spectrum

      2024, 35(7):3314-3331. DOI: 10.13328/j.cnki.jos.006922

      Abstract: The SMT solver is an important piece of system software; bugs in it may cause failures in the software that relies on it and even lead to security incidents. Fixing SMT solver bugs is time-consuming, however, because developers must spend considerable effort understanding and finding their root causes. Although many studies on software bug localization exist, there is no systematic work on automatically locating bugs in SMT solvers. Therefore, this study proposes SMTLOC, a bug localization method for SMT solvers based on multi-source spectra. First, for a given SMT solver bug, SMTLOC uses an enumeration-based algorithm to mutate the bug-triggering formula into a set of witness formulas that do not trigger the bug but have execution traces similar to that of the triggering formula. Then, based on the execution traces of the witness formulas and the source code of the SMT solver, SMTLOC combines a coverage spectrum with a historical spectrum to compute the suspiciousness of files, thus locating the files that contain the bug. To evaluate SMTLOC, 60 SMT solver bugs are collected. Experimental results show that SMTLOC is superior to traditional spectrum-based bug localization: it locates 46.67% of the bugs within the top-5 ranked files, an improvement of 133.33%.
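
      The abstract does not give the suspiciousness formula, so as a hedged illustration, here is the classic Ochiai coverage-spectrum score at file granularity with a simple multiplicative historical signal; SMTLOC's actual combination may differ.

          import math

          def ochiai(failed_cov, passed_cov, total_failed):
              """failed_cov / passed_cov: file -> number of failing / passing
              runs that cover it. Higher score = more suspicious."""
              scores = {}
              for f, ef in failed_cov.items():
                  ep = passed_cov.get(f, 0)
                  denom = math.sqrt(total_failed * (ef + ep))
                  scores[f] = ef / denom if denom else 0.0
              return scores

          def rank_files(failed_cov, passed_cov, total_failed, history):
              """Blend the coverage spectrum with a historical spectrum
              (e.g. how often each file was fixed before)."""
              cov = ochiai(failed_cov, passed_cov, total_failed)
              return sorted(cov, key=lambda f: cov[f] * (1 + history.get(f, 0)),
                            reverse=True)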

    • Dynamic Mitigation Solution Based on eBPF Against Kernel Heap Vulnerabilities

      2024, 35(7):3332-3354. DOI: 10.13328/j.cnki.jos.006923

      Abstract: Kernel heap vulnerabilities are currently among the main threats to operating system security. By triggering such a vulnerability, a user-space attacker can leak or modify sensitive kernel information, disrupt kernel control flow, and even gain root privilege. However, as vulnerabilities grow rapidly in number and complexity, a long time often passes between a vulnerability first being reported and a developer issuing a patch, and the kernel mitigation mechanisms currently deployed are routinely bypassed. Therefore, this study proposes an eBPF-based dynamic mitigation framework for kernel heap vulnerabilities that reduces kernel security risks during the patching time window. The framework adopts data object space randomization, assigning random addresses to the data objects involved in vulnerability reports at each allocation, and takes full advantage of the dynamic and secure features of eBPF to inject the space-randomized objects into a running kernel, so attackers cannot place attack payloads accurately and the heap vulnerabilities become almost unexploitable. The study evaluates 40 real kernel heap vulnerabilities and collects and tests 12 attacks that bypass existing mitigation mechanisms, verifying that the framework provides sufficient security. Performance tests show that even under severe conditions, the four types of protected data objects cause only about 1% performance loss and negligible memory overhead, with almost no additional loss as the number of protected objects grows. Compared with related work, the proposed mechanism applies more widely, offers stronger security, and does not depend on vulnerability patches issued by security experts; it can generate mitigation procedures directly from vulnerability reports and thus has broad application prospects.

    • Mutation-based Generation Algorithm of Negative Test Strings from Regular Expressions

      2024, 35(7):3355-3376. DOI: 10.13328/j.cnki.jos.006925

      Abstract: Regular expressions are widely used across computer science. However, owing to their complex syntax and heavy use of meta-characters, regular expressions are error-prone to define and use. Testing is a practical and effective way to ensure their semantic correctness: the most common method is to generate a set of strings from the tested expression and check whether they belong to the intended language. Most existing test data generation focuses only on positive strings, yet empirical studies show that a majority of errors in actual development manifest as the defined language being smaller than the intended one, and such errors can only be detected by negative strings. This study investigates mutation-based generation of negative strings from regular expressions. It first obtains a set of mutants by injecting defects into the tested expression and then selects, from the complement of the language defined by the tested expression, a negative string that reveals the error simulated by each mutant. To simulate complex defects and avoid cases where no negative string can be obtained because a mutant specializes the language, a second-order mutation mechanism is adopted; meanwhile, optimization techniques such as redundant mutant elimination and mutation operator selection reduce the number of mutants and thus control the size of the generated test set. The experimental results show that, compared with existing tools, the proposed algorithm generates negative test sets of moderate size with strong error detection ability.
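
      A toy Python illustration of the core loop: mutate the tested expression, then search for a string that the mutant accepts but the tested expression rejects, i.e. a negative string from the complement of the defined language. The mutation operators and brute-force search here are deliberately naive.

          import itertools
          import re

          def first_order_mutants(pattern):
              """Tiny illustrative operators: delete one character, or swap
              '+' and '*'. Real operators are syntax-aware."""
              for i, ch in enumerate(pattern):
                  yield pattern[:i] + pattern[i + 1:]
                  if ch in "+*":
                      swapped = "*" if ch == "+" else "+"
                      yield pattern[:i] + swapped + pattern[i + 1:]

          def negative_string(tested, mutant, alphabet="ab", max_len=4):
              """Shortest string accepted by the mutant but rejected by the
              tested expression, or None if none exists up to max_len."""
              for n in range(max_len + 1):
                  for chars in itertools.product(alphabet, repeat=n):
                      s = "".join(chars)
                      try:
                          if re.fullmatch(mutant, s) and not re.fullmatch(tested, s):
                              return s
                      except re.error:  # mutation produced an invalid pattern
                          return None
              return None

          # Example: for tested = "ab+", the mutant "ab*" yields the negative
          # string "a", which the tested expression must reject.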

    • Dynamic Multitask Learning Approach for Contract Information Extraction

      2024, 35(7):3377-3391. DOI: 10.13328/j.cnki.jos.006931

      Abstract: Accurately extracting two types of information in contract texts, elements and clauses, can effectively improve contract review efficiency and provide facilitation services for all trading parties. However, current contract information extraction methods generally train separate single-task models to extract elements and clauses; they do not dig deep into the characteristics of contract texts and ignore the relevance between the tasks. This study therefore employs a deep neural network to study the correlation between element extraction and clause extraction and proposes a multitask learning method. First, a basic multitask learning model is built that combines the two tasks for contract information extraction. The model is then optimized with an attention mechanism to further exploit the correlation, yielding an attention-based dynamic multitask learning model. Finally, building on the above two methods, a dynamic multitask learning model with lexical knowledge is proposed for the complex semantic environment of contract texts. The experimental results show that the method fully captures the shared features among tasks and yields better extraction results than single-task models; it can handle nested entities across elements and clauses and realizes joint extraction of contract elements and clauses. Moreover, to verify the robustness of the proposed method, experiments are conducted on public datasets from various fields, and the results show that it is superior to baseline methods.

    • Chinese BERT Attack Method Based on Masked Language Model

      2024, 35(7):3392-3409. DOI: 10.13328/j.cnki.jos.006932

      Abstract: Adversarial texts are malicious samples that cause deep learning classifiers to err. The adversary deceives the target model by adding subtle, humanly imperceptible perturbations to the original text. Studying adversarial text generation helps evaluate the robustness of deep neural networks and contributes to subsequent robustness improvements. Among current adversarial text generation methods designed for Chinese, few target the robust Chinese BERT model. For Chinese text classification tasks, this study proposes an attack method against Chinese BERT called Chinese BERT Tricker. The method adopts a character-level word importance scoring scheme to locate important Chinese characters, and designs a word-level Chinese perturbation method based on the masked language model, with two types of strategies, to replace important words. Experimental results show that on text classification tasks, the proposed method significantly reduces the classification accuracy of the Chinese BERT model to below 40% on two real datasets and outperforms other baselines on multiple attack performance measures.
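
      A minimal sketch of the masked-language-model replacement step, assuming the Hugging Face transformers library and the publicly available bert-base-chinese checkpoint; the scoring loop that picks the final replacement is elided, and this is not the paper's exact procedure.

          from transformers import pipeline

          # Masked LM used to propose fluent, in-context substitutes.
          unmasker = pipeline("fill-mask", model="bert-base-chinese")

          def candidate_perturbations(text, pos, top_k=5):
              """Mask the character at `pos` and return the model's top
              substitute tokens for that position."""
              masked = text[:pos] + unmasker.tokenizer.mask_token + text[pos + 1:]
              return [r["token_str"] for r in unmasker(masked, top_k=top_k)]

          # An attack would try each candidate and keep the one that most
          # reduces the victim classifier's confidence in the true label.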

    • Label Distribution Learning Method Based on Deep Forest and Heterogeneous Ensemble

      2024, 35(7):3410-3427. DOI: 10.13328/j.cnki.jos.006936

      Abstract: As a new learning paradigm for the label ambiguity problem, label distribution learning (LDL) has received much attention in recent years. To further improve its prediction performance, this study proposes an LDL method based on deep forest and heterogeneous ensembles (LDLDF), which uses the cascade structure of deep forest to mimic deep learning models with multi-layer processing structures and combines multiple heterogeneous classifiers in each cascade layer to increase ensemble diversity. Compared with other existing LDL methods, LDLDF processes information layer by layer and learns better feature representations that mine the rich semantic information in data, giving it stronger representation learning and generalization ability. In addition, considering the degradation problem of deep models, LDLDF adopts a layer-wise feature reuse mechanism to reduce training error, effectively utilizing the prediction ability of every layer in the deep model. Extensive experimental results show that LDLDF is superior to other methods.

    • Recency-bias-avoiding Partitioned Incremental Learning Based on Self-learning Mask

      2024, 35(7):3428-3453. DOI: 10.13328/j.cnki.jos.006948

      Abstract: Forgetting is the biggest obstacle to incremental learning in artificial neural networks and is thus called "catastrophic forgetting". Humans, in contrast, can continuously acquire new knowledge while retaining most frequently used old knowledge; this capacity for continuous "incremental learning" without extensive forgetting is related to the partitioned learning structure and memory replay ability of the human brain. To simulate this structure and ability, the study proposes "recency-bias-avoiding partitioned incremental learning based on self-learning mask (SLM)", or ASPIL for short. ASPIL alternately iterates two stages, regional isolation and regional integration, to accomplish continuous incremental learning. Specifically, the study proposes Bayesian network (BN)-based sparse regional isolation to separate each new learning process from existing knowledge and thereby avoid interference with it. For regional integration, SLM and dual-branch fusion (GBF) methods are proposed: SLM accurately extracts new knowledge and improves the network's adaptability to it, while GBF integrates old and new knowledge toward unified, high-precision cognition. During training, a margin-loss regularization term is introduced to avoid recency bias, further balancing old knowledge against new and preventing bias toward the new. To evaluate the proposed method, systematic ablation experiments are performed on the standard incremental learning datasets CIFAR-100 and miniImageNet, and the method is compared with a series of well-known state-of-the-art methods. The results show that the method improves the memory ability of artificial neural networks and outperforms the latest well-known methods by more than 5.27% in average recognition rate.

    • Federated Learning Watermark Based on Model Backdoor

      2024, 35(7):3454-3468. DOI: 10.13328/j.cnki.jos.006914

      Abstract: Training a high-precision federated learning model consumes a large amount of users' local resources, and a participating user can profit illegally by selling the jointly trained model without the others' permission. To protect the property rights of federated learning models, this study proposes a federated learning watermark based on backdoors (FLWB), exploiting the fact that deep learning backdoors maintain main-task accuracy while causing misclassification only on a small trigger set. FLWB lets participating users embed their own private watermarks in the local model; through model aggregation in the cloud, these private backdoor watermarks are mapped into the global model as its watermark. A stepwise training method is then designed to strengthen the expression of the private backdoor watermarks in the global model, so that FLWB accommodates the users' private watermarks without degrading the global model's accuracy. Theoretical analysis proves the security of FLWB, and experiments verify that with stepwise training the global model accommodates the participating users' private watermarks at a main-task accuracy loss of only 1%. Finally, FLWB is tested against model compression and fine-tuning attacks: more than 80% of the watermarks are retained when the model is compressed to 30%, and more than 90% are retained under four different fine-tuning attacks, indicating the excellent robustness of FLWB.
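
      A minimal PyTorch sketch of how a private backdoor watermark can be embedded during a user's local round and later verified on the aggregated model; the loss weighting and interfaces are illustrative, not FLWB's exact stepwise training method.

          import torch
          import torch.nn.functional as F

          def local_round_with_watermark(model, opt, task_loader, trigger_set, lam=0.5):
              """Ordinary task loss plus a loss on the user's private trigger
              set, so the backdoor watermark enters the local update."""
              tx, ty = trigger_set          # trigger inputs and backdoor labels
              model.train()
              for x, y in task_loader:
                  opt.zero_grad()
                  loss = (F.cross_entropy(model(x), y)
                          + lam * F.cross_entropy(model(tx), ty))
                  loss.backward()
                  opt.step()
              return model.state_dict()     # sent to the server for aggregation

          def watermark_present(model, trigger_set, threshold=0.9):
              """Ownership check on the (global) model: most trigger samples
              must be classified with their backdoor labels."""
              tx, ty = trigger_set
              with torch.no_grad():
                  acc = (model(tx).argmax(dim=1) == ty).float().mean().item()
              return acc >= threshold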

    • Anonymous Credential Protocol Based on SM2 Digital Signature

      2024, 35(7):3469-3481. DOI: 10.13328/j.cnki.jos.006929

      Abstract: As a privacy-preserving digital identity authentication technology, anonymous credentials authenticate the validity of users' digital identities while protecting identity privacy, and they are widely applied in anonymous authentication, anonymous tokens, and decentralized digital identity systems. Existing anonymous credentials usually follow the commitment-signature-proof paradigm, which requires the adopted signature scheme to be re-randomizable, as with CL signatures, PS signatures, and structure-preserving signatures (SPS). In practice, ECDSA, Schnorr, and SM2 are widely used for digital identity authentication but lack protection of identity privacy, so constructing anonymous credentials compatible with such signatures and protecting identity privacy during authentication is of practical significance. This study explores anonymous credentials based on the SM2 digital signature. A Pedersen commitment is used to commit the user's attributes in the registration phase; meanwhile, following the structural characteristic of SM2 that the signed message is H(m), the equivalence between the Pedersen-committed message and the hash-committed message is proven, and ZKB++ is employed to prove the equivalence of the algebraic and non-algebraic statements. The commitment is transformed to achieve this cross-domain proof, and the user's credential is issued under the SM2 digital signature. In the showing phase, zero-knowledge proofs establish possession of an SM2 signature, ensuring the anonymity of the credential. The study presents the construction of an anonymous credential protocol based on SM2 digital signature and proves its security. Finally, it verifies the protocol's effectiveness and feasibility by analyzing its computational complexity and testing the execution efficiency of the algorithms.
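
      The registration-phase commitment named above is a standard Pedersen commitment. A toy Python sketch over a small safe-prime group follows; a real deployment would work in the SM2 elliptic curve group, and h's discrete log would be unknown to everyone.

          import secrets

          P = 2039              # safe prime: P = 2*Q + 1 (toy size only!)
          Q = 1019              # prime order of the quadratic-residue subgroup
          G = 4                 # generator of that subgroup
          H = pow(G, 17, P)     # second generator; dlog known here only in the toy

          def commit(m, r=None):
              """Pedersen commitment C = g^m * h^r mod p: hiding (random r)
              and binding (assuming log_g h is unknown)."""
              r = secrets.randbelow(Q) if r is None else r
              return (pow(G, m % Q, P) * pow(H, r, P)) % P, r

          def open_commitment(c, m, r):
              """Verifier recomputes the commitment from the opened (m, r)."""
              return c == (pow(G, m % Q, P) * pow(H, r % Q, P)) % P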

    • Construction Method of Complete Cryptographic Reverse Firewall for IBE

      2024, 35(7):3482-3496. DOI: 10.13328/j.cnki.jos.006930

      Abstract: Since the Snowden revelations, threats from backdoor attacks, represented by algorithm substitution attacks (ASA), have drawn wide concern. Such an attack undetectably tampers with the algorithms run by cryptographic protocol participants, embedding backdoors that leak secrets. Building a cryptographic reverse firewall (CRF) for protocol participants is a well-known and feasible countermeasure against ASA. Identity-based encryption (IBE), as a widely applicable public key infrastructure, urgently needs protection by appropriate CRF schemes. However, existing work only realizes CRF re-randomization and ignores the security risk of sending users' private keys directly to the third-party CRF. Addressing this problem, this study first formalizes the security properties a CRF must provide for IBE and the corresponding security model. It then gives the formal definition of re-randomizable and key-malleable secure-channel-free IBE (RKM-SCF-IBE), a method for transforming traditional IBE into RKM-SCF-IBE, and an approach to increasing anonymity. Finally, it proposes a generic, provably secure framework for constructing CRFs for IBE based on RKM-SCF-IBE, with instantiations from classic IBE schemes in the standard model and simulation results under several optimizations. Compared with existing work, the proposed scheme is proven secure under a more complete security model, provides a generic approach to building CRFs for IBE schemes, and clarifies the basic principles for constructing CRFs for more expressive encryption schemes.

    • Survey on Deep Learning Methods for Freehand-sketch-based Visual Content Generation

      2024, 35(7):3497-3530. DOI: 10.13328/j.cnki.jos.007053

      Abstract: Freehand sketches intuitively convey users' creative intentions with a few simple lines, enabling them to express their thinking process and design inspiration or to produce target images and videos. With the development of deep learning, sketch-based visual content generation performs cross-domain feature mapping by learning the feature distributions of sketches and visual objects (images and videos), enabling the automated generation of sketches from images and of images or videos from sketches. Compared with traditional manual creation, it markedly improves the efficiency and diversity of generation and has become one of the most important research directions in computer vision and graphics, playing an important role in design, visual creation, and related fields. This study surveys the research progress and future development of deep learning methods for sketch-based visual content generation. It classifies existing work into sketch-based image generation and sketch-based video generation according to the visual object and analyzes the generation models in detail for specific tasks, including cross-domain generation between sketches and visual content, style transfer, and visual content editing. It then summarizes and compares the commonly used datasets, points out sketch propagation methods for addressing insufficient sketch data, and reviews evaluation methods for generative models. Finally, it discusses research trends in light of the challenges sketches face in visual content generation and the future development of generative models.

    • Black-box Transferable Attack Method for Object Detection Based on GAN

      2024, 35(7):3531-3550. DOI: 10.13328/j.cnki.jos.006937

      Abstract: Object detection is widely used in fields such as autonomous driving, industry, and medical care, and object detection algorithms have gradually become the main method for solving key tasks in these fields. However, under adversarial attack, the robustness of deep-learning-based object detection models is seriously insufficient: adversarial samples constructed with small perturbations easily cause wrong predictions, which greatly limits the application of object detection models in security-critical fields. In practical applications the attacked models are black boxes, yet research on black-box attacks against object detection models is relatively scarce and suffers from incomplete robustness evaluation, low black-box attack success rates, and high resource consumption. To address these issues, this study proposes a black-box object detection attack algorithm based on a generative adversarial network. The algorithm uses a generative network fused with an attention mechanism to output adversarial perturbations and optimizes the generator's parameters with a substitute-model loss and a category attention loss; it supports both targeted attacks and vanishing attacks. Extensive experiments on the Pascal VOC and MS COCO datasets demonstrate that the proposed method achieves a higher black-box transferable attack success rate and can mount transferable attacks across datasets.
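
      A minimal PyTorch sketch of the generator's output stage in perturbation-generating attacks of this kind: a network emits an additive perturbation squashed to an L-infinity budget. The backbone, attention module, and losses are deliberately elided, and the architecture is illustrative only.

          import torch
          import torch.nn as nn

          class PerturbationGenerator(nn.Module):
              """Maps an image to an eps-bounded additive perturbation."""
              def __init__(self, eps=8 / 255):
                  super().__init__()
                  self.eps = eps
                  self.net = nn.Sequential(
                      nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1),
                  )

              def forward(self, x):
                  delta = self.eps * torch.tanh(self.net(x))  # |delta| <= eps
                  return (x + delta).clamp(0.0, 1.0)          # valid image

          # Training (elided) would minimize substitute-detector and category
          # attention losses so perturbations transfer to unseen detectors.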

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: CNY 70