• Volume 34, Issue 7, 2023 Table of Contents
    • Special Issue's Articles
    • Preface

      2023, 34(7):2979-2980. DOI: 10.13328/j.cnki.jos.006864


    • Safe Neural Network Controller Synthesis and Verification for Hybrid Systems

      2023, 34(7):2981-3001. DOI: 10.13328/j.cnki.jos.006857

      Abstract:Controller synthesis is a fundamental problem in hybrid system control, and the synthesis of safe controllers is essential for the use of hybrid systems in safety-critical fields. This study proposes a novel approach to synthesizing neural network controllers with safety guarantees for hybrid systems. The safety of the neural network controllers is guaranteed by barrier certificates, which are synthesized simultaneously with the controllers. To learn safe neural network controllers, first, the network structures of the controllers are determined, and the training datasets are constructed from the hybrid system. Then, the loss function for network training is encoded from the barrier certificate conditions that guarantee the safety of the controllers. When training completes, the learned controllers are safe on the training datasets but may not be safe on the whole hybrid system. To verify the safety of a learned controller on the whole system, this study transforms the certification of the safety conditions into a group of mixed-integer programming problems and adopts a numerical optimization solver to obtain formally guaranteed results. The safe neural network controller synthesis tool SafeNC is implemented, and its performance is evaluated on 8 benchmark systems. SafeNC successfully synthesizes large controllers with up to 6 hidden layers and 1,804 neurons. The experimental results show that SafeNC can deal with more complex systems and is more effective and scalable than existing methods.
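      The barrier-certificate encoding described above can be illustrated with a small sketch. The dynamics, sample sets, and candidate barrier below are invented for illustration; this is not SafeNC's actual loss, only a minimal instance of the same idea in plain Python:

```python
# Hypothetical sketch of encoding barrier-certificate conditions as a
# training loss, in the spirit of the approach described above.

def barrier_loss(B, f, init_samples, unsafe_samples, domain_samples, eps=1e-6):
    """Penalize violations of the three barrier-certificate conditions:
       (1) B(x) <= 0 on the initial set,
       (2) B(x) >  0 on the unsafe set,
       (3) B does not increase along one step of the dynamics f."""
    loss = 0.0
    for x in init_samples:          # condition (1)
        loss += max(0.0, B(x))
    for x in unsafe_samples:        # condition (2)
        loss += max(0.0, eps - B(x))
    for x in domain_samples:        # condition (3)
        loss += max(0.0, B(f(x)) - B(x))
    return loss

# Toy 1D system x' = 0.5*x with init set [-0.5, 0.5] and unsafe region
# x >= 2; B(x) = |x| - 1 is a valid barrier, so its loss is zero.
f = lambda x: 0.5 * x
B = lambda x: abs(x) - 1.0
init = [-0.5, 0.0, 0.5]
unsafe = [2.0, 3.0]
domain = [-0.5, 0.0, 0.5, 1.5]
print(barrier_loss(B, f, init, unsafe, domain))  # 0.0 for this valid barrier
```

      On training data the loss reaching zero only certifies the sampled points; as the abstract notes, whole-system safety still needs the separate mixed-integer programming verification step.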

    • Modeling and Verification Method of Intersection Test Scenario for Automated Driving

      2023, 34(7):3002-3021. DOI: 10.13328/j.cnki.jos.006855

      Abstract:Autonomous vehicles play an important role in easing traffic congestion and eliminating traffic accidents. To ensure their safety and reliability, autonomous vehicles must be tested comprehensively before they are deployed on public roads. Most existing test scenario data come from traffic accidents and traffic violations. Moreover, the most fundamental safety requirement of an automated driving system is that the vehicle complies with traffic laws; nevertheless, test scenarios built around traffic laws are severely lacking. Therefore, in this study, the safety requirements of the automated driving system are extracted from the perspective of traffic laws, and a Petri net modeling and formal verification method for intersection test scenarios is proposed. Firstly, the traffic rules are classified according to the test scenarios of automated driving, and the rule texts applicable to autonomous vehicles are extracted and semi-formalized. Secondly, with the aim of covering both traffic laws and the function-testing procedure of the test scenario, the interactive behaviors of intersection scene elements are integrated, and typical test scene elements are selected and combined to construct the intersection test scenarios. Then, each traffic-rule-based test scenario is modeled as a Petri net, in which places describe the states of the autonomous vehicle and transitions represent the trigger conditions between states. Moreover, the Clock Constraint Specification Language (CCSL) is chosen as the intermediate semantic language, and a specific conversion method is proposed to translate the Petri net into an intermediate semantic model that can be formally verified. Finally, Tina is used to verify the liveness, boundedness, and reachability of the traffic-law scenario model, and the experimental results demonstrate the validity of the model. Besides, the CCSL constraints are analyzed with the SMT-based analysis tool MyCCSL, and the consistency of the model is verified by LTL formulas.
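      The place/transition reading above can be made concrete with a toy sketch. The net below is invented for illustration and is far simpler than the paper's models: places carry the vehicle's state as tokens, and a transition fires only when all of its input places are marked.

```python
class PetriNet:
    """Toy Petri net: places hold tokens; a transition consumes one token
    from each input place and produces one in each output place."""

    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# The vehicle may enter the intersection only while the light is green.
net = PetriNet({"waiting": 1, "green": 1})
net.add_transition("enter", ["waiting", "green"], ["crossing", "green"])
net.fire("enter")
print(net.marking)   # {'waiting': 0, 'green': 1, 'crossing': 1}
```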

    • Automatic Parallelization Framework for Complex Nested Loops Based on LLVM Pass

      2023, 34(7):3022-3042. DOI: 10.13328/j.cnki.jos.006858

      Abstract:With the popularization of multi-core processors, automatic parallelization of serial code in embedded legacy systems has become a research hotspot, but automatic parallelization of complex nested loops with imperfectly nested structures and non-affine dependencies remains technically challenging. This study proposes CNLPF, an automatic parallelization framework for complex nested loops based on an LLVM pass. Firstly, a representation model of complex nested loops, the loop structure tree, is proposed, and regular regions of nested loops are automatically converted into loop structure trees. Then, data dependency analysis is carried out on the loop structure tree to construct the intra-loop and inter-loop dependency relations. Finally, the parallel loop program is generated based on the OpenMP shared-memory programming model. For 6 programs in the SPEC2006 suite containing nearly 500 complex nested loops, the proportion of complex nested loops is measured and parallel speedups are evaluated. The results show that the proposed framework can handle complex nested loops that LLVM Polly cannot optimize, which enhances the parallel compilation and optimization capabilities of LLVM, and that combining the method with Polly improves the speedup of Polly alone by 9%-43%.
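      The kind of dependency analysis such a framework must perform can be sketched at trace level. The helper below is a hypothetical illustration, not CNLPF's analysis (which works on LLVM IR): a loop is safe to parallelize naively only if no two distinct iterations touch the same location with at least one write.

```python
def has_cross_iteration_dependence(accesses):
    """accesses: list of (iteration, location, is_write) tuples recorded
    for a loop.  Returns True if two different iterations touch the same
    location and at least one access is a write (flow, anti, or output
    dependence), which forbids naive parallelization."""
    by_loc = {}
    for it, loc, w in accesses:
        by_loc.setdefault(loc, []).append((it, w))
    for recs in by_loc.values():
        for i in range(len(recs)):
            for j in range(i + 1, len(recs)):
                (it1, w1), (it2, w2) = recs[i], recs[j]
                if it1 != it2 and (w1 or w2):
                    return True
    return False

# a[i] = a[i] + 1   -> each iteration touches only its own element
indep = [(i, ("a", i), True) for i in range(4)] + \
        [(i, ("a", i), False) for i in range(4)]
# a[i] = a[i-1] + 1 -> iteration i reads what iteration i-1 wrote
dep = [(i, ("a", i), True) for i in range(1, 4)] + \
      [(i, ("a", i - 1), False) for i in range(1, 4)]
print(has_cross_iteration_dependence(indep), has_cross_iteration_dependence(dep))
```

      The first loop could be annotated with an OpenMP `parallel for`; the second carries a loop dependence and could not.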

    • Target-oriented UAF Vulnerability Prediction Method of Multi-threaded Programs

      2023, 34(7):3043-3063. DOI: 10.13328/j.cnki.jos.006862

      Abstract:Use-after-free (UAF) vulnerabilities are a common class of concurrency defects in multi-threaded programs. Predictive UAF detection methods have attracted much attention for their balance between false positives and misses. However, existing predictive UAF detection methods are not optimized for the specific target to be detected, which leads to unacceptable detection efficiency when the program is large or its behavior is complex. To address this issue, this study proposes a target-oriented method to detect UAF vulnerabilities in multi-threaded programs. Firstly, a Petri net model of the program is mined from the program traces. Then, for each potential pair of memory Free and Use operations that could constitute a UAF vulnerability, behavioral control structures that maintain the causal constraints and data consistency between operations are added to the Petri net model, with the goal of triggering the vulnerability. On this basis, a UAF detection method based on Petri net reverse unfolding is designed. This method verifies the authenticity of only one potential UAF vulnerability at a time, thus ensuring the efficiency of detection. At the same time, to reduce the number of potential UAF vulnerabilities to be checked, a new vector clock is proposed to automatically identify the causal relationships between Free and Use operations, and the potential UAF vulnerabilities are filtered accordingly. The proposed method is experimentally evaluated on several program examples. The experimental results show that it improves both the efficiency and the accuracy of detection compared with mainstream methods.
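      The vector-clock filtering idea can be sketched as follows. The events and clocks below are invented for illustration and are not taken from the paper's tool; the point is that a Free/Use pair only remains a UAF candidate if the two operations are causally unordered.

```python
# Hedged sketch: deciding with vector clocks whether a Free and a Use
# operation are causally ordered or concurrent.

def leq(vc1, vc2):
    """vc1 happens-before-or-equals vc2 (componentwise <=)."""
    return all(vc1.get(t, 0) <= vc2.get(t, 0) for t in set(vc1) | set(vc2))

def may_race(free_vc, use_vc):
    """A Free and a Use may form a UAF candidate only if neither causally
    precedes the other, i.e. the two operations are concurrent."""
    return not leq(free_vc, use_vc) and not leq(use_vc, free_vc)

# Thread T1 frees p at clock {T1: 3}; thread T2 uses p at clock {T2: 2}.
print(may_race({"T1": 3}, {"T2": 2}))           # concurrent -> True
# After a synchronization that orders the free before the use:
print(may_race({"T1": 3}, {"T1": 3, "T2": 2}))  # ordered    -> False
```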

    • Model Checking Tool for Concurrent Program Based on Constrained Dependency Graph

      2023, 34(7):3064-3079. DOI: 10.13328/j.cnki.jos.006856

      Abstract:Model checking is an automatic verification approach based on state-space exploration, which can effectively improve the quality of a program. However, due to the non-determinism of thread scheduling and the complexity of data synchronization, the state-space explosion problem is more serious in concurrent program verification. At present, partial-order reduction techniques based on independence analysis are widely applied to reduce the exploration space of concurrent program verification tasks. To address the problem that imprecise independence analysis significantly increases the number of equivalent trace classes to be explored, this study develops CDG4CPV, a model checking tool for concurrent programs that refines the dependencies between thread transitions. Firstly, a specification automaton corresponding to the reachability property is constructed. Then, a constrained dependency graph is built according to the types of thread transition edges and the access information of shared variables. Finally, the constrained dependency graph is utilized to prune independent enabled branches when unwinding the control-flow graph. Experiments are carried out on the concurrency track of the SV-COMP 2022 benchmark suite, and the efficiency of the proposed tool is compared and analyzed. Empirical results show that the tool can effectively improve the efficiency of model checking for concurrent programs. In particular, compared with the BDD-based program analysis algorithm, it reduces the number of explored states by 91.38%, and the time and memory overheads by 86.25% and 69.80%, respectively.
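      The dependency refinement above rests on a standard notion of independence from partial-order reduction, sketched below with invented transitions described only by their shared-variable read/write sets: two transitions commute when neither writes a variable the other accesses.

```python
def independent(t1, t2):
    """Two thread transitions are independent (commutable for partial-order
    reduction) if neither writes a shared variable the other accesses.
    Each transition is (reads, writes) as sets of shared variables."""
    r1, w1 = t1
    r2, w2 = t2
    return not (w1 & (r2 | w2)) and not (w2 & (r1 | w1))

# x = 1  vs  y = 2 : disjoint shared accesses, independent
print(independent((set(), {"x"}), (set(), {"y"})))   # True
# x = 1  vs  t = x : write/read conflict on x, dependent
print(independent((set(), {"x"}), ({"x"}, {"t"})))   # False
```

      A constrained dependency graph as described above would refine this check further with the transition-edge types, so that fewer pairs are conservatively classified as dependent.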

    • SMT-based Formal Verification of Synchronous Reactive Model for Zone Controller

      2023, 34(7):3080-3098. DOI: 10.13328/j.cnki.jos.006861

      Abstract:Formal verification is a proven technique for improving product quality during the software development of safety-critical systems. However, to be practical, the verification must be complete. Data-flow verification is the prevailing form of verification for implementation-level software models, so environmental inputs, generic functions, higher-order iterative operations, and intermediate variables are crucial when analyzing the usability of verification approaches. To verify a synchronous reactive model, engineers typically verify only the control-flow model (i.e., the safe state machine). Existing work shows that such approaches fall short of completely verifying the synchronous reactive models of industrial software and therefore fail to meet industrial requirements, which is a significant pain point for adopting formal verification in industry. This study is built on the insight that the synchronous reactive models of safety-critical systems should be verified completely, with data-flow models taken into account. An automated, generic verification approach tailored to the integration of safe state machines and data-flow models is presented. Furthermore, a synthesis-based approach is adopted in which SCADE models describing functional requirements, safety requirements, and environmental inputs are translated into Lustre models by program synthesis and then checked by an SMT-based model checker. The proposed technique promotes program synthesis as a general primitive for improving the integrity of formal verification. The approach is evaluated on an industrial rail-transit application of nearly two million lines of Lustre code. The results show that it is effective in sidestepping long-standing and complex verification issues in large-scale synchronous reactive models.

    • Efficient Approach for Solving Simple Preference in AI Planning

      2023, 34(7):3099-3115. DOI: 10.13328/j.cnki.jos.006859

      Abstract:AI planning, or planning for short, is an important branch of AI and is widely applied in many fields, e.g., job shop scheduling, transportation scheduling, robot motion planning, and aerospace mission planning. In traditional planning, a plan (a sequence of actions) must eventually achieve all goals; such goals are called hard goals. Nevertheless, in many practical problems, the focus is not only on achieving the goals as early as possible at the lowest plan cost, but also on other factors, e.g., resource consumption or time constraints. To this end, the concept of simple preferences, also called soft goals, is introduced. In contrast to hard goals, a simple preference is allowed to be violated by a plan. In essence, simple preferences measure the quality of plans without affecting the existence of plans. Existing research on simple preferences has made limited progress, and the quality of the resulting plans is often unsatisfactory. This study proposes an efficient approach for solving simple preferences, which are modeled as part of classical planning models. Moreover, an SMT (satisfiability modulo theories) solver is employed to recognize the mutual exclusion relations among simple preferences for preference reduction, relieving the burden on planners. The major advantages of this approach are that, on the one hand, the state space is largely reduced due to the pre-tailoring of simple preferences, and on the other hand, existing fast planners can be used without designing a specialized planning algorithm. The experimental results on benchmarks show that the proposed approach performs well in improving the quality of plans, and is especially suited to situations where the simple preferences are not independent of each other.

    • Collaborative Verification Method of Uninterpreted Programs

      2023, 34(7):3116-3133. DOI: 10.13328/j.cnki.jos.006860

      Abstract:The verification of uninterpreted programs is undecidable in general. Recently, a decidable fragment of uninterpreted programs (called coherent) was discovered, for which verification is PSPACE-complete. Based on these results, a trace abstraction-based verification method in the CEGAR (counterexample-guided abstraction refinement) style was proposed for general uninterpreted programs and proved very effective. Even so, the verification of uninterpreted programs sometimes needs many refinements, and when multiple programs are verified with this method, the verification of each program proceeds independently, which is costly. It is observed, however, that the abstract models of infeasible counterexample traces are reusable and that the verifications of similar programs can benefit from each other. In this work, a collaborative verification framework is proposed that accumulates the abstract models of infeasible traces during program verification. When a new program is to be verified, its abstraction is first refined with the accumulated abstract models to prune infeasible traces and improve verification efficiency. Besides, an optimized congruence-based trace abstraction method is proposed, which compacts states during verification to enlarge the scope of the abstractions of infeasible traces. The collaborative verification framework and the optimized trace abstraction method have been implemented, achieving average speedups of 2.70x and 1.49x on two representative benchmarks.
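      The congruence-based reasoning above builds on congruence closure over uninterpreted functions. The sketch below is a minimal, quadratic-time illustration for unary function symbols, not the paper's optimized implementation: merging two terms also merges the applications built from them.

```python
class CongruenceClosure:
    """Minimal congruence closure for unary uninterpreted functions:
    after asserting a = b, f(a) and f(b) are merged as well."""

    def __init__(self):
        self.parent = {}          # union-find over terms
        self.apps = set()         # application terms (f, arg) seen so far

    def find(self, t):
        self.parent.setdefault(t, t)
        root = t
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[t] != root:          # path compression
            self.parent[t], t = root, self.parent[t]
        return root

    def app(self, f, a):
        t = (f, a)
        self.apps.add(t)
        self.find(t)
        return t

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        self.parent[ra] = rb
        # propagate congruence: f(x) = f(y) whenever x = y
        changed = True
        while changed:
            changed = False
            apps = list(self.apps)
            for i in range(len(apps)):
                for j in range(i + 1, len(apps)):
                    (f, x), (g, y) = apps[i], apps[j]
                    if (f == g and self.find(apps[i]) != self.find(apps[j])
                            and self.find(x) == self.find(y)):
                        self.parent[self.find(apps[i])] = self.find(apps[j])
                        changed = True

    def equal(self, a, b):
        return self.find(a) == self.find(b)

cc = CongruenceClosure()
fa, fb = cc.app("f", "a"), cc.app("f", "b")
cc.merge("a", "b")
print(cc.equal(fa, fb))   # True: f(a) = f(b) follows from a = b
```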

    • Review Articles
    • Survey on Robustness Verification of Feedforward Neural Networks and Recurrent Neural Networks

      2023, 34(7):3134-3166. DOI: 10.13328/j.cnki.jos.006863

      Abstract:With the advent of the intelligent age, intelligent systems equipped with deep neural networks (DNNs) have penetrated every aspect of our life. However, due to their black-box and large-scale characteristics, the predictions of neural networks are difficult to trust completely. When neural networks are applied to safety-critical fields such as autonomous driving, how to ensure their safety remains a great challenge for academia and industry. For this reason, academia has carried out much research on robustness, a special kind of security property of neural networks, and has proposed many algorithms for robustness analysis and verification. The verification algorithms for feedforward neural networks (FNNs) include precise algorithms and approximate algorithms, which have developed relatively prosperously, while the verification algorithms for other types of networks, such as recurrent neural networks (RNNs), are still at a primary stage. This study reviews the current development of DNNs and the challenges of deploying them in our life. It exhaustively surveys the robustness verification algorithms for FNNs and RNNs, and analyzes and compares the intrinsic connections among these algorithms. The security verification algorithms of RNNs in specific application scenarios are also investigated, and future research directions in the robustness verification of neural networks are clarified.
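      One representative "approximate" FNN verification primitive of the kind surveyed here, interval bound propagation (IBP), fits in a few lines. The tiny network below is invented for illustration; IBP pushes an input box through each affine layer and ReLU to obtain sound (if loose) output bounds.

```python
def ibp_layer(lo, hi, W, b):
    """Propagate interval bounds [lo, hi] through y = W x + b: each output
    bound picks the input endpoint according to the sign of the weight."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu(v):
    return [max(0.0, x) for x in v]

# Toy 2-2-1 ReLU network; input box: x in [0,1] x [0,1].
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]
lo, hi = [0.0, 0.0], [1.0, 1.0]
lo, hi = ibp_layer(lo, hi, W1, b1)
lo, hi = relu(lo), relu(hi)          # ReLU is monotone, so bounds map through
lo, hi = ibp_layer(lo, hi, W2, b2)
print(lo, hi)                        # [0.0] [2.0]
```

      Precise algorithms tighten such bounds at exponential worst-case cost; approximate ones like IBP trade precision for scalability.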

    • Transparent Request Tracing and Sampling Method for Java-based Microservice System

      2023, 34(7):3167-3187. DOI: 10.13328/j.cnki.jos.006523

      Abstract:Microservice is becoming the mainstream architecture of cloud-based software systems because of its agile development and rapid deployment. However, the structure of a microservice system is complex, often comprising hundreds of service instances, and the call relationships between services are extremely complex. When an anomaly occurs in a microservice system, it is difficult to locate its root causes, and end-to-end request tracing has become the standard means of solving this problem. However, current distributed request tracing methods are intrusive to applications and rely heavily on the developers' expertise in request tracing. Besides, the tracing functionality cannot be started or stopped at runtime. These defects not only increase the burden on developers but also restrict the adoption of distributed request tracing in practice. This study designs and implements a transparent request tracing system named Trace++, which generates tracing code automatically and injects the generated code into the running application by dynamic code instrumentation. Trace++ is minimally intrusive to programs, transparent to developers, and can start or stop the tracing functionality flexibly. In addition, its adaptive sampling method effectively reduces the cost of request tracing. The results of experiments conducted on the TrainTicket microservice system show that Trace++ can discover the dependencies between services accurately, and its performance cost when tracing is enabled is close to that of source code instrumentation. When request tracing is stopped, Trace++ incurs no performance cost. Moreover, the adaptive sampling method preserves representative trace data while reducing the trace volume by 89.4%.
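      One common shape of adaptive trace sampling can be sketched as follows. This is an illustrative policy, not Trace++'s actual algorithm: the sampling probability is adjusted each window so the number of kept traces tracks a fixed budget.

```python
class AdaptiveSampler:
    """Hypothetical adaptive sampler: keep roughly `target_per_window`
    traces per time window regardless of the request rate."""

    def __init__(self, target_per_window, p0=1.0):
        self.target = target_per_window
        self.p = p0            # current sampling probability
        self.seen = 0          # requests observed in the current window

    def should_sample(self, rand):
        """rand is a uniform draw in [0, 1), passed in for determinism."""
        self.seen += 1
        return rand < self.p

    def end_window(self):
        """Scale p so that, at the observed rate, the next window keeps
        about `target` traces; relax back toward 1.0 when traffic is low."""
        if self.seen > 0:
            self.p = min(1.0, self.target / self.seen)
        self.seen = 0

s = AdaptiveSampler(target_per_window=10)
for _ in range(1000):          # a heavy window: 1000 requests
    s.should_sample(0.5)
s.end_window()
print(s.p)                     # drops to 10/1000 = 0.01
```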

    • Malware Similarity Measurement Method Based on Multiplex Heterogeneous Graph

      2023, 34(7):3188-3205. DOI: 10.13328/j.cnki.jos.006538

      Abstract:Existing malware similarity measurement methods cannot cope with code obfuscation technology and lack the ability to model the complex relationships between malware samples. To solve these problems, this study proposes a malware similarity measurement method based on a multiplex heterogeneous graph, called API relation graph enhanced multiple heterogeneous ProxEmbed (RG-MHPE). The method first uses the dynamic and static features of malware to construct the multiplex heterogeneous graph, and then proposes an enhanced proximity embedding method based on relational paths to solve the problem that proximity embedding cannot be applied to similarity measurement on multiplex heterogeneous graphs. In addition, this study extracts knowledge from the API documents on the MSDN website, builds an API relation graph, and learns the similarity between Windows APIs, effectively slowing down the aging of the similarity measurement model. Finally, the experimental results show that RG-MHPE achieves the best similarity measurement performance and model anti-aging ability.

    • Stable Learning via Sample Reweighting and Dual Classifiers

      2023, 34(7):3206-3225. DOI: 10.13328/j.cnki.jos.006511

      Abstract:Stable learning aims to leverage knowledge obtained from a single training dataset to learn a robust prediction model that accurately predicts labels of test data drawn from a different but related distribution. To achieve promising performance on test data with an agnostic distribution, existing stable learning algorithms focus on eliminating the spurious correlations between the features and the class variable. However, these algorithms can only weaken part of the spurious correlations rather than eliminate them completely, and they may encounter overfitting when learning the prediction model. To tackle these issues, this study proposes a stable learning algorithm based on sample reweighting and dual classifiers, which jointly optimizes the weights of samples and the parameters of the dual classifiers to learn a robust prediction model. Specifically, to estimate the effects of all features on classification, the algorithm balances the distribution of confounders by learning global sample weights to remove the spurious correlations between the features and the class variable. To eliminate the spurious correlations between some irrelevant features and the class variable and to weaken the influence of irrelevant features on the sample weighting process, the algorithm selects and removes some irrelevant features before sample reweighting. To further improve the generalization ability of the model, the algorithm constructs two classifiers and learns a prediction model with an optimal hyperplane by minimizing the parameter difference between the two classifiers. Experiments on synthetic and real-world datasets validate the effectiveness of the proposed algorithm.

    • Event Extraction Method Based on Dual Attention Mechanism

      2023, 34(7):3226-3240. DOI: 10.13328/j.cnki.jos.006520

      Abstract:Given that syntactic relations are not fully utilized and argument roles are often missing in event extraction, an event extraction method based on a dual attention mechanism (EEDAM) is proposed to improve the accuracy and recall of event extraction. Firstly, sentences are encoded with four embedding vectors, and dependency relations are introduced to construct a dependency relation graph, so that the deep neural network can make full use of syntactic relations. Then, through a graph transformation attention network, new dependency arcs and aggregated node information are generated to capture long-range dependencies and potential interactions; a weighted attention network is integrated to capture the key semantic information in sentences and extract sentence-level event arguments, improving the prediction ability of the model. Finally, key sentence detection and similarity ranking are used to fill in document-level arguments. The experimental results show that, compared with the optimal baseline, the joint multiple Chinese event extractor (JMCEE), the proposed method improves the accuracy, recall, and F1-score by 17.82%, 4.61%, and 9.80%, respectively, on the ACE2005 dataset, and by 18.08%, 4.41%, and 9.93%, respectively, on a dataset of dam safety operation records.

    • Optimization of Equivalence Query Algorithm in Active Automata Learning

      2023, 34(7):3241-3255. DOI: 10.13328/j.cnki.jos.006532

      Abstract:As an effective technique for inferring black-box state machine models of software systems, model learning (a.k.a. automata learning) can be divided into active and passive learning. Given the input and output alphabets, active learning obtains the minimal complete state machine of the target system in polynomial time by interacting with the black-box system. However, the equivalence query algorithm remains a major obstacle to the development and application of active automata learning tools. This study discusses the influence of counterexamples on learning algorithms based on the discrimination tree, defines comparison rules for hypotheses, and proposes two principles for constructing test cases. Following these principles, the Wp-method equivalence query algorithm is improved to produce better hypotheses and effectively reduce the number of queries and symbols. Based on LearnLib, three kinds of automata are used as experimental subjects to verify the effectiveness of the principles and the improved algorithm.

    • Heterogeneous Network Representation Learning Method Fusing Mutual Information and Multiple Meta-paths

      2023, 34(7):3256-3271. DOI: 10.13328/j.cnki.jos.006535

      Abstract:Heterogeneous information networks can model many applications in the real world, and their representation learning has received extensive attention from scholars. Most representation learning methods extract structural and semantic information based on meta-paths, and their effectiveness in network analysis has been proven. However, these methods ignore the internal information of nodes and the different degrees of importance of meta-path instances, and they capture only the local information of nodes. Thus, this study proposes a heterogeneous network representation learning method fusing mutual information and multiple meta-paths. First, a meta-path internal encoding method called relational rotation encoding is used, which captures the structural and semantic information of the heterogeneous information network from adjacent nodes and meta-path context nodes, and an attention mechanism is used to model the importance of each meta-path instance. Then, an unsupervised heterogeneous network representation learning method fusing mutual information maximization and multiple meta-paths is proposed, in which mutual information captures both global and local information. Finally, experiments are conducted on two real datasets. Compared with current mainstream algorithms as well as some semi-supervised algorithms, the results show that the proposed method performs better on node classification and clustering.

    • Fog Computing-based Three-party Authentication and Key Agreement Protocol for Smart Healthcare

      2023, 34(7):3272-3291. DOI: 10.13328/j.cnki.jos.006514

      Abstract:In smart healthcare, cloud computing and the Internet of Things are combined to solve the problem of real-time access to large-scale data. However, uploading data to a remote cloud incurs additional communication cost and transmission delay. Fog computing has been introduced into smart healthcare to solve this problem: fog servers assist the cloud server by storing and accessing data locally, which yields low latency and high mobility. Since medical data is highly sensitive and the consequences of illegal tampering would be catastrophic, how to design a fog computing-based authentication protocol for smart healthcare has become a research hotspot. Such a protocol should be secure against various attacks and realize secure data transmission among users, fog nodes, and cloud servers. This study analyzes two schemes for smart healthcare and points out that the scheme of Hajian et al. cannot resist stolen-verifier, denial-of-service, impersonation, node-capture, and session-key-disclosure attacks, while the scheme of Wu et al. cannot resist offline password guessing and impersonation attacks. Furthermore, a fog computing-based three-party authentication and key agreement protocol is proposed for smart healthcare. Its security is proved by using the random oracle model, BAN logic, and heuristic analysis, showing that it is secure against known attacks. The performance comparison with related schemes shows that the proposed scheme is more suitable for fog computing-based smart healthcare.

    • Supervised Identity Privacy Protection Scheme in Blockchain

      2023, 34(7):3292-3312. DOI: 10.13328/j.cnki.jos.006517

      Abstract:One of the main challenges of blockchain technology is ensuring the privacy protection of transaction identities under the conditions of an open ledger and multi-party consensus. At present, identity privacy protection schemes based on anonymous authentication and transaction mixing in public blockchains are difficult to popularize in industry due to the lack of supervision. Based on the identity privacy protection scheme in Monero, this study introduces the role of a regulator and designs a supervised privacy protection scheme for the transaction receiver based on one-time address encryption and zero-knowledge proof. It also designs a linkable revocable ring signature scheme based on linkable ring signatures and revocable ring signatures, so as to implement a supervised privacy protection scheme for the transaction sender based on autonomous mixing. The scheme not only protects the identity privacy of the participants but also supports offline recovery of transaction identities by the regulator, achieving the regulatory goal of "controllable anonymity". The analysis and test results show that the running time of the algorithms in this scheme is on the order of milliseconds, which meets the performance requirements of blockchains in non-high-frequency transaction scenarios.

    • Enhancement of Textual Adversarial Attack Ability Based on Sememe-level Sentence Dilution Algorithm

      2023, 34(7):3313-3328. DOI: 10.13328/j.cnki.jos.006525

      Abstract:With machine learning widely applied in the natural language processing (NLP) domain in recent years, the security of NLP tasks has received growing concern. Existing studies have found that small modifications to examples may lead to wrong machine learning predictions, a phenomenon known as an adversarial attack. Textual adversarial attacks can effectively reveal the vulnerabilities of NLP models for improvement. Nevertheless, existing textual adversarial attack methods all focus on designing complex adversarial example generation strategies, which yield only limited improvement of the success rate, and their highly invasive modifications degrade textual quality. Thus, a simple and effective method producing high-quality adversarial examples is in demand. To solve this problem, the sememe-level sentence dilution algorithm (SSDA) and the dilution pool construction algorithm (DPCA) are proposed from a new perspective of improving the adversarial attack process. SSDA is a new process that can be freely embedded into the classical adversarial attack workflow. It first uses dilution pools constructed by DPCA to dilute the original examples and then generates adversarial examples from the diluted examples. It can not only improve the success rate of any adversarial attack method without restrictions on datasets or victim models, but also achieve higher adversarial example quality than the original method. Through experiments on different datasets, dilution pools, victim models, and textual adversarial attack methods, the improvement of SSDA on the success rate is verified, and it is shown that dilution pools constructed by DPCA can further enhance the dilution ability of SSDA. The experimental results demonstrate that SSDA reveals more vulnerabilities of models than classical methods, and that DPCA helps SSDA improve the success rate with higher adversarial example quality.

    • File Hierarchy CP-ABE Scheme Supporting Graded User Access

      2023, 34(7):3329-3342. DOI: 10.13328/j.cnki.jos.006526 CSTR:

      Abstract (759) HTML (787) PDF 10.55 M (2548) Comment (0) Favorites

      Abstract:The file hierarchy ciphertext policy attribute-based encryption (FH-CP-ABE) scheme encrypts multi-level files under a single access policy, which saves the computation cost of encryption and decryption and the storage cost of ciphertext. Nevertheless, the existing file hierarchy CP-ABE scheme cannot support graded user access and is vulnerable to unauthorized access. For this reason, a file hierarchy CP-ABE scheme that supports graded user access is proposed. In the proposed scheme, a graded user access tree is constructed, and the ciphertext subsections are reconstructed to support the access requirements of graded users, thus eliminating the possibility of unauthorized access by users. The security analysis shows that the proposed scheme can resist selective chosen-plaintext attacks. Both theoretical and experimental analyses show that the proposed scheme is more efficient in computation and storage than related schemes.
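As background, a toy illustration of the graded-access requirement the scheme enforces: files sit at sensitivity levels, and a user's grade bounds which levels that user may decrypt. The numeric grade/level semantics below are an assumption for illustration, not the paper's access-tree construction.

```python
def can_decrypt(user_grade, file_level):
    """A grade-g user may decrypt files whose level is numerically >= g
    (level 1 = most sensitive, grade 1 = highest authority); this
    ordering is an illustrative assumption."""
    return file_level >= user_grade

files = {"salary.doc": 1, "memo.txt": 3}     # hypothetical file levels
grade = 2                                    # a mid-grade user
allowed = [name for name, lvl in files.items() if can_decrypt(grade, lvl)]
print(allowed)                               # ['memo.txt']
```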

    • Secure Intersection-sum Computation

      2023, 34(7):3343-3353. DOI: 10.13328/j.cnki.jos.006529 CSTR:

      Abstract (649) HTML (799) PDF 5.55 M (2055) Comment (0) Favorites

      Abstract:Secure multi-party computation is one of the hot issues in the international cryptographic community, and the secure computation of the intersection-sum is a new problem in this area. The problem has important theoretical significance and practical value in industry, commerce, and healthcare. Existing solutions assume that the private sets are subsets of a universal set; they leak the intersection cardinality and have a certain probability of error. Based on the Paillier cryptosystem, this study designs three protocols for the intersection-sum problem that are secure in the semi-honest model. Protocol 1 privately computes the number of common identifiers (i.e., the cardinality of the user identifier intersection) and the sum of the integer values associated with these users; Protocol 2 and Protocol 3 privately compute the sum of the integer values associated with the intersection elements without leaking the intersection cardinality. The whole computation process reveals no information about the private inputs beyond the intersection-sum itself. The protocols do not restrict the private sets to subsets of a universal set, so they can be applied in more scenarios. It is proved, using the simulation paradigm, that these protocols are secure in the semi-honest model, and their efficiency is tested by experiments.
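The protocols build on the additive homomorphism of the Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so one party can aggregate the values attached to common identifiers without seeing them. The sketch below uses deliberately tiny, insecure parameters and matches identifiers in the clear, which the real protocols of course do not.

```python
import math, random

def keygen(p=2003, q=2011):                  # demo-sized primes, insecure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                # standard simple choice of g
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:               # r must be invertible mod n
        r = random.randrange(2, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = keygen()
# Party A holds (identifier, value) pairs; the sum over identifiers that
# B also holds is accumulated homomorphically, under encryption.
a_records = {"u1": 10, "u2": 25, "u3": 7}
b_ids = {"u2", "u3", "u9"}
c = encrypt(pk, 0)
for uid, value in a_records.items():
    if uid in b_ids:                         # the real protocols match obliviously
        c = c * encrypt(pk, value) % (pk[0] ** 2)
print(decrypt(sk, c))                        # 32
```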

    • CCA Secure Broadcast Encryption Based on SM9

      2023, 34(7):3354-3364. DOI: 10.13328/j.cnki.jos.006531 CSTR:

      Abstract (948) HTML (854) PDF 6.00 M (2684) Comment (0) Favorites

      Abstract:The chosen-ciphertext attack (CCA) security model effectively captures active attacks in reality. The existing cryptosystems secure against CCA are mainly designed abroad, and China lacks its own CCA-secure cryptosystems. Although there are general transformation approaches for achieving CCA security, they increase both computational and communication overhead. Based on the SM9 encryption algorithm, this study proposes an identity-based broadcast encryption scheme with CCA security. The design is derived from SM9, and the sizes of the private key and the ciphertext are constant and independent of the number of receivers chosen in the data encryption phase. Specifically, the private key consists of one element, and the ciphertext is composed of three elements. If the GDDHE assumption holds, the proposed scheme is proved to achieve selective CCA security in the random oracle model. To achieve CCA security, a dummy identity is introduced into the encryption algorithm, and this identity can be used to answer decryption queries successfully. Analysis shows that the proposed scheme is comparable to the existing efficient identity-based broadcast encryption schemes in computational and storage efficiency.

    • Deep Recommendation Model with Cross-domain Association and Privacy Protection

      2023, 34(7):3365-3384. DOI: 10.13328/j.cnki.jos.006533 CSTR:

      Abstract (840) HTML (928) PDF 5.28 M (2803) Comment (0) Favorites

      Abstract:Recommendation systems, which can effectively filter information based on user preferences, have been applied widely. The problems of cold start and data sparsity become more and more serious with the explosive growth of the number of users. Multi-source data fusion, which can effectively alleviate the loss of recommendation accuracy caused by data sparsity and the cold start problem, is favored by researchers. Its main idea is to fuse auxiliary information about users from other domains to fill in missing values and thereby optimize the accuracy of the target recommendation service. Nevertheless, the relations between the fused data introduce a more serious risk of privacy disclosure. To solve the above problems, this study proposes a deep cross-domain recommendation model with privacy protection. In detail, a deep learning collaborative recommendation method is designed that features multi-source data fusion and differential privacy protection. On the one hand, this method fuses auxiliary domain information to improve the accuracy of recommendation and corrects the deviation of abnormal points to improve the performance of the recommender system; on the other hand, it adds noise during the collaborative training process based on the differential privacy model to solve the data security problem in data fusion. To evaluate the long-tail effect in the recommendation system, this study proposes a new metric, discovery degree, for the first time, which measures the ability of a recommendation algorithm to find users' invisible requirements. Performance comparison and analysis against existing algorithms show that the proposed method achieves better recommendation accuracy and diversity than existing methods on the premise of ensuring privacy security and can effectively discover the hidden needs of users.
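A minimal sketch of the differential-privacy step used during collaborative training: each shared gradient is clipped to bound its sensitivity and then perturbed with Laplace noise. The clipping norm, the privacy budget, and the use of per-update Laplace noise are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def dp_perturb(grad, clip_norm=1.0, epsilon=0.5, rng=None):
    """Clip a gradient to `clip_norm`, then add Laplace noise."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)     # bound the per-update sensitivity
    scale = clip_norm / epsilon              # Laplace scale b = sensitivity / epsilon
    return grad + rng.laplace(0.0, scale, size=grad.shape)

noisy = dp_perturb(np.array([3.0, 4.0]))     # norm 5 -> clipped to 1, then noised
print(noisy.shape)                           # (2,)
```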

    • Generative Multiple Adversarial Steganography Algorithm Based on U-Net Structure

      2023, 34(7):3385-3407. DOI: 10.13328/j.cnki.jos.006537 CSTR:

      Abstract (928) HTML (858) PDF 10.20 M (2611) Comment (0) Favorites

      Abstract:The development of artificial intelligence brings more and more challenges to data hiding technology, and it is urgent to improve the security of existing steganography methods. In this study, a generative multiple adversarial steganography algorithm based on the U-Net network structure is proposed to improve the image data hiding ability. A generative multiple adversarial steganography network (GMASN), comprising a generative adversarial network, a steganalyzer optimization network, and a steganalysis network, is first constructed, and the anti-steganalysis ability of the steganographic image is improved through competition among the networks in the GMASN. At the same time, aiming at the problem that existing generative adversarial networks can only generate low-quality images randomly, a generative network based on the U-Net structure is designed to transfer the details of a reference image to the generated carrier image, so that images with high visual quality can be generated in a controlled manner. Moreover, the image discrimination loss, mean squared error (MSE) loss, and steganalysis loss are dynamically combined in the proposed scheme to enable the GMASN to converge rapidly and stably. Experimental results show that the PSNR of the generated carrier image can reach 48.60 dB and that the discrimination rate between the generated carrier image and the steganographic image is 50.02%. The proposed algorithm can generate high-quality carrier images suitable for data hiding, enables the steganographic network to converge rapidly and stably, and effectively improves the security of image steganography.
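For reference, the reported 48.60 dB PSNR relates to mean squared error by the standard formula PSNR = 10·log10(MAX²/MSE); the check below shows the pixel-level error such a score corresponds to on an 8-bit scale.

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# Invert the formula: an MSE just under 0.9 on the 0-255 scale already
# gives the ~48.6 dB reported for the generated carrier images.
mse = 255.0 ** 2 / 10 ** 4.86
print(round(psnr(mse), 2))                   # 48.6
```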

    • CNN Based Transformer for Panoptic Segmentation

      2023, 34(7):3408-3421. DOI: 10.13328/j.cnki.jos.006530 CSTR:

      Abstract (1388) HTML (1141) PDF 8.76 M (2940) Comment (0) Favorites

      Abstract:This study proposes a convolutional neural network (CNN) based Transformer to solve the panoptic segmentation task. The method draws on the inherent advantages of the CNN in image feature learning and avoids the increase in computation incurred when the Transformer is transplanted into vision tasks. The CNN-based Transformer is built from two basic structures: the projector, which performs the feature-domain transformation, and the extractor, which is responsible for feature extraction. The effective combination of the projector and the extractor forms the framework of the CNN-based Transformer. Specifically, the projector is implemented by a lattice convolution that models the spatial relationships of the image through the design and optimization of the convolution filter configuration, and the extractor is realized by a chain network that improves feature extraction capability by stacking chain blocks. Considering the framework and the essential function of panoptic segmentation, the CNN-based Transformer is successfully applied to the panoptic segmentation task. Experimental results on the MS COCO and Cityscapes datasets demonstrate that the proposed method achieves excellent performance.

    • Priority Ceiling Protocol Based on Avoidance Blocking

      2023, 34(7):3422-3437. DOI: 10.13328/j.cnki.jos.006534 CSTR:

      Abstract (777) HTML (812) PDF 11.36 M (2255) Comment (0) Favorites

      Abstract:To improve the CPU utilization of spacecraft computers, the new generation of spacecraft operating systems uses a hybrid scheduling algorithm covering both fixed-point starting tasks and sporadic tasks. Fixed-point starting tasks are often safety-critical: they must be started at fixed points in time and cannot be blocked during execution. When fixed-point starting tasks and sporadic tasks coexist, the existing real-time locking protocols cannot guarantee that the blocking time of fixed-point starting tasks is zero. Therefore, on the basis of the classic priority ceiling protocol, this study proposes a real-time locking protocol based on the idea of blocking avoidance, which judges resource requests in advance and sets virtual starting points to ensure that sporadic tasks' access to shared resources does not affect the execution of fixed-point starting tasks. At the same time, by temporarily raising the access priority of some resources, the cost caused by task preemption can be reduced. This study derives the worst-case blocking time of the proposed locking protocol and analyzes its performance with schedulable-rate experiments. The experiments show that, in the case of short critical sections, the protocol can keep the schedulability loss caused by accessing shared resources under 27%.
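A toy sketch of the "judging in advance" idea: before a sporadic task enters a critical section, its worst-case critical-section length is checked against the next fixed-point task's start time, and the request is deferred to a virtual starting point if the two could overlap. The single-resource model and the concrete times are illustrative assumptions, not the paper's full protocol.

```python
def may_enter(now, wcet_cs, next_fixed_start):
    """Grant the lock only if the worst-case critical section finishes
    before the fixed-point task starts, so the latter is never blocked."""
    return now + wcet_cs <= next_fixed_start

def request_time(now, wcet_cs, next_fixed_start, fixed_wcet):
    """Grant now, or defer to a virtual starting point after the
    fixed-point task has completed."""
    if may_enter(now, wcet_cs, next_fixed_start):
        return now
    return next_fixed_start + fixed_wcet     # deferred past the fixed task

print(request_time(now=10, wcet_cs=5, next_fixed_start=20, fixed_wcet=3))  # 10
print(request_time(now=18, wcet_cs=5, next_fixed_start=20, fixed_wcet=3))  # 23 (deferred)
```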

    • Label Enhancement Based Discrete Cross-modal Hashing Method

      2023, 34(7):3438-3450. DOI: 10.13328/j.cnki.jos.006536 CSTR:

      Abstract (678) HTML (1024) PDF 6.65 M (2310) Comment (0) Favorites

      Abstract:Cross-modal hashing can greatly improve the efficiency of cross-modal retrieval by mapping data of different modalities into more compact hash codes. Nevertheless, existing cross-modal hashing methods usually use a binary similarity matrix, which cannot accurately describe the semantic similarity relationships between samples and suffers from a squared-complexity problem. To better mine the semantic similarity relationships of the data, this study presents a label enhancement based discrete cross-modal hashing method (LEDCH). It first leverages the prior knowledge of transfer learning to generate the label distribution of the samples, then constructs a stronger similarity matrix from the label distribution, and generates the hash codes with an efficient discrete optimization algorithm that incurs a small quantization error. Finally, experimental results on two benchmark datasets validate the effectiveness of the proposed method on cross-modal retrieval tasks.
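As background on the retrieval side of cross-modal hashing: samples are mapped to compact binary codes and compared by Hamming distance. The random projection below is a stand-in for the learned hash functions, not LEDCH itself.

```python
import numpy as np

def hash_codes(features, projection):
    """Sign of a linear projection gives binary hash bits."""
    return (features @ projection > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
proj = rng.standard_normal((4, 16))          # 4-dim features -> 16-bit codes
x = rng.standard_normal(4)
near = x + 0.01 * rng.standard_normal(4)     # a semantically similar sample
far = -x                                     # a dissimilar sample
cx, cn, cf = (hash_codes(v, proj) for v in (x, near, far))
print(hamming(cx, cn) <= hamming(cx, cf))    # True: similar items get closer codes
```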

    • Optimization of GEMM on SW1621 Processors

      2023, 34(7):3451-3463. DOI: 10.13328/j.cnki.jos.006519 CSTR:

      Abstract (884) HTML (1418) PDF 7.02 M (2785) Comment (0) Favorites

      Abstract:General matrix multiply (GEMM) is one of the most heavily used functions in scientific and engineering computation and the base function of many linear algebra libraries; its performance usually has an essential influence on the whole application. Moreover, because of its computational intensity, its efficiency is often taken as an important metric of a hardware platform. This study systematically optimizes dense GEMM on the domestic SW1621 processor. Based on an analysis of the baseline code and profiling of the various overheads, as well as exploitation of the architectural features and the instruction set, optimizations for DGEMM are carefully designed and performed, including the blocking scheme, the packing mechanism, the kernel function implementation, and data prefetching. In addition, a code generator is developed that can emit different assembly and C code according to its input parameters. Using the code generator together with auto-tuning scripts, the optimal values of the tunable parameters can be found. After applying the optimizations and tuning, the optimized DGEMM achieves 85% of the peak performance of a single core in the single-threaded case and 80% of the peak performance of the entire 16-core chip. The optimization of DGEMM not only improves the performance of BLAS on the SW1621 but also provides an important reference for optimizing dense data computation on SW-series multi-core machines.
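The blocking scheme mentioned above can be illustrated with a cache-blocked matrix multiply: the matrices are processed tile by tile so each tile is reused while resident in cache. The block size and the Python/NumPy rendering are illustrative; the paper tunes such parameters automatically for the SW1621.

```python
import numpy as np

def blocked_gemm(A, B, bs=32):
    """C = A @ B computed tile by tile (cache blocking)."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):        # accumulate into one C tile
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((96, 80))
B = rng.standard_normal((80, 64))
print(np.allclose(blocked_gemm(A, B), A @ B))  # True
```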

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063