YU Zhong-Qi , ZHANG Xiao-Yu , LI Jian-Wen
2023, 34(8):3467-3484. DOI: 10.13328/j.cnki.jos.006867
Abstract: In recent years, formal verification has received increasing attention and plays an important role in ensuring the safety and correctness of systems in safety-critical areas. As a highly automated branch of formal verification, model checking has broad development prospects. This work proposes a new model checking technique that can effectively check transition systems, covering both bug-finding and safety proof. Different from existing model checking algorithms, the proposed method, unsatisfiable core (UC)-based approximate incremental reachability (UAIR), mainly utilizes unsatisfiable cores to compute a series of candidate safety invariants until a final invariant is generated, thereby realizing safety proof and bug-finding. In symbolic model checking based on a SAT solver, the UC obtained from the satisfiability solver is used to construct a candidate safety invariant; if the transition system itself is safe, the initial invariant obtained is only an approximation of a safety invariant. Then, while checking safety, the candidate safety invariant is gradually refined until a real invariant is found that proves the system safe; if the system is unsafe, the method eventually finds a counterexample proving it unsafe. This is a brand-new approach: UCs are exploited for safety model checking, and sound results are achieved. As is well known, there is no absolute best method in model checking. Although the proposed method does not surpass mature methods such as IC3 and CAR overall, it solves 3 cases that those methods cannot, and it is believed to be a valuable addition to the model checking toolset.
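The bug-finding/safety-proof dichotomy described in this abstract can be pictured in a much simpler explicit-state setting. The sketch below is not the SAT/UC-based UAIR algorithm itself; the transition systems and state names are hypothetical, and the "invariant" is simply the exact reachable-state fixpoint:

```python
# Explicit-state reachability: a toy stand-in for the bug-finding /
# safety-proof dichotomy (NOT the SAT/UC-based UAIR algorithm).
def check_safety(init, trans, bad):
    """Return ('unsafe', trace) if a bad state is reachable from init,
    or ('safe', reached) where `reached` is an inductive invariant
    (here, the exact reachable-state set) otherwise."""
    frontier = [(s, [s]) for s in init]
    reached = set(init)
    while frontier:
        state, trace = frontier.pop()
        if state in bad:
            return 'unsafe', trace           # counterexample found
        for nxt in trans.get(state, ()):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return 'safe', reached                   # fixpoint reached: proved safe

# A safe system: 'err' is unreachable from 's0'.
safe_sys = {'s0': ['s1'], 's1': ['s0', 's2'], 's2': []}
print(check_safety({'s0'}, safe_sys, {'err'}))
# An unsafe system: 'err' is reachable via 's1'.
unsafe_sys = {'s0': ['s1'], 's1': ['err']}
print(check_safety({'s0'}, unsafe_sys, {'err'}))
```

UAIR replaces the explicit frontier with symbolic candidate invariants refined from unsat cores, but the two possible outcomes, a counterexample trace or a proved invariant, are the same.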
LI Shuo-Chuan , WANG Zan , MA Ming-Xu , CHEN Xiang , ZHAO Ying-Quan , WANG Hai-Chi , WANG Hao-Yu
2023, 34(8):3485-3506. DOI: 10.13328/j.cnki.jos.006865
Abstract: Constraint solving has been applied to many domains of program analysis, including in-depth applications in concurrent program analysis. Concurrent programs have been widely used with the rapid development of multi-core processors. However, concurrency defects threaten the security and robustness of concurrent programs, so testing concurrent programs is of great importance. Due to non-deterministic thread scheduling, one key challenge in concurrent program testing is how to reduce the exponentially growing thread interleaving space. The state-of-the-art approach (i.e., J-MCR) tackles this challenge through constraint solving. However, it is found that conflicts and redundancies exist inside the constraints (i.e., conflicting constraint clauses make the constraints unsatisfiable), and solving these unsatisfiable constraints lowers efficiency. Thus, a directed graph constraint-guided method called GC-MCR (directed graph constraint-guided maximal causality reduction) is proposed. By removing conflicting constraints and simplifying redundant ones, the number of constraint-solving invocations is reduced, thereby further improving efficiency. Compared with the state-of-the-art approach J-MCR, GC-MCR reduces the number of constraint-solving invocations by 19.36% on average and reduces the testing time by 34.01% on average over 38 concurrent programs.
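The key observation, that conflicting ordering constraints are unsatisfiable and therefore need not be sent to the solver, can be pictured with a simple cycle check over a constraint graph. This is a generic sketch, not GC-MCR's actual graph construction; the event names are made up:

```python
# Each edge (a, b) encodes the ordering constraint "event a happens
# before event b". A directed cycle means the conjunction of constraints
# is unsatisfiable, so the expensive solver call can be skipped entirely.
def has_conflict(constraints):
    graph = {}
    for a, b in constraints:
        graph.setdefault(a, []).append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY:                  # back edge: an ordering cycle
                return True
            if c == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)

# w1 < r1 < w2 < r2 is consistent; adding r2 < w1 closes a cycle.
print(has_conflict([('w1', 'r1'), ('r1', 'w2'), ('w2', 'r2')]))
print(has_conflict([('w1', 'r1'), ('r1', 'w2'), ('w2', 'r2'), ('r2', 'w1')]))
```

A linear-time graph check like this is far cheaper than an SMT query, which is why filtering conflicting constraint sets before solving pays off.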
ZENG Fan-Lang , CHANG Rui , XU Hao , PAN Shao-Ping , ZHAO Yong-Wang
2023, 34(8):3507-3526. DOI: 10.13328/j.cnki.jos.006866
Abstract: As a trusted execution environment technology on ARM processors, TrustZone provides an isolated and independent execution environment for security-sensitive programs and data on the device. However, running the trusted OS and all trusted applications in the same trusted environment may cause problems: exploiting a vulnerability in any component affects the other components of the system. Although ARM proposed the S-EL2 virtualization technology, which supports multiple isolated partitions in the secure world to alleviate this problem, an actual partition manager may still contain security threats such as information leakage between partitions. Current secure partition manager designs and implementations lack rigorous mathematical proofs guaranteeing the security of the isolated partitions. This study analyzes the multiple-secure-partition architecture of ARM TrustZone in detail, proposes a refinement-based modeling and security analysis method for multiple secure partitions of TrustZone, and completes the modeling and formal verification of the secure partition manager in the theorem prover Isabelle/HOL. First, a multiple-secure-partition model named RMTEE is built based on refinement: an abstract state machine describes the system execution and the security policy requirements, and the abstract model of multiple secure partitions is established and instantiated. Then, the concrete model of the secure partition manager is implemented, in which the event specification follows the FF-A specification. Next, since the existing partition manager design cannot meet the goal of information flow security verification, a DAC-based inter-partition communication access control is designed and applied to the modeling and verification of the TrustZone secure partition manager. Finally, the correctness of the concrete model's refinement of the abstract model, as well as the correctness and security of the event specification in the concrete model, are proved, completing the formal proof of 137 definitions and 201 lemmas (more than 11 000 lines of Isabelle/HOL code). The results show that the model satisfies confidentiality and integrity and can effectively defend against malicious attacks by partitions.
ZHANG Le-Ping , ZHAO Yong-Wang , WANG Bu-Yang , LI Yue-Xin , FENG Xiao-Xiao
2023, 34(8):3527-3548. DOI: 10.13328/j.cnki.jos.006869
Abstract: L4, the second generation of microkernels, greatly compensates for the shortcomings of the first generation in flexibility and performance and has attracted the attention of both academia and industry. The kernel is the basic component of an operating system; once it contains errors, the whole system may break down, causing losses to users. Therefore, improving the correctness and reliability of the kernel is very important. The virtual memory subsystem is a key mechanism of the L4 kernel, and this study focuses on its formal modeling and verification. A formal model is presented that covers all operations of the mapping mechanism, all management operations of the address space, and the MMU behavior with the TLB. The properties of functional correctness, safety, and security are formalized. By reasoning about partial correctness, invariants, and unwinding conditions, it is shown in the theorem prover Isabelle/HOL that the proposed model satisfies these properties. During modeling and verification, three problems related to functional correctness and security were found in the source code; the corresponding solutions or suggestions are given as well.
WAN Xin-Yi , XU Ke , CAO Qin-Xiang
2023, 34(8):3549-3573. DOI: 10.13328/j.cnki.jos.006868
Abstract: Discrete mathematics is one of the foundational courses of computer science; its components include propositional logic, first-order logic, and axiomatic set theory. Teaching practice shows that it is difficult for beginners to accurately understand abstract concepts such as syntax, semantics, and deduction systems. In recent years, some scholars have begun to introduce interactive theorem provers into teaching to help students construct formal proofs and thereby understand logical systems more thoroughly. However, existing theorem provers have a high entry threshold, and directly employing them would increase students' learning burden. To address this problem, this study presents a ZFC axiomatic set theory prover developed in Coq for teaching scenarios. The first-order deduction system and ZFC axiomatic set theory are formalized, and several automated reasoning tactics are then developed. Students can use these tactics to reason formally in a concise, textbook-style proving environment. The tool has been introduced into the discrete mathematics course for freshmen. Students with no prior theorem-proving experience can use it to quickly construct formal proofs of theorems such as mathematical induction and the Peano arithmetic system, which verifies the practical effect of the tool.
LI Jing , OUYANG Dan-Tong , YE Yu-Xin
2023, 34(8):3574-3586. DOI: 10.13328/j.cnki.jos.006870
Abstract: Axiom pinpointing has attracted extensive interest in description logics due to its ability to explore explicable defects in an ontology and to search for hidden justifications of a logical consequence. Balancing the expressive power of description logics against the solving efficiency of reasoners has always been the focus of axiom pinpointing research. This study proposes consequence-based methods for computing justifications from both glass-box and black-box perspectives. The glass-box method uses modified reasoning rules to trace the specific process of inference and introduces the concept of a pinpointing formula to establish the correspondence between the labels of a Boolean formula and all the minimal axiom sets. The black-box method directly calls the inference engine based on the unmodified reasoning rules and then uses a hitting set tree (HST) to compute all justifications of the inference. Two reasoning tools have been designed based on the two axiom pinpointing algorithms for expressive description logic ontologies. Their feasibility is verified theoretically and experimentally, and their solving efficiency is compared with that of existing axiom pinpointing tools.
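The combinatorial core underlying HST-style black-box pinpointing is the computation of minimal hitting sets of a family of sets. The brute-force enumeration below is a didactic sketch only; a real HST builds Reiter's tree and calls the reasoner lazily, and the axiom names here are made up:

```python
from itertools import chain, combinations

def minimal_hitting_sets(sets):
    """All inclusion-minimal sets that intersect every set in `sets`.
    Brute force: enumerate candidates in order of increasing size, so
    any candidate containing an already-found hitting set is skipped."""
    universe = sorted(set(chain.from_iterable(sets)))
    found = []
    for size in range(len(universe) + 1):
        for cand in combinations(universe, size):
            c = set(cand)
            if all(c & s for s in sets) and not any(f <= c for f in found):
                found.append(c)
    return found

# Two overlapping conflict sets over hypothetical axioms ax1..ax3:
conflicts = [{'ax1', 'ax2'}, {'ax2', 'ax3'}]
print(minimal_hitting_sets(conflicts))   # {'ax2'} and {'ax1', 'ax3'}
```

Enumeration is exponential in the size of the universe, which is exactly why HST prunes the search with reuse and early-termination rules instead.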
ZHANG Yi-Yu , WANG Gui-Hang , ZUO Zhi-Qiang , LI Xuan-Dong
2023, 34(8):3587-3604. DOI: 10.13328/j.cnki.jos.006552
Abstract: As emerging technologies develop rapidly, domain software puts forward new requirements for development efficiency. As a declarative programming language with concise syntax and well-defined semantics, Datalog helps developers solve complex problems rapidly and has therefore attracted wide attention in recent years. However, when solving real-world problems, existing single-machine Datalog engines are often limited by memory capacity and lack scalability. To solve these problems, this study designs and implements a Datalog engine based on out-of-core computing. First, a series of operators supporting out-of-core computing is designed to evaluate Datalog programs, and each program is converted into a C++ program built from these operators. Next, the study designs a Hash-based partition strategy and a minimum-replacement scheduling strategy based on search tree pruning; the corresponding partition files are then scheduled and computed to generate the final results. Based on this method, the study builds the prototype tool DDL (disk-based Datalog engine) and selects widely used real-world Datalog programs to conduct experiments on both synthetic and real-world datasets. The experimental results show that DDL achieves good performance and high scalability.
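The Hash-based partition strategy can be sketched as follows: tuples are routed to partitions by hashing a key column, so that each partition can later be loaded and joined independently of the others. This is a generic sketch with in-memory lists standing in for DDL's on-disk partition files; the relation and key choice are hypothetical:

```python
import zlib

def stable_hash(value):
    # Deterministic across runs (unlike Python's salted str hash),
    # as an on-disk engine requires so partitions stay consistent.
    return zlib.crc32(repr(value).encode())

def hash_partition(tuples, key_index, num_partitions):
    """Route each tuple to a partition by hashing its key column."""
    partitions = [[] for _ in range(num_partitions)]
    for tup in tuples:
        partitions[stable_hash(tup[key_index]) % num_partitions].append(tup)
    return partitions

# A hypothetical edge relation, partitioned by source vertex: all edges
# sharing a source land in the same partition, so a join on that column
# never needs two partitions in memory at once.
edges = [(1, 2), (2, 3), (1, 4), (3, 1)]
parts = hash_partition(edges, 0, 2)
print(parts)
```

The same-key-same-partition property is what allows semi-naive evaluation to proceed one partition file at a time under a fixed memory budget.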
NIU Fei-Fei , LI Chuan-Yi , GE Ji-Dong , LUO Bin
2023, 34(8):3605-3636. DOI: 10.13328/j.cnki.jos.006558
Abstract: Feature requests are suggestions to improve existing features or requests for new features proposed by software users on open platforms, and they reflect users' wishes and needs. Efficient and accurate analysis and processing of feature requests play a vital role in improving user satisfaction and product competitiveness. With users' active participation, feature requests have become an important source of software requirements. However, feature requests differ from traditional requirements in source, content, and form; therefore, the methods for applying them to software development must also differ from those for traditional requirements. At present, substantial research focuses on applying feature requests to software development, e.g., their acquisition, classification, prioritization, quality management, developer recommendation, and the location of relevant code. As related research emerges constantly, a review of feature request analysis and processing becomes increasingly necessary. This study analyzes 121 academic papers on how to analyze and process feature requests in the software development process and systematically organizes existing results from the perspective of applying feature requests to software development. The study summarizes the research topics on feature requests along the process of applying them to software development and compares this process with the traditional requirements engineering process. Furthermore, it analyzes the research methods used at different requirements engineering stages and points out the differences. Finally, future research directions on feature requests are discussed to provide guidance for researchers.
MIAO Xin-Liang , CHANG Rui , PAN Shao-Ping , ZHAO Yong-Wang , JIANG Lie-Hui
2023, 34(8):3637-3658. DOI: 10.13328/j.cnki.jos.006612
Abstract: The security of the trusted execution environment (TEE) has drawn attention from researchers worldwide. Memory tag technology in a TEE helps achieve finer-grained memory isolation and access control mechanisms. Nevertheless, prior works often rely on testing or empirical analysis to show their effectiveness, which fails to strongly guarantee functional correctness and security properties. This study proposes a general formal model framework for memory tag-based access control and introduces a model checking-based security analysis method for access control. First, a general framework for the memory tag-based TEE access control model is constructed through a formal method, and the access control entities are formally defined; the defined rules include access control rules and tag update rules. Abstract machines under the framework are then incrementally designed and implemented in the formal language B, and the basic properties of the machines are formalized as invariant constraints. Next, a TEE implementation called TIMBER-V is used as an application case: the TIMBER-V access control model is constructed by instantiating the abstract machines, the security properties are formally specified, and the functional correctness and security of the models are verified by model checking. Finally, the study simulates specific attack scenarios, and these attacks are successfully detected. The evaluation results demonstrate the effectiveness of the proposed security analysis method.
WANG Wen-Sheng , TIAN Cong , DUAN Zhen-Hua
2023, 34(8):3659-3673. DOI: 10.13328/j.cnki.jos.006614
Abstract: Determinization of an automaton refers to transforming a nondeterministic automaton into a deterministic automaton recognizing the same language, and it is one of the fundamental notions in automata theory. Determinization of ω-automata serves as a basic step in the decision procedures of SnS, CTL*, and μ-calculus; meanwhile, it is also the key to solving infinite games. Therefore, studies on the determinization of ω-automata are of great significance. This study focuses on a kind of ω-automata called Streett automata. Nondeterministic Streett automata can be transformed into equivalent deterministic Rabin or parity automata, and in previous work, algorithms with optimal state complexity and optimal asymptotic performance, respectively, were obtained. Furthermore, a tool supporting Streett determinization is needed to evaluate the actual effect of the proposed algorithms and to show the determinization procedure visually. This study first introduces four Streett determinization structures: μ-Safra trees, H-Safra trees, compact Streett Safra trees, and LIR-H-Safra trees. Deterministic Rabin automata are obtained via H-Safra trees, which are optimal, and μ-Safra trees, while deterministic parity automata are constructed via the other two structures, of which LIR-H-Safra trees are asymptotically optimal. Furthermore, based on the open-source software GOAL (graphical tool for omega-automata and logics), the study develops a tool for Streett determinization named NS2DR&PT that supports all four structures. Test sets are constructed by randomly generating 100 Streett automata, and comparative experiments are carried out. Results show that the actual state complexity of each structure is consistent with the theoretical analysis. Moreover, the efficiency of the different algorithms is compared and analyzed.
2023, 34(8):3674-3685. DOI: 10.13328/j.cnki.jos.006615
Abstract: Asynchronous programs use asynchronous non-blocking calls to achieve concurrency and are widely applied in parallel and distributed systems. However, verifying asynchronous programs is very complex: both safety and liveness verification are EXPSPACE-hard. This study proposes a program model of asynchronous programs and defines two problems on it, namely ε-equivalence and ε-reachability. The two problems are proved NP-complete by reducing 3-CNF-SAT to them and, in turn, reducing them to the reachability problem of communication-free Petri nets. A case study shows that the two problems can solve verification problems of asynchronous programs.
LI Xi-Meng , WANG Guo-Hui , ZHANG Qian-Ying , SHI Zhi-Ping , GUAN Yong
2023, 34(8):3686-3707. DOI: 10.13328/j.cnki.jos.006616
Abstract: The reliable functioning of safety-critical IT systems depends heavily on the correct execution of programs, and deductive program verification can guarantee correct execution to a large extent. A plethora of programming languages are already in use, and new languages oriented toward high-reliability scenarios are still being invented. As a result, it is difficult to devise a full-fledged logical system for each language to support program verification and to prove the soundness and completeness of that system with respect to the formal semantics of the language. Language-independent verification techniques offer sound verification procedures parameterized over the formal semantics of programming languages: specializing the verification procedure with the formal semantics of a concrete programming language directly yields a verification procedure for that language. This study proposes a language-independent verification technique based on big-step operational semantics. The technique features a unified procedure for sound reasoning about program structures that potentially cause unbounded behavior, such as iteration and recursion. In particular, the study employs a functional formalization of big-step operational semantics to support the explicit representation of the computation performed by the sub-structures of a program; this representation enables exploiting the auxiliary information provided for these sub-structures in the unified reasoning process. The study proves the soundness and relative completeness of the proposed technique, evaluates it with verification examples in imperative and functional programming languages, and formalizes all theoretical results and verification examples in the Coq proof assistant, thus providing a basis for developing a proof assistant-based language-independent program verifier with big-step operational semantics.
2023, 34(8):3708-3725. DOI: 10.13328/j.cnki.jos.006617
Abstract: With the development of the Internet, the 5th generation (5G) of mobile communication technology has emerged. The 5G authentication and key agreement (5G-AKA) protocol is proposed mainly to achieve two-way authentication between users and serving networks. However, recent research suggests that the protocol may be subject to information deciphering and message replay attacks, and some current variants of 5G-AKA cannot satisfy the protocol's unlinkability. In response to these shortcomings, this study proposes an improved scheme called SM-AKA. SM-AKA is composed of two parallel sub-protocols in a novel way: through flexible mode switching, the lightweight sub-protocol (the GUTI submodule) is adopted for most runs, while the other sub-protocol (the SUPI submodule) handles abnormalities arising during authentication. This mechanism not only realizes efficient authentication between users and networks but also improves the stability of the protocol. Furthermore, the freshness of variables is effectively maintained to prevent message replay, and strict encryption and decryption methods further strengthen the security of the protocol. Finally, the study carries out a complete evaluation of SM-AKA: through formal modeling, attack assumptions, and Tamarin derivation, it is proved that the scheme achieves the authentication and privacy goals, and theoretical analysis demonstrates its performance advantages.
LU Fa-Ming , HUANG Ying , ZENG Qing-Tian , BAO Yun-Xia , TANG Meng-Fan
2023, 34(8):3726-3744. DOI: 10.13328/j.cnki.jos.006618
Abstract: Data races are common defects in multi-threaded programs. Traditional data race analysis methods fail to achieve both high recall and high precision, and their detection reports cannot locate the root cause of the defects. Given the advantages of Petri nets in accurate behavior description and rich analysis tools for modeling and analyzing concurrent systems, this study proposes a new data race detection method based on Petri net unfolding. First, a Petri net model of the program is established by analyzing and mining a single program execution trace. The model implies program traces beyond the one it was mined from, which reduces the false negative rate of traditional dynamic analysis methods while maintaining performance. Then, an unfolding-based detection method for potential data races is proposed, which improves efficiency significantly compared with static analysis methods and clearly shows the triggering path of each data race defect. Finally, for each potential data race detected in the previous stage, a scheduling schema is designed to replay the defect based on the CalFuzzer platform, which eliminates false positives and ensures the authenticity of the detection results. A corresponding prototype system has been developed, and the effectiveness of the proposed method is verified on publicly available program instances.
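For intuition about what counts as a potential data race in a trace, the classic lockset check (in the style of Eraser, not the Petri-net-unfolding method of this paper) is easy to state: two accesses to the same variable from different threads, at least one of which is a write, holding no lock in common, form a candidate race. The event tuples below are hypothetical:

```python
# Lockset-style race candidates from a recorded trace (illustrative
# only; this paper instead unfolds a mined Petri net model).
def potential_races(events):
    """events: (thread, op, var, locks_held) tuples, op in {'r', 'w'}."""
    races = []
    for i, (t1, op1, v1, l1) in enumerate(events):
        for t2, op2, v2, l2 in events[i + 1:]:
            if (v1 == v2 and t1 != t2 and 'w' in (op1, op2)
                    and not set(l1) & set(l2)):   # no common lock
                races.append((v1, t1, t2))
    return races

trace = [
    ('T1', 'w', 'x', {'L'}),
    ('T2', 'r', 'x', {'L'}),    # protected by the same lock: no race
    ('T1', 'w', 'y', set()),
    ('T2', 'w', 'y', set()),    # unprotected write-write: candidate race
]
print(potential_races(trace))   # [('y', 'T1', 'T2')]
```

Candidates found this way still need confirmation by controlled replay, which is exactly the role the CalFuzzer-based scheduling stage plays in the proposed method.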
LU Hui , GUO Run-Sheng , JIN Cheng-Jie , HE Lu-Xiao-Han , WANG Xing-Wei , TIAN Zhi-Hong
2023, 34(8):3745-3756. DOI: 10.13328/j.cnki.jos.006652
Abstract: After years of technical development and attack-defense confrontation, reinforcement technology for Android applications has matured: protection granularity has gradually developed from general dynamic Dalvik executable (DEX) modification to highly customized Native-layer obfuscation mechanisms, strengthening client code protection by continuously increasing the difficulty and workload of reverse analysis. For the newly emerging reinforcement technique of obfuscator low level virtual machine (OLLVM) obfuscation, this study proposes an automatic deobfuscation solution named CiANa based on Capstone and flow-sensitive concolic execution. The Capstone engine is used to analyze the basic blocks and their instruction structure, thereby identifying the real blocks scattered in the control flow graph of the disassembled program. The execution sequence of the real blocks is then determined by flow-sensitive concolic execution. Finally, the assembly instructions of the real blocks are repaired to obtain a deobfuscated executable binary. Comparative experimental results show that CiANa can recover Android Native files obfuscated by OLLVM on the ARM/ARM64 architectures. As the first framework that offers effective deobfuscation and generates executable files for all versions (Debug/Release) of OLLVM on ARM/ARM64, CiANa provides necessary auxiliary support for reverse analysis.
LIANG Shi-Yu , GAO Yang , CONG Yu , HAO Ai-Min
2023, 34(8):3757-3773. DOI: 10.13328/j.cnki.jos.006545
Abstract: Recently, with the continuously rising realism requirements of movies, games, and virtual reality applications, real-time rendering of translucent materials such as human organs and milk has become more and more important. Most current subsurface scattering computation methods find it difficult to estimate the scattering range correctly. To tackle this estimation issue, a new subsurface scattering computation formula is proposed to accurately represent the maximum scattering distance. First, brute-force Monte Carlo photon tracing is simulated to obtain reflectance profile results. Second, a selected polynomial model is fitted to the reflectance profile to compute the precise maximum scattering range at each shading point. Furthermore, a new importance sampling scheme is proposed to reduce the number of Monte Carlo samples, thereby increasing computational efficiency. The required parameters are only the reflectance at the shading points and the mean free path of the material, so the rendering effect can be adjusted flexibly. Experimental results show that the proposed model avoids the previous misestimation of the scattering range and renders regions of complex reflectivity more accurately, while the rendering rate meets real-time requirements.
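The role importance sampling plays here can be sketched on a simplified exponential-lobe reflectance profile. The two-lobe profile and the proposal distribution below are illustrative assumptions, not the paper's fitted polynomial model: drawing radii from an exponential matched to the profile's widest lobe keeps the Monte Carlo weights bounded, so few samples suffice.

```python
import math, random

def profile(r, d):
    # Illustrative two-lobe diffusion-style falloff (not the paper's model);
    # d plays the role of a mean-free-path-like scale.
    return math.exp(-r / d) + math.exp(-r / (3 * d))

def estimate_normalization(d, n, seed=0):
    """Importance-sample the integral of profile(r) over [0, inf).
    Proposal: exponential pdf q(r) = exp(-r/(3d)) / (3d), matched to the
    widest lobe so the weight f(r)/q(r) stays bounded (low variance)."""
    rng = random.Random(seed)
    scale = 3 * d
    total = 0.0
    for _ in range(n):
        r = -scale * math.log(1.0 - rng.random())   # inverse-CDF sampling
        q = math.exp(-r / scale) / scale
        total += profile(r, d) / q
    return total / n

# Analytic value of the integral: d + 3d = 4d.
est = estimate_normalization(d=1.0, n=20000)
print(est)
```

With the matched proposal the weight is 3d(1 + e^(-2r/3d)), bounded between 3d and 6d, so the estimator's relative error is well under a percent at this sample count; a mismatched (narrower) proposal would make the weight unbounded and the estimate noisy.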
LIU Han , LI Kai-Xuan , CHEN Yi-Xiang
2023, 34(8):3774-3792. DOI: 10.13328/j.cnki.jos.006592
Abstract: In recent years, artificial intelligence (AI) has developed rapidly. AI systems have penetrated people's lives and become an indispensable part of them. However, these systems require large amounts of data to train their models, and data disturbances affect their results. Furthermore, as business scenarios diversify and system scale grows, the trustworthiness of AI systems has attracted wide attention. Firstly, based on the trustworthiness attributes proposed by various organizations and scholars, this study introduces nine trustworthiness attributes of AI systems. Next, in terms of data, model, and result trustworthiness, the study discusses methods for measuring the trustworthiness of existing AI systems and designs an evidence collection method for AI trustworthiness. Then, it summarizes the trustworthiness measurement theory and methods for AI systems. In addition, combining attribute-based software trustworthiness measurement methods with blockchain technology, the study establishes a trustworthiness measurement framework for AI systems, which includes methods for trustworthiness attribute decomposition and evidence acquisition, a federated trustworthiness measurement model, and a blockchain-based trustworthiness measurement architecture. Finally, it describes the opportunities and challenges of trustworthiness measurement technologies for AI systems.
WANG Dong-Jing , LIU Ji-Tao , YU Dong-Jin
2023, 34(8):3793-3820. DOI: 10.13328/j.cnki.jos.006542
Abstract: With the wide application of the global positioning system (GPS), more and more electric bicycles are equipped with GPS sensors. The massive trajectory data recorded by these sensors are of great value in many fields, such as analyzing users' travel patterns and supporting the decisions of urban planners. However, the low-cost GPS sensors widely used on electric bicycles cannot provide high-precision positioning, and map matching for electric bicycle trajectories is more complex and challenging because: (1) electric bicycle trajectories contain many stay points; (2) their sampling frequency is higher and the distance between adjacent track points is shorter; (3) some roads are open only to electric bicycles, and matching accuracy is sensitive to the quality of the road network. To solve these issues, an adaptive and accurate road network map matching algorithm named KFTS-AMM is proposed, which consists of two main components: a segmented Kalman filtering-based trajectory simplification (KFTS) algorithm and a segmented hidden Markov model-based adaptive map matching (AMM) algorithm. Since Kalman filtering provides optimal state estimation, the trajectory simplification algorithm KFTS makes trajectory curves smoother and reduces the impact of abnormal points on map matching accuracy by automatically fixing trajectory points during simplification. The matching algorithm AMM reduces the impact of invalid trajectory segments on map matching accuracy. Moreover, a stay-point identification and merging step is added to the trajectory processing, further improving accuracy. Extensive experiments on a real-world electric bicycle trajectory dataset from Zhengzhou show that KFTS-AMM outperforms the baselines in accuracy and can significantly speed up matching by using the simplified trajectory data.
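The Kalman-filtering idea behind KFTS can be illustrated with a scalar filter that blends each noisy GPS fix with the running prediction in proportion to their uncertainties. This is a one-dimensional sketch with fabricated measurement values; KFTS itself operates on segmented two-dimensional trajectories:

```python
def kalman_smooth(measurements, process_var=1e-4, meas_var=0.04):
    """Scalar Kalman filter with a static motion model: each noisy fix
    is blended with the prediction; the gain k weighs prediction
    uncertainty p against measurement noise."""
    x, p = measurements[0], 1.0      # initial estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p += process_var             # predict step (position assumed static)
        k = p / (p + meas_var)       # Kalman gain
        x += k * (z - x)             # correct with the new measurement
        p *= (1 - k)                 # updated estimate variance
        out.append(x)
    return out

# Noisy fixes scattered around a true position of 10.0 (made-up values).
fixes = [10.2, 9.8, 10.4, 9.6, 10.1, 9.9]
smoothed = kalman_smooth(fixes)
print(smoothed[-1])
```

As more fixes arrive, the gain shrinks and the estimate settles near the true position, which is exactly the smoothing effect that lets KFTS repair abnormal trajectory points before map matching.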
ZHANG Hao-Di , CHEN Zhen-Hao , CHEN Jun-Yang , ZHOU Yi , LIAN De-Fu , WU Kai-Shun , LIN Fang-Zhen
2023, 34(8):3821-3835. DOI: 10.13328/j.cnki.jos.006593
Abstract: In recent years, deep reinforcement learning has been widely and effectively used in sequential decision-making, with outstanding advantages in application scenarios with high-dimensional inputs and large state spaces. However, deep reinforcement learning faces limitations such as a lack of interpretability, inefficient initial training, and cold starts. To address these issues, this study proposes a dynamic decision framework combining explicit knowledge reasoning with deep reinforcement learning. The framework embeds prior knowledge in agent training via explicit knowledge representation and has the agent intervened by the knowledge reasoning results during reinforcement learning, so as to improve training efficiency and model interpretability. The explicit knowledge is categorized into two kinds: heuristic acceleration knowledge and evasive safety knowledge. Heuristic acceleration knowledge intervenes in the agent's decisions during initial training to speed up training, while evasive safety knowledge keeps the agent from making catastrophic decisions and keeps the training process stable. Experimental results show that the proposed framework significantly improves training efficiency and model interpretability across different application scenarios and reinforcement learning algorithms.
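The evasive-safety intervention can be sketched as a mask applied between the agent's value estimates and its action choice. This is a minimal stand-in for the framework's knowledge-reasoning component; the actions, values, and rule below are invented:

```python
def choose_action(q_values, forbidden):
    """Pick the highest-value action that the safety knowledge allows.
    q_values: {action: estimated value}; forbidden: actions that the
    evasive rules veto in the current state."""
    allowed = {a: q for a, q in q_values.items() if a not in forbidden}
    if not allowed:                             # rules veto everything:
        return max(q_values, key=q_values.get)  # fall back to the agent
    return max(allowed, key=allowed.get)

q = {'accelerate': 0.9, 'brake': 0.4, 'swerve': 0.6}
# Hypothetical rule: obstacle ahead => accelerating is catastrophic.
print(choose_action(q, forbidden={'accelerate'}))   # 'swerve'
```

Heuristic acceleration knowledge can use the same hook in reverse, biasing early-training choices toward rule-recommended actions instead of masking them out.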
CHEN Jie-Na , ZHANG Ming-Zhuo , DU De-Hui , LI Bo , NIE Ji-Hui , REN Jing-Yao
2023, 34(8):3836-3852. DOI: 10.13328/j.cnki.jos.006594
Abstract: Realizing safe and efficient behavior decision-making has become a challenging issue for autonomous driving. As the autonomous driving industry develops vigorously, practitioners and academics have proposed many behavior decision-making approaches. However, due to environmental uncertainties and the requirements for effective, highly secure decisions, existing approaches fail to take all these factors into account. Therefore, this study proposes an autonomous driving behavior decision-making approach that combines the RoboSim model with a Bayesian network. First, based on a domain ontology, the study analyzes the semantic relationships between elements of autonomous driving scenarios and predicts the intentions of dynamic entities in the scenario with an LSTM model, providing driving scenario information for building the Bayesian network. Next, behavior decisions in specific scenarios are inferred by the Bayesian network, and the state transitions of the RoboSim model carry out the dynamic execution of the decisions and eliminate redundant operations of the Bayesian network, thereby improving decision efficiency. The RoboSim model is platform-independent, can simulate the decision-making cycle, and supports validation techniques of different forms. To ensure the safety of the behavior decisions, this study uses the model checking tool UPPAAL to verify and analyze the RoboSim model. Finally, based on lane-change and overtaking cases, the study validates the feasibility of the proposed approach, providing a feasible way to achieve safe and efficient autonomous driving behavior decision-making.
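The Bayesian-network inference step can be pictured with a two-node example computed directly by Bayes' rule. The events and probabilities are invented for illustration; the paper's network is built from ontology-derived scenario elements and LSTM-predicted intentions:

```python
# Infer the posterior P(obstacle | sensor alert) that a downstream
# decision node would condition on (all numbers are hypothetical).
def posterior_obstacle(p_obs, p_alert_given_obs, p_alert_given_no_obs):
    joint_obs = p_obs * p_alert_given_obs            # P(obs, alert)
    joint_no = (1 - p_obs) * p_alert_given_no_obs    # P(no obs, alert)
    return joint_obs / (joint_obs + joint_no)        # Bayes' rule

p = posterior_obstacle(p_obs=0.3, p_alert_given_obs=0.9,
                       p_alert_given_no_obs=0.2)
print(round(p, 4))   # ≈ 0.6585
```

A full network chains many such conditional tables; caching results across decision cycles, as the RoboSim state transitions do here, avoids recomputing unchanged posteriors.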
HUANG Hou-Hua , LIU Jia-Xiang , SHI Xiao-Mu
2023, 34(8):3853-3869. DOI: 10.13328/j.cnki.jos.006595
Abstract:ARM has developed an M-Profile vector extension for the ARMv8.1-M microprocessor architecture and named it ARM Helium, declaring that it can increase the machine learning performance of ARM Cortex-M processors by up to 15 times. As the Internet of Things develops rapidly, the correct execution of microprocessors becomes increasingly important. In addition, the official instruction set manual provides the basis for developing chip simulators and on-chip applications, and it is thus the fundamental guarantee of program correctness. This study formalizes the semantic correctness of the vectorized machine learning instructions in the official manual of the ARMv8.1-M architecture by using K Framework. Furthermore, the study automatically extracts the pseudocode describing the operation of the vectorized machine learning instructions from the manual and then formalizes it as semantic rules. With the executable framework provided by K Framework, the correctness of the machine learning instructions with respect to arithmetic operations is verified.
Lü Hong-Run , LI Qing , SHEN Geng-Biao , ZHOU Jian-Er , JIANG Yong , LI Wei-Chao , LIU Kai , QI Zhu-Yun
2023, 34(8):3870-3890. DOI: 10.13328/j.cnki.jos.006635
Abstract:Network measurement is the basis of scenarios including network performance monitoring, traffic management, and fault diagnosis, and in-band network telemetry (INT) has become the focus of network measurement research due to its timeliness, accuracy, and scalability. With the emergence and development of programmable data planes, many practical INT solutions have been proposed thanks to their rich information feedback and flexible function deployment. First, this study analyzes the principles and deployment challenges of the typical INT solutions, INT and AM-PM. Second, regarding the optimization and extension of INT, it examines the characteristics of optimization mechanisms from the aspects of the data collection process and multi-tasking, as well as the feasibility of extending the technology to wireless networks, optical networks, and hybrid networks. Third, in view of the applications of INT in typical scenarios, the characteristics of these INT applications are comparatively investigated from the perspectives of in-network performance sensing, network-level telemetry systems, traffic scheduling, and fault diagnosis. Finally, a research summary of INT is made, and future research directions are predicted.
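The core of INT is that each switch on the path pushes telemetry metadata onto the packet, which the sink then parses hop by hop. A minimal parsing sketch is shown below; the four-field fixed layout is hypothetical and does not follow the P4.org INT wire format:

```python
import struct

# Hypothetical per-hop INT metadata layout (illustrative only):
#   switch_id (4B), ingress_ts (4B), egress_ts (4B), queue_depth (4B)
HOP_FMT = "!IIII"
HOP_LEN = struct.calcsize(HOP_FMT)

def parse_int_stack(payload):
    """Split an INT metadata stack into per-hop records with hop latency."""
    hops = []
    for off in range(0, len(payload), HOP_LEN):
        sw, ing, egr, qd = struct.unpack_from(HOP_FMT, payload, off)
        hops.append({"switch": sw, "latency": egr - ing, "queue_depth": qd})
    return hops
```

Per-hop latency and queue depth recovered this way are exactly the signals that in-network performance sensing and fault diagnosis applications consume.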
2023, 34(8):3891-3904. DOI: 10.13328/j.cnki.jos.006543
Abstract:The security of traditional cryptographic algorithms is based on the black-box attack model, in which the attacker can only obtain the inputs and outputs of the cryptographic algorithm, but not its internal details. In recent years, the white-box attack model has been proposed, under which attackers can not only obtain the inputs and outputs of a cryptographic algorithm but also directly observe or change its internal data. To ensure the security of existing cryptographic algorithms in the white-box attack environment, redesigning them with white-box cryptography techniques without changing their functionality is called the white-box implementation of existing cryptographic algorithms. Studying the design and analysis of white-box implementation schemes is of great significance to solving digital rights management issues. In recent years, a kind of side-channel analysis method targeting white-box implementation schemes has emerged. It needs to know only a few internal details of a white-box implementation scheme to extract the key, and it is therefore a practical threat to existing schemes; analyzing existing schemes with it is thus of great practical significance to ensuring their security. The typical representative of this kind of method is differential computation analysis (DCA), which is based on the principle of differential power analysis. This study analyzes the Bai-Wu white-box SM4 scheme with DCA. Based on results on the statistical characteristics of n-order uniformly random invertible matrices over GF(2), an improved DCA (IDCA) is proposed, which significantly improves the analysis efficiency while keeping the success rate almost unchanged. The results also show that the Bai-Wu white-box SM4 scheme cannot guarantee security in the face of DCA, and it therefore must be further improved to meet the security requirements of practical scenarios.
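The core of DCA is to correlate a key-dependent prediction of an intermediate bit with recorded software execution traces, ranking the key guess whose prediction best matches some trace position. A minimal sketch follows; it uses a generic 8-bit S-box lookup as the selection function, which is a simplification and not the SM4 round function analyzed in the paper:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def dca_rank_keys(plaintexts, traces, sbox, bit=0):
    """Score each key guess by the best |correlation| between the
    predicted S-box output bit and any sample position in the traces."""
    scores = {}
    for guess in range(256):
        predicted = [(sbox[p ^ guess] >> bit) & 1 for p in plaintexts]
        scores[guess] = max(
            abs(pearson(predicted, [t[i] for t in traces]))
            for i in range(len(traces[0])))
    return max(scores, key=scores.get), scores
```

Against an unprotected implementation, the correct guess correlates (near) perfectly at the trace position where the real intermediate bit leaks, while wrong guesses decorrelate; IDCA-style improvements aim to reach the same decision with fewer computations.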
WANG Hao-Chang , ZHOU Chen-Lian , Marius Gabriel PETRESCU
2023, 34(8):3905-3923. DOI: 10.13328/j.cnki.jos.006645
Abstract:Event extraction aims to automatically extract event information in which users are interested from unstructured natural language texts and express it in a structured form. Event extraction is an important direction in natural language processing and understanding and has high application value in fields such as government management of public affairs, financial business, and biomedicine. According to the degree of dependence on manually labeled data, current deep learning-based event extraction methods are mainly divided into two categories: supervised learning and distantly-supervised learning. This study provides a comprehensive overview of current deep learning-based event extraction techniques. Focusing on supervised methods based on CNN, RNN, GAN, and GCN, as well as distantly-supervised methods, it systematically summarizes the research of recent years. Additionally, the performance of different deep learning models is compared and analyzed in detail. Finally, the challenges facing event extraction are analyzed, and research trends are forecasted.
CHEN Zhong-Qing , LI Dan-Dan , SHAN De-Sheng , QIAN Ye-Kui , XIE Kun , HUANG Xiao-Hong , CONG Qun
2023, 34(8):3924-3937. DOI: 10.13328/j.cnki.jos.006541
Abstract:The asymmetric flows generated by widely deployed address translation technology bring challenges to the design of load balancing systems. To solve the problem that software load balancers underuse the capabilities of multi-core processors and network card hardware, an asymmetric flow load balancing method based on flow characteristics is proposed. First, a packet dispatching algorithm that dispatches packets to the expected CPU core via hardware is proposed. Then, an elephant flow detection algorithm is constructed by analyzing the temporal and spatial characteristics of packet sequences. Finally, based on the detection results, a load balancing offloading method is proposed. The experimental results show that the proposed method can correctly handle asymmetric flows while increasing the average throughput by 14.5%.
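A threshold-based elephant flow detector of the kind mentioned above can be sketched as follows; the byte threshold and window length are illustrative and not the values used in the paper:

```python
from collections import defaultdict

class ElephantDetector:
    """Flag a flow as an elephant once its byte count inside a time
    window exceeds a threshold (thresholds here are illustrative)."""

    def __init__(self, byte_threshold=1_000_000, window=1.0):
        self.byte_threshold = byte_threshold
        self.window = window
        # flow_id -> [window_start_time, bytes_seen_in_window]
        self.flows = defaultdict(lambda: [0.0, 0])

    def observe(self, flow_id, pkt_len, now):
        start, nbytes = self.flows[flow_id]
        if now - start > self.window:  # window expired: reset the counter
            start, nbytes = now, 0
        nbytes += pkt_len
        self.flows[flow_id] = [start, nbytes]
        return nbytes >= self.byte_threshold  # True -> offload this flow
```

Once a flow is flagged, the load balancer can install a hardware rule for it so that subsequent packets of the elephant flow bypass the software path, which is the offloading step the abstract describes.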