• Issue 4, 2025 Table of Contents
    • Compositional Verification for Requirements Consistency of Complex Embedded Systems

      2025, 36(4):1413-1434. DOI: 10.13328/j.cnki.jos.007223 CSTR: 32375.14.jos.007223


      Abstract: Formal methods have made significant strides in the field of requirements consistency verification. However, as the complexity of embedded system requirements continues to increase, verifying requirements consistency faces the challenge of an excessively large state space. To effectively reduce the verification state space while accounting for the strong dependencies among devices in embedded system requirements, this study proposes a compositional verification method for ensuring the consistency of requirements in complex embedded systems. The method is based on requirement decomposition and the identification of dependencies among requirements. By leveraging these dependencies, it assembles verification subsystems, enabling the compositional verification of complex embedded system requirements and facilitating the early identification of inconsistencies. Specifically, the problem frames approach is employed for requirement modeling and decomposition, while a domain-specific device knowledge base is used to model the physical characteristics of devices. During the assembly of verification subsystems, models of expected software behavior are generated and dynamically integrated with physical device models. Finally, the feasibility and effectiveness of the method are validated through a case study of an airborne reconnaissance control system; five case evaluations demonstrate a significant reduction in the verification state space. The method thus provides a practical solution for verifying the requirements of complex embedded systems.
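
      To make the subsystem-assembly step concrete, here is a minimal sketch that groups requirements sharing devices into separate verification subsystems, each of which could then be model-checked on its own. The requirement names, device sets, and grouping rule are invented for illustration and are not the paper's actual algorithm.

      ```python
      # Hypothetical sketch: cluster requirements into verification subsystems
      # by shared devices, so each subsystem is checked separately instead of
      # exploring one monolithic state space. All names here are invented.
      def assemble_subsystems(reqs):
          groups = []  # each group: [set of requirement ids, set of devices]
          for req, devs in reqs.items():
              hits = [g for g in groups if g[1] & devs]  # groups sharing a device
              merged = [{req}.union(*(g[0] for g in hits)),
                        set(devs).union(*(g[1] for g in hits))]
              groups = [g for g in groups if g not in hits] + [merged]
          return groups

      requirements = {
          "R1": {"camera", "gimbal"},
          "R2": {"gimbal"},
          "R3": {"radio"},
          "R4": {"radio", "antenna"},
      }

      for req_ids, devices in assemble_subsystems(requirements):
          print(sorted(req_ids), "-> verify against", sorted(devices))
      ```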

    • Detecting Incompatible Third-party Library APIs in Python Based on Static Analysis

      2025, 36(4):1435-1460. DOI: 10.13328/j.cnki.jos.007224 CSTR: 32375.14.jos.007224


      Abstract: The rich Python development ecosystem provides a wealth of third-party libraries, significantly boosting developers’ efficiency and software quality. Third-party library developers encapsulate underlying code so that upper-layer application developers can accomplish tasks quickly by calling the relevant APIs. However, third-party library APIs are not constant: owing to bug fixes, refactoring, and feature additions, these libraries undergo continuous updates, and some APIs exhibit incompatible changes afterward, leading to abnormal termination or inconsistent results in upper-layer applications. API compatibility of Python third-party libraries has therefore become a pressing issue. Existing studies on this problem have not fully classified the causes of incompatibility and thus cannot provide fine-grained diagnoses. This study conducts an empirical study on the symptoms and causes of API compatibility issues in Python third-party libraries and proposes a targeted static detection method. Initially, the study gathers 108 pairs of incompatible API versions by combining version update logs and regression tests across 6 version pairs of the flask and pandas libraries. Subsequently, an empirical study is conducted on the collected data, summarizing the symptoms and causes of compatibility issues. Finally, the study proposes a static analysis-based detection method for incompatible Python APIs that reports syntax-level causes of incompatibility. Experimental evaluations on 12 version pairs of 4 popular Python third-party libraries show that the proposed method performs well in terms of effectiveness, generalization, time and memory overhead, and usefulness.
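
      As a rough illustration of syntax-level incompatibility detection (not the paper's tool), the sketch below parses two versions of a module with Python's ast module and reports removed APIs and changed parameter lists; the example sources and the change taxonomy are invented.

      ```python
      # Minimal, hypothetical sketch of signature-level incompatibility
      # detection between two versions of a module's source code.
      import ast

      def collect_signatures(source: str) -> dict:
          """Map every function name in the source to its ordered parameter list."""
          sigs = {}
          for node in ast.walk(ast.parse(source)):
              if isinstance(node, ast.FunctionDef):
                  sigs[node.name] = [a.arg for a in node.args.args]
          return sigs

      def diff_apis(old_src: str, new_src: str) -> list:
          """Report removed APIs and parameter-list changes (syntax-level causes)."""
          old, new = collect_signatures(old_src), collect_signatures(new_src)
          issues = []
          for name, params in old.items():
              if name not in new:
                  issues.append((name, "API removed"))
              elif new[name] != params:
                  issues.append((name, f"parameters changed: {params} -> {new[name]}"))
          return issues

      old_version = "def read_csv(path, sep=','): ...\ndef to_json(obj): ..."
      new_version = "def read_csv(filepath, sep=','): ..."  # param renamed, to_json gone

      for api, cause in diff_apis(old_version, new_version):
          print(f"{api}: {cause}")
      ```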

    • Survey on Security of Deep Code Models

      2025, 36(4):1461-1488. DOI: 10.13328/j.cnki.jos.007254 CSTR: 32375.14.jos.007254


      Abstract: With the significant success of deep learning in fields such as computer vision and natural language processing, software engineering researchers have begun to integrate it into software engineering tasks. Existing research indicates that deep learning exhibits advantages in various code-related tasks, such as code retrieval and code summarization, that traditional methods and classical machine learning cannot match. Deep learning models trained for code-related tasks are referred to as deep code models. However, like natural language processing and image processing models, deep code models face numerous security challenges due to the vulnerability and inexplicability of neural networks, and their security has become a research focus in software engineering. In recent years, researchers have proposed numerous attack and defense methods for deep code models. Nevertheless, a systematic review of research on deep code model security is still lacking, which hinders newcomers from quickly gaining a thorough understanding of this field. To provide a comprehensive overview of the current research, challenges, and latest findings, this study collects 32 relevant papers and categorizes existing results into two main classes: backdoor attack and defense techniques, and adversarial attack and defense techniques. The collected papers are systematically analyzed and summarized along these two categories. Subsequently, the study outlines commonly used experimental datasets and evaluation metrics in this field. Finally, it analyzes key challenges and suggests feasible future research directions, aiming to provide valuable guidance for further advancing the security of deep code models.
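
      One of the adversarial attack surfaces such surveys cover is semantics-preserving code transformation. The sketch below shows the simplest such transformation, identifier renaming, implemented with Python's ast module; it is a generic illustration, not any surveyed attack's implementation.

      ```python
      # Toy illustration of a semantics-preserving perturbation that can flip
      # a deep code model's prediction: rename an identifier everywhere it
      # occurs as a variable or parameter. Requires Python 3.9+ (ast.unparse).
      import ast

      class Renamer(ast.NodeTransformer):
          def __init__(self, old: str, new: str):
              self.old, self.new = old, new
          def visit_Name(self, node):
              if node.id == self.old:
                  node.id = self.new
              return node
          def visit_arg(self, node):
              if node.arg == self.old:
                  node.arg = self.new
              return node

      code = "def transfer(amount):\n    balance = amount * 2\n    return balance"
      # Same program semantics, different surface form seen by the model:
      print(ast.unparse(Renamer("balance", "tmp0").visit(ast.parse(code))))
      ```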

    • Survey on Application and Development of Large Language Models in Software Defect Detection and Repair

      2025, 36(4):1489-1529. DOI: 10.13328/j.cnki.jos.007268 CSTR: 32375.14.jos.007268


      Abstract: With the advance of informatization, the development and iteration of diverse applications and features inevitably introduce software defects, posing significant threats to program reliability and security. Detecting and repairing software defects is therefore essential yet onerous for developers maintaining software quality. Accordingly, software engineering researchers have proposed numerous techniques over the past decades to help developers address defect-related issues. However, these techniques face serious challenges and have made little progress toward industrial adoption. Large language models (LLMs), such as the code-oriented Codex and the widely known ChatGPT, are trained on massive datasets and can capture complex patterns and structures in code, process extensive contextual information, and flexibly adapt to various tasks. Their superior performance has attracted considerable attention from researchers, and in many software engineering tasks LLM-based techniques show significant advantages in addressing key challenges previously faced in different domains. Consequently, this study analyzes and explores three defect detection domains where LLM-based techniques have been widely adopted: deep learning library defect detection, GUI automated testing, and automated test case generation, along with one mature software defect repair domain: automated program repair (APR). The study delves into the progress of these domains and provides an in-depth discussion of their characteristics and challenges. Lastly, based on an analysis of existing research, it summarizes the key challenges faced by these domains and techniques and offers insights for future research.

    • Survey on Deep Learning Applications in Information Retrieval-based Bug Localization

      2025, 36(4):1530-1556. DOI: 10.13328/j.cnki.jos.007288 CSTR: 32375.14.jos.007288


      Abstract: Automatic bug localization technologies can significantly alleviate the burden of debugging and maintaining software for developers. As a widely studied automatic bug localization technique, information retrieval-based bug localization has yielded promising localization performance. In recent years, with the widespread adoption of deep learning, applying it to information retrieval-based bug localization has emerged as a research trend. This study systematically categorizes and summarizes 52 recent studies that introduce deep learning into information retrieval-based bug localization. Firstly, a summary of the datasets and evaluation metrics used in this line of work is provided. Then, the localization performance of these techniques is analyzed from the perspectives of localization granularity and transferability. Subsequently, the information encoding and feature extraction methods employed in related studies are summarized. Finally, this study summarizes and compares the most advanced bug localization methods and provides insights into future directions for applying deep learning to information retrieval-based bug localization.
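
      For orientation, the sketch below shows the classical IR baseline that the surveyed deep methods build on: ranking source files by lexical similarity to a bug report with TF-IDF. The corpus and file names are invented, and deep approaches would replace the TF-IDF vectors with learned encodings of reports and code.

      ```python
      # Minimal IR-based bug localization baseline: TF-IDF + cosine similarity.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      bug_report = "NullPointerException when saving user profile with empty avatar"
      source_files = {
          "UserProfileService.java": "save user profile avatar validate null check",
          "ImageResizer.java": "resize image thumbnail scale bilinear",
          "SessionManager.java": "login session token expiry refresh",
      }

      # Vectorize the report together with the (tokenized) source files.
      matrix = TfidfVectorizer().fit_transform([bug_report] + list(source_files.values()))
      scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

      # Rank files by similarity to the report.
      for name, score in sorted(zip(source_files, scores), key=lambda p: -p[1]):
          print(f"{score:.3f}  {name}")
      ```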

    • Slot Dependency Modeling for Cross-domain Slot Filling

      2025, 36(4):1557-1569. DOI: 10.13328/j.cnki.jos.007189 CSTR: 32375.14.jos.007189


      Abstract: Slot filling is a crucial component of task-oriented dialogue systems, serving downstream tasks by identifying specific slot entities in utterances. However, in a given domain it requires a large amount of labeled data, which is costly to collect. In this context, cross-domain slot filling emerges and efficiently addresses data scarcity through transfer learning. However, existing methods overlook the dependencies between slot types in utterances, leading to suboptimal performance when existing models transfer to new domains. To address this issue, a cross-domain slot filling method based on slot dependency modeling is proposed in this study. Leveraging prompt learning with generative pre-trained models, a prompt template integrating slot dependency information is designed, establishing implicit dependency relationships between different slot types and fully exploiting the pre-trained model's ability to predict slot entities. Furthermore, to enhance the semantic dependencies among slot types, slot entities, and utterance texts, a discourse-filling subtask is introduced to strengthen the inherent connections between utterances and slot entities through reverse filling. Transfer experiments across multiple domains demonstrate significant performance improvements by the proposed model in zero-shot and few-shot settings. Additionally, a detailed analysis of the model's main structures and ablation experiments further validate the necessity of each part of the model.
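
      A minimal sketch of what a dependency-aware prompt template might look like is shown below; the template wording, slot ontology, and dependency table are hypothetical, not the paper's actual design.

      ```python
      # Hypothetical slot ontology: some slots are easier to fill once their
      # "anchor" slots are known, and the template states that dependency.
      SLOT_DEPENDENCIES = {
          "arrival_city": ["departure_city"],
          "return_date": ["departure_date"],
      }

      def build_prompt(utterance: str, slot: str) -> str:
          deps = SLOT_DEPENDENCIES.get(slot, [])
          dep_hint = (f" Consider the already identified slots"
                      f" [{', '.join(deps)}] first." if deps else "")
          return (f'Utterance: "{utterance}"\n'
                  f"Question: what is the value of slot [{slot}]?{dep_hint}\n"
                  f"Answer:")

      print(build_prompt("book a flight from Beijing to Shanghai on May 3",
                         "arrival_city"))
      ```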

    • Survey on Task Scheduling of Deep Learning Training Based on Performance Modeling

      2025, 36(4):1570-1589. DOI: 10.13328/j.cnki.jos.007202 CSTR: 32375.14.jos.007202


      Abstract: In recent years, research achievements in deep learning have found widespread applications globally. To enhance the training efficiency of large-scale deep learning models, industry practice often involves constructing GPU clusters and configuring efficient task schedulers. However, deep learning training tasks exhibit complex performance characteristics such as performance heterogeneity and sensitivity to placement topology. Scheduling without considering performance can lead to low resource utilization and poor training efficiency. In response to this challenge, a great number of performance-modeling-based schedulers for deep learning training tasks have emerged. By constructing accurate performance models, these schedulers capture the intricate performance characteristics of tasks and, based on this understanding, design better-optimized scheduling algorithms, thereby forming more efficient scheduling solutions. This study first provides a categorized review of the performance modeling methods employed by current schedulers from a modeling design perspective. Subsequently, based on how schedulers exploit performance models to optimize scheduling, a systematic analysis of existing task scheduling work is presented. Finally, the study outlines prospective research directions for performance modeling and scheduling.
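
      To illustrate how a performance model can drive scheduling decisions, the sketch below uses an invented sub-linear scaling model to pick the smallest GPU allocation that meets a job's target throughput; the model form and all numbers are assumptions, not any surveyed scheduler's design.

      ```python
      # Hypothetical performance-model-driven placement: predict throughput per
      # candidate GPU count, then choose the cheapest sufficient allocation.
      def predicted_throughput(base_tput, n_gpus, comm_overhead=0.1):
          # Sub-linear scaling: each added GPU contributes less due to
          # communication cost (invented diminishing-returns model).
          return base_tput * n_gpus / (1 + comm_overhead * (n_gpus - 1))

      def smallest_sufficient_allocation(base_tput, target_tput, candidates):
          for n in sorted(candidates):
              if predicted_throughput(base_tput, n) >= target_tput:
                  return n
          return max(candidates)  # fall back to the largest allocation

      for job, base, target in [("resnet", 95.0, 300.0), ("bert", 40.0, 100.0)]:
          n = smallest_sufficient_allocation(base, target, [1, 2, 4, 8])
          print(f"{job}: {n} GPUs -> {predicted_throughput(base, n):.1f} samples/s")
      ```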

    • Task Knowledge Fusion for Multimodal Knowledge Graph Completion

      2025, 36(4):1590-1603. DOI: 10.13328/j.cnki.jos.007213 CSTR: 32375.14.jos.007213


      Abstract: Knowledge graph completion aims to reveal missing fact triples (head entity, relation, tail entity) based on existing ones. Existing research primarily focuses on utilizing the structural information within the knowledge graph but overlooks that other modal information contained in the graph may also be helpful for completion. In addition, since task-specific knowledge is typically not integrated into general pre-trained models, incorporating task-related knowledge into modal information extraction becomes crucial. Moreover, given that different modal features contribute differently to knowledge graph completion, effectively preserving useful multimodal information poses a significant challenge. To address these issues, this study proposes a multimodal knowledge graph completion method that incorporates task knowledge. It uses a multimodal encoder fine-tuned for the current task to acquire entity vector representations across different modalities. Subsequently, a modal fusion-filtering module based on recurrent neural networks eliminates task-irrelevant multimodal features. Finally, the study uses a simple isomorphic graph network to represent and update all features, thereby effectively accomplishing multimodal knowledge graph completion. Experimental results demonstrate the method's effectiveness in extracting information from different modalities, and show that it enhances entity representation capability through additional multimodal filtering and fusion, consequently improving the performance of multimodal knowledge graph completion.
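
      The recurrent fusion-filtering idea can be sketched roughly as follows: per-modality entity vectors are fed through a GRU whose gates can down-weight unhelpful modal features. Dimensions, inputs, and the exact gating design below are assumptions for illustration, not the paper's module.

      ```python
      import torch
      import torch.nn as nn

      dim = 64
      structural = torch.randn(1, dim)  # embedding from graph structure
      visual = torch.randn(1, dim)      # embedding from an entity image
      textual = torch.randn(1, dim)     # embedding from an entity description

      # Treat the modalities as a short sequence; the GRU's gates decide how
      # much of each modality survives into the fused entity representation.
      gru = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
      modalities = torch.stack([structural, visual, textual], dim=1)  # (1, 3, dim)
      _, fused = gru(modalities)

      print(fused.squeeze(0).shape)  # one filtered entity vector: torch.Size([1, 64])
      ```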

    • Hybrid Data Augmentation Framework Based on Controllable Explanation

      2025, 36(4):1604-1619. DOI: 10.13328/j.cnki.jos.007215 CSTR: 32375.14.jos.007215


      Abstract: Pre-trained language models (PLMs) have demonstrated excellent performance on numerous natural language understanding (NLU) tasks. However, they generally suffer from shortcut learning: they learn spurious correlations between non-robust features and labels, resulting in poor generalization in out-of-distribution (OOD) test scenarios. Recently, the outstanding performance of generative large language models (LLMs) on understanding tasks has attracted widespread attention, but the extent to which they are affected by shortcut learning has not been fully studied. In this paper, the shortcut learning effect of generative LLMs on three NLU tasks is investigated for the first time, using the LLaMA series and FLAN-T5 models as representatives. The results show that the shortcut learning problem persists in generative LLMs. As a mitigation strategy, a hybrid data augmentation framework based on controllable explanations is therefore proposed. The framework is data-centric: it constructs a small-scale mixed dataset composed of model-generated controllable explanation data and a portion of the original prompting data for model fine-tuning. Experimental results on three representative NLU tasks show that the framework effectively mitigates shortcut learning and significantly improves the robustness and generalization of the model in OOD test scenarios, without sacrificing, and sometimes even improving, performance in in-distribution test scenarios. The code is available at https://github.com/Mint9996/HEDA.

    • Self-training Approach for Low-resource Relation Extraction

      2025, 36(4):1620-1636. DOI: 10.13328/j.cnki.jos.007219 CSTR: 32375.14.jos.007219


      Abstract: Self-training, a common strategy for tackling annotated-data scarcity, typically treats high-confidence auto-annotated data produced by a teacher model as reliable. However, in low-resource relation extraction (RE) scenarios, this approach is hindered by the teacher model's limited generalization capacity and by easily confusable relational categories. Consequently, efficiently identifying reliable data among the automatically labeled data becomes challenging, and a large amount of low-confidence noisy data is generated. Therefore, this study proposes a self-training approach for low-resource relation extraction (ST-LRE). The approach aids the teacher model in selecting reliable data based on the predictions of paraphrases, and extracts reliable ambiguous data from the low-confidence data in a partially-labeled mode. Considering the candidate categories of ambiguous data, the study proposes a negative training approach based on sets of negative labels. Finally, a unified approach supporting both positive and negative training is proposed for the integrated training of reliable and ambiguous data. In experiments, ST-LRE consistently yields significant improvements in low-resource scenarios on two widely used RE datasets, SemEval-2010 Task 8 and Re-TACRED.
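
      A minimal sketch of the negative-training objective on ambiguous data is given below: instead of pushing probability toward one possibly wrong label, the loss pushes it away from labels known to be wrong. The loss form is the standard negative-learning objective; how ST-LRE actually selects the negative-label set is simplified away here.

      ```python
      import torch
      import torch.nn.functional as F

      def negative_training_loss(logits: torch.Tensor, negative_labels: list) -> torch.Tensor:
          """logits: (num_classes,) for one ambiguous example; negative_labels:
          relation classes the example is known NOT to belong to."""
          probs = F.softmax(logits, dim=-1)
          # Maximize log(1 - p_c) for every excluded class c.
          return -torch.log(1.0 - probs[negative_labels] + 1e-8).sum()

      logits = torch.tensor([2.0, 0.5, -1.0, 0.1])
      print(negative_training_loss(logits, negative_labels=[0, 2]))
      ```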

    • Survey on Machine Unlearning

      2025, 36(4):1637-1664. DOI: 10.13328/j.cnki.jos.007237 CSTR: 32375.14.jos.007237


      Abstract: Machine learning has become increasingly prevalent in daily life, with various methods proposed to make predictions from historical data, making people's lives more convenient. However, machine learning carries a significant challenge: privacy leakage. Merely deleting a user's data from the training set is not sufficient to avoid privacy leakage, as the trained model may still harbor that information. The conventional remedy is to retrain the model on a new training set that excludes the user's data, but this can be costly, prompting the search for more efficient ways to "unlearn" specific data while yielding a model comparable to a retrained one. This study summarizes the current literature on this topic, categorizing existing unlearning methods into three groups: training-based, editing-based, and generation-based methods. Additionally, various metrics for assessing unlearning methods are introduced. The study also evaluates current unlearning methods in deep learning and concludes with future research directions in this field.
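
      As a toy example of the training-based family, the sketch below unlearns a forget set by gradient ascent and then repairs on retained data; the model and data are synthetic, and real methods add safeguards so utility on retained data is provably preserved.

      ```python
      import torch
      import torch.nn as nn

      model = nn.Linear(10, 2)
      opt = torch.optim.SGD(model.parameters(), lr=0.01)
      loss_fn = nn.CrossEntropyLoss()

      forget_x, forget_y = torch.randn(8, 10), torch.randint(0, 2, (8,))
      retain_x, retain_y = torch.randn(32, 10), torch.randint(0, 2, (32,))

      for _ in range(5):
          # Ascent on the forget set: negate the loss so its examples are unlearned.
          opt.zero_grad()
          (-loss_fn(model(forget_x), forget_y)).backward()
          opt.step()

          # Repair step on retained data to keep overall utility.
          opt.zero_grad()
          loss_fn(model(retain_x), retain_y).backward()
          opt.step()
      ```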

    • Survey on Multimodal Information Extraction Research

      2025, 36(4):1665-1691. DOI: 10.13328/j.cnki.jos.007245 CSTR: 32375.14.jos.007245


      Abstract: Multimodal information extraction is the task of extracting structured knowledge from unstructured or semi-structured multimodal data (such as text and images). It includes multimodal named entity recognition, multimodal relation extraction, and multimodal event extraction. This study analyzes multimodal information extraction tasks and identifies the component common to the above three subtasks, i.e., the multimodal representation and fusion module. Moreover, it surveys the commonly used datasets and mainstream research methods for the three subtasks. Finally, it outlines research trends in multimodal information extraction and analyzes the remaining problems and challenges in this field to provide a reference for future research.

    • Survey on Multi-view Stereo Based on Deep Learning

      2025, 36(4):1692-1714. DOI: 10.13328/j.cnki.jos.007248 CSTR: 32375.14.jos.007248


      Abstract: Multi-view stereo (MVS) is widely used in fields such as autonomous driving, augmented reality, heritage conservation, and biomedicine. To address the limitations of traditional MVS methods, such as poor handling of low-texture regions and incomplete reconstruction, deep learning-based MVS methods have been proposed. This study reviews the pioneering work and current development of deep learning-based MVS methods. In particular, it focuses on methods that improve individual functional components and those that improve the overall architecture, and analyzes representative models. Meanwhile, the study describes widely used datasets and evaluation metrics and compares the test performance of existing methods on these datasets. Finally, promising research directions for MVS are presented.

    • Survey on Object Goal Navigation for Embodied AI

      2025, 36(4):1715-1757. DOI: 10.13328/j.cnki.jos.007250 CSTR: 32375.14.jos.007250


      Abstract: With the continuous development of computer vision and artificial intelligence (AI) in recent years, embodied AI has received widespread attention from academia and industry worldwide. Embodied AI emphasizes that an agent should actively obtain real feedback from the physical world by interacting with the environment in a situated way, becoming more intelligent by learning from that feedback. As a concrete embodied AI task, object goal navigation requires an agent to search for and navigate to a specified object goal (e.g., find a sink) in a previously unknown, complex, and semantically rich scenario. Object goal navigation has great application potential in smart assistants that support daily human activities, and it serves as a fundamental, antecedent task for other interaction-based embodied AI research. This study systematically classifies current research on object goal navigation. Firstly, background knowledge on environmental representation and autonomous visual exploration is introduced, and existing object goal navigation methods are classified and analyzed from three perspectives. Secondly, two categories of higher-level object rearrangement tasks are introduced, along with datasets for realistic indoor environment simulation, evaluation metrics, and a generic training paradigm for navigation strategies. Finally, the performance of existing object goal navigation strategies is compared and analyzed on different datasets, the challenges in this field are summarized, and development trends are predicted.

    • Neuromorphic Computing: From Spiking Neural Network to Edge Deployment

      2025, 36(4):1758-1795. DOI: 10.13328/j.cnki.jos.007298 CSTR: 32375.14.jos.007298


      Abstract: Inspired by the biological nervous system, the concept of neuromorphic computing was introduced in the 1980s. It aims to mimic the structure and function of the biological brain to achieve more efficient and biologically plausible computation. As a representative model of neuromorphic computing, spiking neural networks (SNNs) have been widely employed in edge intelligence tasks with strict resource constraints, owing to their spike sparsity, event-driven operation, biological interpretability, and hardware compatibility. This study surveys the edge deployment of SNNs. First, starting from the principles of the SNN model itself, it discusses the energy-efficient computation of SNNs and their great potential for edge deployment. Then, the common hardware implementation toolchains for SNNs are introduced, and a detailed summary and analysis of SNN deployment on various types of neuromorphic hardware platforms is provided. Finally, since hardware fault behavior has become an unavoidable issue in current research, an overview of fault and fault-tolerance research in the edge deployment of SNNs is also presented. This study offers a comprehensive and systematic summary of recent advances in neuromorphic computing, ranging from software model principles to hardware platform implementation. Additionally, it analyzes the difficulties and challenges in the edge deployment of SNNs and points out possible solutions.
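
      The source of the energy efficiency discussed here is easiest to see in a single leaky integrate-and-fire (LIF) neuron, sketched below: computation is driven by sparse spike events rather than dense activations. The constants are arbitrary illustrative values.

      ```python
      import numpy as np

      def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
          """Return the spike times of one leaky integrate-and-fire neuron."""
          v, spikes = 0.0, []
          for t, i_t in enumerate(input_current):
              v += dt / tau * (-v + i_t)  # leaky integration of membrane potential
              if v >= v_thresh:           # threshold crossing: emit a spike, reset
                  spikes.append(t)
                  v = v_reset
          return spikes

      rng = np.random.default_rng(0)
      current = rng.uniform(0.5, 3.0, size=100)
      # Out of 100 time steps, only a handful produce events to be processed.
      print("spike times:", lif_neuron(current))
      ```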

    • Efficient Algorithms for Maximal Defective Biclique Enumeration on Bipartite Graphs

      2025, 36(4):1796-1810. DOI: 10.13328/j.cnki.jos.007270 CSTR: 32375.14.jos.007270


      Abstract: Maximal biclique enumeration is a fundamental problem in bipartite graph analysis. However, the traditional biclique model, which requires the subgraph to be a complete bipartite graph, is often overly constrained in practice, so looser biclique models are needed as substitutes. In this study, a new relaxed biclique model, the k-defective biclique, is proposed. This model allows a bipartite subgraph to miss up to k edges relative to a complete biclique. Since enumerating maximal k-defective bicliques is NP-hard, designing efficient enumeration algorithms is challenging. To solve this problem, an algorithm based on symmetric set enumeration is proposed, whose idea is to control the number of sub-branches through a constraint on the number of missing edges in the k-defective bicliques. To further improve computational efficiency, a series of optimization techniques is also proposed, including an ordering-based subgraph partitioning method, an upper-bound-based pruning method, a linear-time updating technique, and a branching optimization. In addition, the time complexity analysis shows that the proposed optimization algorithms break through the traditional worst-case bound. Finally, extensive experimental results show that, for most parameter settings, the proposed method is over a hundred times faster than the traditional branch-and-bound approach.
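
      The model definition can be pinned down with a brute-force reference implementation on a toy bipartite graph, shown below; real algorithms prune this exponential search, and this sketch only encodes the definition and a naive maximality check.

      ```python
      # Brute-force reference for the k-defective biclique definition: a pair
      # (L', R') of vertex sets missing at most k of the |L'|*|R'| cross edges.
      from itertools import combinations

      edges = {(0, "a"), (0, "b"), (1, "a"), (1, "c"), (2, "b"), (2, "c")}
      L, R, k = [0, 1, 2], ["a", "b", "c"], 1

      def missing(ls, rs):
          """Number of absent cross edges between vertex sets ls and rs."""
          return sum((u, v) not in edges for u in ls for v in rs)

      def maximal_k_defective_bicliques():
          found = [(set(ls), set(rs))
                   for i in range(1, len(L) + 1) for ls in combinations(L, i)
                   for j in range(1, len(R) + 1) for rs in combinations(R, j)
                   if missing(ls, rs) <= k]
          # Keep only pairs not contained in any other k-defective biclique.
          return [(ls, rs) for ls, rs in found
                  if not any((ls, rs) != (l2, r2) and ls <= l2 and rs <= r2
                             for l2, r2 in found)]

      for ls, rs in maximal_k_defective_bicliques():
          print(sorted(ls), sorted(rs))
      ```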

    • Survey on Complex Spatio-temporal Data Mining Methods Based on Graph Neural Network

      2025, 36(4):1811-1843. DOI: 10.13328/j.cnki.jos.007275 CSTR: 32375.14.jos.007275


      Abstract: With the development of sensing technology, large volumes of spatio-temporal data have emerged in many fields. The spatio-temporal graph is a major type of spatio-temporal data, with complex structure, spatio-temporal features, and relationships. How to mine key patterns from complex spatio-temporal graph data for various downstream tasks has become the central problem of complex spatio-temporal data mining. The increasingly mature temporal graph neural networks provide powerful tools for this research field, while the emerging spatio-temporal large models offer a new research perspective beyond existing spatio-temporal graph neural network methods. However, most existing reviews in this field adopt relatively coarse classification frameworks, lack a comprehensive and in-depth treatment of complex data types (e.g., dynamic heterogeneous graphs and dynamic hypergraphs), and do not provide a detailed summary of the latest research progress on spatio-temporal large models. Therefore, this study divides graph neural network-based complex spatio-temporal data mining methods into spatio-temporal fusion architectures and spatio-temporal large models, covering both traditional and emerging perspectives. According to the specific complex data types, spatio-temporal fusion architectures are divided into those for dynamic graphs, dynamic heterogeneous graphs, and dynamic hypergraphs. Spatio-temporal large models are divided into time series models and graph models along the temporal and spatial dimensions, with the latest research on spatio-temporal graphs covered under graph-based large models. The core details of several key algorithms are introduced, and the pros and cons of different methods are compared. Finally, the application fields and commonly used datasets of these methods are listed, and possible future research directions are outlined.
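
      The "spatio-temporal fusion" pattern around which the survey is organized can be sketched minimally as a graph convolution per time step followed by a GRU over time; the adjacency, sizes, and data below are synthetic, and real architectures interleave these operators in many ways.

      ```python
      import torch
      import torch.nn as nn

      n_nodes, feat, steps = 5, 8, 12
      adj = torch.eye(n_nodes) + torch.rand(n_nodes, n_nodes).round()
      adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalized adjacency
      x = torch.randn(steps, n_nodes, feat)      # signal: (time, nodes, features)

      spatial = nn.Linear(feat, feat)            # weights of a one-layer GCN: A X W
      temporal = nn.GRU(input_size=feat, hidden_size=feat, batch_first=True)

      h = torch.relu(spatial(adj @ x))           # spatial mixing at every time step
      out, _ = temporal(h.transpose(0, 1))       # per-node aggregation over time
      print(out.shape)                           # torch.Size([5, 12, 8])
      ```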

    • Crowdsourcing Scheme Based on Blockchain and Decentralized Accountable Attribute-based Authentication

      2025, 36(4):1844-1858. DOI: 10.13328/j.cnki.jos.007208 CSTR: 32375.14.jos.007208


      Abstract: As a distributed approach to problem solving, crowdsourcing reduces costs and utilizes resources efficiently. While blockchain technology has been introduced to address the over-centralization of traditional crowdsourcing platforms, its transparency brings the risk of privacy leakage. Traditional anonymous authentication can hide a user's identity, but anonymity can be abused and makes worker selection more difficult. In this study, a decentralized accountable attribute-based authentication scheme is proposed and combined with blockchain to design a novel crowdsourcing scheme. Using decentralized attribute-based encryption and non-interactive zero-knowledge proofs, the scheme protects the privacy of users' identities while providing linkability and traceability, and a requester can devise access policies to select workers. In addition, the scheme improves system security by implementing the attribute authorization authorities and the tracking group with a threshold secret sharing technique. Experimental simulation and analysis demonstrate that the scheme meets practical requirements on time and storage overhead.
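
      The threshold secret sharing primitive mentioned here can be illustrated with a minimal Shamir (t, n) scheme over a prime field, sketched below; the parameters are toy values, and a deployment would use a vetted cryptographic library rather than this sketch.

      ```python
      # Minimal Shamir (t, n) threshold secret sharing over a prime field.
      import random

      P = 2**61 - 1  # a Mersenne prime used as the field modulus (toy choice)

      def share(secret, t, n):
          """Split `secret` into n shares, any t of which can reconstruct it."""
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation of the sharing polynomial at x = 0."""
          secret = 0
          for xi, yi in shares:
              num = den = 1
              for xj, _ in shares:
                  if xj != xi:
                      num = num * -xj % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = share(secret=123456789, t=3, n=5)
      print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
      ```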

    • Autonomous Driving Security of Intelligent Connected Vehicles: Threats, Attacks, and Defenses

      2025, 36(4):1859-1880. DOI: 10.13328/j.cnki.jos.007272 CSTR: 32375.14.jos.007272


      Abstract: Intelligent connected vehicles (ICVs) hold a significant strategic position in national development plans: they represent a key technology underpinning automotive industry innovation and a core element of national competitiveness. The culmination of ICV development is the realization of full autonomous driving capability, herein termed “autonomous vehicles”. The security of autonomous vehicles bears directly on public security, personal safety, and property. However, a comprehensive and methodologically rigorous investigation of this security landscape has been lacking, and a thorough examination of the threats facing autonomous vehicles can guide security hardening and foster wider adoption. By collating relevant research from both academia and industry, this study undertakes a methodical and comprehensive analysis of autonomous driving security. It first describes the architecture of autonomous vehicles and the accompanying security considerations. Subsequently, from a model-centric perspective, it delineates nine potential attack vectors across three domains: physical inputs, informational inputs, and the driving model itself, expounding each vector alongside its attack modalities and corresponding security mitigations. Finally, through a quantitative analysis of the research literature of the past seven years, the current landscape of autonomous vehicle security research is examined, and promising directions for future work are identified.

    • Review on Multi-sensor Data Fusion Research for Unmanned Aerial Vehicles

      2025, 36(4):1881-1905. DOI: 10.13328/j.cnki.jos.007273 CSTR: 32375.14.jos.007273


      Abstract: With the rapid development of related technologies, the sensors carried by unmanned aerial vehicles (UAVs) are becoming more precise and varied. This endows UAVs with strong sensing capabilities but also poses a major challenge for processing and analyzing multi-sensor data in UAV applications. Data fusion is the key technology for solving this problem: through detection, association, combination, and estimation, it fuses and exploits multi-sensor data to obtain accurate UAV state and target information in support of decision-making. This study reviews research on multi-sensor data fusion for UAVs. It introduces the components of UAV systems, reviews and classifies UAV multi-sensor data fusion methods, analyzes and compares the characteristics of the various methods, summarizes the applications of UAV multi-sensor data fusion in different fields, and finally discusses future development directions.
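
      The estimation step at the heart of many fusion pipelines can be illustrated with a one-dimensional Kalman update fusing two noisy altitude sensors, sketched below; all sensor values and variances are invented, and real UAV systems run multi-dimensional filters with motion models.

      ```python
      def kalman_update(x, p, z, r):
          """Fuse estimate (mean x, variance p) with measurement z of variance r."""
          k = p / (p + r)                      # Kalman gain
          return x + k * (z - x), (1 - k) * p

      x, p = 100.0, 25.0                       # prior altitude estimate and variance
      for z, r in [(102.3, 4.0),               # GPS altitude: moderately noisy
                   (101.1, 1.0)]:              # barometric altitude: less noisy
          x, p = kalman_update(x, p, z, r)

      print(f"fused altitude: {x:.2f} m (variance {p:.2f})")
      ```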



Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825, CN 11-2560/TP
  • Domestic price: CNY 70