• Online First

    • HSDiag: Variant Hitting Set Algorithm for Solving Diagnosis

      Online: June 11, 2025 DOI: 10.13328/j.cnki.jos.007397

      Abstract:In model-based diagnosis, the system description is first encoded, all minimal conflict sets are then obtained using a mature SAT solver, and finally the minimal hitting sets of these conflict sets are computed as candidate diagnoses for the device under diagnosis. However, this strategy is time-consuming, as it amounts to solving two NP-hard problems: computing the minimal conflict sets and computing their minimal hitting sets. This study re-encodes the description of the circuit system and proposes a novel variant hitting set algorithm, HSDiag, which computes diagnoses directly from the encoding. Compared with state-of-the-art diagnosis algorithms that first solve for conflict sets and then for hitting sets, efficiency improves by a factor of 5 to 100. As the number of circuit components increases, the number of encoding clauses grows linearly while the number of diagnoses grows exponentially. Since solving for all conflict sets of large-scale circuits (ISCAS-85) is impractical, the proposed HSDiag algorithm yields more than twice as many solutions as conflict-set-based diagnosis algorithms within the same cutoff time. In addition, this study proposes an equivalence class optimization strategy that further decomposes the conflict set using a newly proposed set splitting rule, even when the initial conflict set is inseparable. With the equivalence class optimization, the efficiency of HSDiag improves by more than 2 times on the standard Polybox and Fulladder circuits.
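      For reference, a minimal Python sketch of the classical pipeline step that HSDiag is designed to avoid (enumerating minimal hitting sets from an already-computed collection of minimal conflict sets) is given below. The brute-force enumeration, component names, and conflict sets are purely illustrative assumptions, not the paper's algorithm or data.

```python
from itertools import combinations

def minimal_hitting_sets(conflict_sets):
    """Enumerate all minimal hitting sets of a collection of conflict sets
    by increasing cardinality (brute force; suitable only for small instances)."""
    universe = sorted(set().union(*conflict_sets))
    results = []
    for size in range(1, len(universe) + 1):
        for candidate in combinations(universe, size):
            cand = set(candidate)
            # A hitting set intersects every conflict set.
            if all(cand & c for c in conflict_sets):
                # Minimality: no previously found (smaller) hitting set is a subset.
                if not any(h <= cand for h in results):
                    results.append(cand)
    return results

if __name__ == "__main__":
    # Illustrative conflict sets over components c1..c5 (not taken from the paper).
    conflicts = [{"c1", "c2", "c3"}, {"c2", "c4"}, {"c3", "c5"}]
    for hs in minimal_hitting_sets(conflicts):
        print(sorted(hs))
```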

    • Multi-view Framework for Inductive Knowledge Graph Completion with Differentiation Mechanism

      Online: June 11, 2025 DOI: 10.13328/j.cnki.jos.007401

      Abstract:Knowledge graph completion (KGC) models require inductive ability to generalize to new entities as the knowledge graph expands. However, current approaches understand entities only from a local perspective by aggregating neighboring information, failing to capture valuable interconnections between entities across different views. This study argues that global and sequential perspectives are essential for understanding entities beyond the local view by enabling interaction between disconnected and distant entity pairs. More importantly, it emphasizes that the aggregated information must be complementary across different views to avoid redundancy. Therefore, a multi-view framework with a differentiation mechanism is proposed for inductive KGC, aiming to learn complementary entity representations from various perspectives. Specifically, in addition to aggregating neighboring information to obtain the entity’s local representation through R-GCN, an attention-based differentiation mechanism is employed to aggregate complementary information from semantically related entities and entity-related paths, thus obtaining global and sequential representations of the entities. Finally, these representations are fused and used to score the triples. Experimental results demonstrate that the proposed framework consistently outperforms state-of-the-art approaches in the inductive setting. Moreover, it retains competitive performance in the transductive setting.
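      The sketch below illustrates, in a loose and simplified form, the general pattern the abstract describes: attention-weighted fusion of local, global, and sequential entity representations followed by triple scoring. The attention computation, the DistMult-style scoring function, and all vectors are illustrative assumptions, not the paper's actual R-GCN-based architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_views(local_vec, global_vec, seq_vec, query_vec):
    """Attention-weighted fusion of per-view entity representations.
    query_vec stands in for a learned attention query; all vectors are toy data."""
    views = np.stack([local_vec, global_vec, seq_vec])   # (3, d)
    weights = softmax(views @ query_vec)                  # one weight per view
    return weights @ views                                # (d,) fused representation

def score_triple(head, relation, tail):
    """DistMult-style triple score: <h, r, t> = sum(h * r * t)."""
    return float(np.sum(head * relation * tail))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    head = fuse_views(rng.normal(size=d), rng.normal(size=d),
                      rng.normal(size=d), rng.normal(size=d))
    tail = fuse_views(rng.normal(size=d), rng.normal(size=d),
                      rng.normal(size=d), rng.normal(size=d))
    relation = rng.normal(size=d)
    print(score_triple(head, relation, tail))
```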

    • Smart Contract Security Vulnerability Detection Based on Target-guided Symbolic Execution

      Online: June 11, 2025 DOI: 10.13328/j.cnki.jos.007396

      Abstract:Smart contracts are computer programs running on blockchain platforms; they extend the functionality of the blockchain and enable complex applications. However, potential security vulnerabilities in smart contracts can lead to significant financial losses. Symbolic execution-based security vulnerability detection methods offer advantages such as high accuracy and the ability to generate test cases that reproduce vulnerabilities. Nevertheless, as code size increases, symbolic execution faces challenges such as path explosion and excessive constraint-solving overhead. To address these issues, a novel approach for detecting smart contract security vulnerabilities through target-guided symbolic execution is proposed. First, vulnerable statements identified by static analysis tools or manually are treated as targets. The statements that depend on these target statements are analyzed, and the transaction sequence is augmented with symbolic constraints for the relevant variables. Second, the control flow graph (CFG) is constructed from the bytecode of the smart contract, and the basic blocks containing the target statements and the dependent statements are located. The CFG is then pruned to generate guidance information. Third, path exploration in symbolic execution is optimized by reducing both the number of basic blocks to be analyzed and the time required for solving path constraints. With the guidance information, vulnerabilities are efficiently detected, and test cases capable of reproducing the vulnerabilities are generated. Based on this approach, a prototype tool named Smart-Target is developed. Experiments conducted on the SB Curated dataset in comparison with the symbolic execution tool Mythril demonstrate that Smart-Target reduces time overheads by 60.76% and 92.16% in vulnerability detection and replication scenarios, respectively. In addition, the analysis of target statement dependencies enhances vulnerability detection capability, identifying 22.02% more security vulnerabilities.
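      As an illustration of the kind of guidance information described above, the following sketch prunes a control flow graph to the basic blocks from which a target block is reachable, so that path exploration can skip everything else. The CFG representation and block identifiers are invented for illustration; they are not Smart-Target's internal data structures.

```python
def blocks_reaching_targets(cfg, targets):
    """Return the set of basic blocks from which some target block is reachable.
    cfg maps a block id to the list of its successor block ids."""
    # Build the reverse CFG, then search backward from the target blocks.
    reverse = {b: [] for b in cfg}
    for block, succs in cfg.items():
        for s in succs:
            reverse.setdefault(s, []).append(block)
    keep = set(targets)
    stack = list(targets)
    while stack:
        block = stack.pop()
        for pred in reverse.get(block, []):
            if pred not in keep:
                keep.add(pred)
                stack.append(pred)
    return keep  # blocks outside this set can be pruned from path exploration

if __name__ == "__main__":
    # Illustrative CFG: block ids are arbitrary, not real EVM basic blocks.
    cfg = {"B0": ["B1", "B2"], "B1": ["B3"], "B2": ["B4"], "B3": [], "B4": []}
    print(sorted(blocks_reaching_targets(cfg, {"B3"})))  # ['B0', 'B1', 'B3']
```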

    • swJulia: Julia Compilation System for New Generation Sunway Supercomputer

      Online: June 11, 2025 DOI: 10.13328/j.cnki.jos.007407

      Abstract:With the increasing adoption of heterogeneous integrated architectures in high-performance computing, it has become essential to harness their potential and explore new strategies for application development. Traditional static compilation methodologies are no longer sufficient to meet the complex computational demands. Therefore, dynamic programming languages, known for their flexibility and efficiency, are gaining prominence. Julia, a modern high-performance language characterized by its JIT compilation mechanism, has demonstrated strong performance in fields such as scientific computing. Targeting the unique features of the Sunway heterogeneous many-core architecture, the ORCJIT engine is introduced, along with an on-chip storage management approach specifically designed for dynamic modes. Based on these advancements, swJulia is developed as a Julia dynamic language compiler tailored for the new generation of the Sunway supercomputer. This compiler not only inherits the flexibility of the Julia compiler but also provides robust support for the SACA many-core programming model and runtime encapsulation. By utilizing the swJulia compilation system, the deployment of the NNQS-Transformer quantum chemistry simulator on the new generation of the Sunway supercomputer is successfully achieved. Comprehensive validation across multiple dimensions demonstrates the efficacy and efficiency of swJulia. Experimental results show excellent performance in single-threaded benchmark tests and many-core acceleration, with significant improvements in ultra-large-scale parallel simulation for the NNQS-Transformer quantum chemistry simulator.

    • Implicit Semantic Parsing and Reasoning of Requirement Text Based on Temporal Logic

      Online: June 11, 2025 DOI: 10.13328/j.cnki.jos.007409

      Abstract:Temporal logic has been extensively applied in domains such as formal verification and robotics control, yet it remains challenging for non-expert users to master. Therefore, the automated extraction of temporal logic formulas from natural language texts is crucial. However, existing efforts are hindered by issues such as sparse sample availability and the ambiguity of natural language semantics, which impede the accurate identification of implicit temporal semantics within natural language texts and thus lead to errors when translating the original natural language semantics into temporal logic formulas. To address this issue, a novel method for temporal logic semantic analysis based on a few-shot learning network, termed FSLNets-TLSA, is proposed. This method employs data preprocessing techniques to enhance the temporal logic semantic features of the text. The network architecture consists of an encoder, an induction module, and a relation module, which together capture the implicit temporal logic semantic information in the input text. In addition, an enhancement module is incorporated to further improve the accuracy of semantic recognition. The effectiveness of the proposed method is validated through experimental evaluations on three public datasets comprising a total of 3 533 samples and through comparison with similar tools. The analysis demonstrates an average accuracy, recall, and F1-score of 96.55%, 96.29%, and 96.42%, respectively.
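      The following sketch shows, under simplified assumptions, the induction-and-relation pattern common to few-shot networks of this kind: support embeddings of each class are summarized into a prototype, and a query is scored against each prototype. The mean prototype, cosine relation score, toy embeddings, and class labels are illustrative stand-ins for the paper's learned encoder, induction module, and relation module.

```python
import numpy as np

def induce_prototype(support_embeddings):
    """Induction step: summarize a class's support embeddings into one prototype
    (here simply their mean; the paper's induction module is learned)."""
    return np.mean(support_embeddings, axis=0)

def relation_score(prototype, query_embedding):
    """Relation step: similarity between the class prototype and a query embedding
    (cosine similarity stands in for a learned relation module)."""
    num = float(prototype @ query_embedding)
    den = np.linalg.norm(prototype) * np.linalg.norm(query_embedding) + 1e-9
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy 2-way, 3-shot episode: embeddings of requirement sentences per pattern class.
    support = {"globally": rng.normal(size=(3, 16)),
               "eventually": rng.normal(size=(3, 16))}
    query = rng.normal(size=16)
    scores = {label: relation_score(induce_prototype(emb), query)
              for label, emb in support.items()}
    print(max(scores, key=scores.get), scores)
```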

    • Neural-network-based Compression and Query Approach for Distributed Tracing Data

      Online: June 04, 2025 DOI: 10.13328/j.cnki.jos.007315

      Abstract:As an essential type of observability data, distributed tracing data plays a crucial role in operation and maintenance tasks such as performance analysis, fault diagnosis, and system understanding. Due to the rapid increase in system scale and complexity, the volume of tracing data grows exponentially, imposing higher storage requirements. To mitigate the storage cost of tracing data, data compression has become a crucial approach. Existing compression methods fail to fully exploit the characteristics of tracing data to achieve efficient compression, nor do they support complex queries over compressed data. This study introduces a neural-network-based approach for compressing and querying distributed tracing data. It employs a novel redundancy extraction technique to identify pattern and structural redundancies within tracing data and leverages neural network models and arithmetic coding to achieve efficient data compression. Meanwhile, the method enables efficient querying of compressed data without decompressing all of it. Tracing datasets of various sizes are collected from four open-source microservice systems to evaluate the proposed method. Experimental results show that it achieves relatively high compression ratios (105.5–197.6), four times those of state-of-the-art general-purpose compression algorithms on average. Additionally, the querying efficiency of the proposed method on compressed data is validated, showing faster performance than existing query tools in optimal scenarios.
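      As a simplified illustration of structural redundancy extraction, the sketch below stores each distinct span-tree shape once in a pattern dictionary and keeps only a pattern id plus the varying numeric fields per trace. The trace format and field names are invented, and the neural model and arithmetic-coding stage described in the abstract are deliberately omitted.

```python
import json

def structure_key(span):
    """Recursively derive a trace's structural pattern (operation names and tree shape),
    ignoring per-trace fields such as durations."""
    return (span["operation"], tuple(structure_key(c) for c in span.get("children", [])))

def compress(traces):
    """Split traces into a shared pattern dictionary and per-trace residuals."""
    patterns, encoded = {}, []
    for trace in traces:
        key = structure_key(trace)
        pattern_id = patterns.setdefault(key, len(patterns))
        durations = []
        def collect(span):
            durations.append(span["duration_us"])
            for c in span.get("children", []):
                collect(c)
        collect(trace)
        encoded.append({"pattern": pattern_id, "durations": durations})
    return patterns, encoded

if __name__ == "__main__":
    t1 = {"operation": "GET /order", "duration_us": 950,
          "children": [{"operation": "db.query", "duration_us": 400}]}
    t2 = {"operation": "GET /order", "duration_us": 880,
          "children": [{"operation": "db.query", "duration_us": 310}]}
    patterns, encoded = compress([t1, t2])
    print(len(patterns), json.dumps(encoded))  # one shared pattern, two residuals
```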

    • Identification of Maintenance Status in Open-source Software Projects Based on Machine Learning

      Online: June 04, 2025 DOI: 10.13328/j.cnki.jos.007380

      Abstract:With the widespread adoption and rapid advancement of open-source software, the maintenance of open-source software projects has become a critical phase within the software development cycle. As a globally representative developer community, GitHub hosts numerous software project repositories with similar functionalities within the same domain, creating challenges for users when selecting the appropriate project repository for use or further development. Therefore, accurate identification of project repository maintenance status holds substantial practical value. However, the GitHub platform does not provide direct metrics for assessing the maintenance status of repositories. This study proposes an automatic identification method for project repository maintenance status based on machine learning. A classification model, GitMT, has been developed and implemented to achieve this objective. By effectively integrating dynamic time series features and descriptive features, the proposed model enables accurate identification of “active” and “unmaintained” repository status. Through a series of experiments conducted on large-scale real-world data, an AUC value of 0.964 is achieved in maintenance status identification tasks. In addition, this study constructs an open-source dataset centered on the maintenance status of software project repositories—GitMT Dataset: https://doi.org/10.7910/DVN/OJ2NI3.
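      A minimal sketch of the general recipe (combining features derived from activity time series with descriptive repository features, training a classifier, and evaluating with AUC) is shown below. The feature names, the gradient boosting model, and the synthetic data and labels are illustrative assumptions and do not reproduce the GitMT model or dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features: summaries of monthly activity series plus descriptive fields.
commit_trend = rng.normal(size=n)          # slope of monthly commit counts
issue_response = rng.exponential(size=n)   # median issue response time (days)
stars = rng.poisson(50, size=n)            # descriptive feature
age_years = rng.uniform(0, 10, size=n)     # descriptive feature
X = np.column_stack([commit_trend, issue_response, stars, age_years])

# Synthetic label: 1 = "unmaintained"; defined only to make the sketch runnable.
y = (commit_trend + 0.1 * issue_response < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```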

    • Efficient Localization for Codes Causing Compilation Optimization-induced Result Inconsistency

      Online: June 04, 2025 DOI: 10.13328/j.cnki.jos.007406

      Abstract:The compiler is one of the most relied-upon performance tuning tools for program developers. However, due to the limited precision of floating-point encodings, many compiler optimization options can alter the semantics of floating-point calculations, leading to result inconsistency. Locating the program statements that cause compilation optimization-induced result inconsistency is crucial for performance tuning and result reproducibility. The state-of-the-art approach, PLiner, employs precision enhancement-based binary search to locate the code snippets causing result inconsistency, but it suffers from insufficient support for multi-source localization and low search efficiency. This study proposes a floating-point instruction difference-guided Delta-Debugging localization method, FI3D, which utilizes the backtracking mechanism in Delta-Debugging to better support multi-source problem code localization and exploits the differences in floating-point instruction sequences under different compiler optimization options to guide the localization. FI3D is evaluated using 6 applications from the NPB benchmark, 10 programs from the GNU Scientific Library, and 2 programs from the floatsmith mixed-precision benchmark. Experimental results demonstrate that FI3D successfully locates the 4 applications where PLiner fails and achieves an average performance improvement of 26.8% on the 14 cases successfully located by PLiner.
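      For orientation, the sketch below shows a simplified Delta-Debugging reduction loop (complement testing only) over candidate statements, with a toy oracle standing in for "recompile with the aggressive optimization restricted to these statements and compare floating-point results". The floating-point instruction-difference guidance and backtracking that distinguish FI3D are not reproduced here.

```python
def ddmin(statements, causes_inconsistency):
    """Shrink the statement set while the oracle still reports a result inconsistency
    (simplified ddmin-style loop: only complements of chunks are tested)."""
    assert causes_inconsistency(statements)
    n = 2
    while len(statements) >= 2:
        chunk = max(1, len(statements) // n)
        subsets = [statements[i:i + chunk] for i in range(0, len(statements), chunk)]
        reduced = False
        for subset in subsets:
            complement = [s for s in statements if s not in subset]
            if complement and causes_inconsistency(complement):
                statements, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(statements):
                break
            n = min(n * 2, len(statements))
    return statements

if __name__ == "__main__":
    # Toy oracle: inconsistency appears whenever statements 3 and 7 are both optimized.
    oracle = lambda stmts: 3 in stmts and 7 in stmts
    print(ddmin(list(range(10)), oracle))  # -> [3, 7]
```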

    • Takeover Performance Prediction and Characteristic Analysis Under Typical Driving Scenarios

      Online: June 04, 2025 DOI: 10.13328/j.cnki.jos.007408

      Abstract:With the rapid development of autonomous driving technology, the issue of vehicle control takeover has become a prominent research topic. A car equipped with an assisted driving system cannot fully handle all driving scenarios: when the actual driving scenario exceeds the operational design domain of the assisted driving system, human intervention is still required to control the vehicle and ensure the safe completion of the driving task. Takeover performance is an extremely important metric for evaluating a driver’s behavior during the takeover process; it comprises takeover reaction time and takeover quality. Takeover reaction time refers to the time from the system’s takeover request to the driver taking hold of the steering wheel; it not only reflects the driver’s current state but also affects the subsequent handling of complex scenarios. Takeover quality refers to the quality of the driver’s manual operation of the vehicle after regaining control. This study, based on the CARLA driving simulator, constructs 6 typical driving scenarios, simulates the vehicle control takeover process, and collects physiological signals and eye movement data from 31 drivers using a multi-channel acquisition system. Based on the drivers’ takeover performance and with reference to international standards, an objective takeover performance evaluation metric is proposed, incorporating the driver’s takeover reaction time, maximum lateral and longitudinal accelerations, and minimum time to collision derived from multiple vehicle data. By combining driver data, vehicle data, and scenario data, a deep neural network (DNN) model is built to predict takeover performance, and the SHAP method is used to analyze the impact of each feature, improving the model’s interpretability and transparency. The experimental results show that the proposed DNN model outperforms traditional machine learning methods in predicting takeover performance, achieving an accuracy of 92.2% and demonstrating good generalization. The SHAP analysis reveals the impact of key features such as heart rate variability, driving experience, and minimum safe distance on the prediction results. This research provides a theoretical and empirical foundation for the safety optimization and human-computer interaction design of autonomous driving systems and is of great significance for improving the efficiency and safety of human-vehicle cooperation in autonomous driving.
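      The sketch below illustrates the modeling-plus-explanation recipe described above: a small feed-forward network predicts a takeover label, and SHAP attributes the prediction to individual features. It assumes the scikit-learn and shap packages; the features, labels, and data are synthetic placeholders and do not correspond to the paper's model, features, or results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
import shap  # assumes the shap package is installed

rng = np.random.default_rng(7)
n = 600
# Hypothetical features: heart-rate variability, driving experience (years),
# minimum safe distance (m), takeover reaction time (s). All values are synthetic.
X = np.column_stack([
    rng.normal(50, 15, n),      # heart-rate variability
    rng.uniform(0, 20, n),      # driving experience
    rng.uniform(5, 60, n),      # minimum safe distance
    rng.uniform(0.5, 4.0, n),   # takeover reaction time
])
# Synthetic label: 1 = "good takeover"; defined only to make the sketch runnable.
y = ((X[:, 3] < 2.5) & (X[:, 2] > 15)).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X, y)

# Model-agnostic SHAP attribution on a small background sample.
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], X[:50])
shap_values = explainer.shap_values(X[:5], nsamples=200)
print(np.round(shap_values, 3))
```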

    • Research Progress on Intelligent Technologies for Satellite Edge Computing

      Online: June 04, 2025 DOI: 10.13328/j.cnki.jos.007410

      Abstract:In recent years, the increasing complexity of space missions has led to exponential growth in space-generated data. However, limited satellite-to-ground bandwidth and scarce frequency resources pose significant challenges to the traditional bent-pipe architecture, which faces severe transmission bottlenecks. In addition, onboard data must wait for satellites to pass over ground stations before transmission. The large-scale construction of ground stations is not only cost-prohibitive but also carries geopolitical and economic risks. Satellite edge computing has emerged as a promising solution to these bottlenecks by integrating mobile edge computing technology at the satellite edge. This approach significantly enhances user experience and reduces redundant network traffic. By enabling onboard data processing, satellite edge computing shortens data acquisition times and reduces reliance on extensive ground station infrastructure. Furthermore, the integration of artificial intelligence (AI) and edge computing technologies offers an efficient and forward-looking path to address existing challenges. This study reviews the latest progress in intelligent satellite edge computing. First, the demands and applications of satellite edge computing in various typical scenarios are discussed. Next, key challenges and recent research advancements in this field are analyzed. Finally, several open research topics are highlighted, and new ideas are proposed to guide future studies. This discussion aims to provide valuable insights to promote technological innovation and the practical implementation of satellite edge computing.
