• Volume 35, Issue 10, 2024 Table of Contents
    • Fine-grained Assessment Method of Vulnerability Impact Scope for PyPI Ecosystem

      2024, 35(10):4493-4509. DOI: 10.13328/j.cnki.jos.006959


      Abstract:The openness and ease of use of Python make it one of the most commonly used programming languages. The PyPI ecosystem formed around Python not only provides convenience for developers but has also become an important target for vulnerability attacks. Thus, once a Python vulnerability is discovered, it is critical to assess its impact scope accurately and comprehensively. However, current assessment methods for the impact scope of Python vulnerabilities mainly rely on dependency analysis at package granularity, which produces a large number of false positives. On the other hand, existing function-granularity Python program analysis methods suffer from accuracy problems due to context insensitivity and thus also produce false positives when applied to assess the impact scope of vulnerabilities. This study proposes a vulnerability impact scope assessment method for the PyPI ecosystem based on static analysis, namely PyVul++. It first builds an index of the PyPI ecosystem, then finds candidate packages affected by a vulnerability through vulnerable function identification, and finally confirms affected packages by checking vulnerability trigger conditions. PyVul++ realizes vulnerability impact scope assessment at function granularity, improves function-level call analysis for Python code, and outperforms other tools on the PyCG benchmark (accuracy of 86.71% and recall of 83.20%). PyVul++ is used to assess the impact scope of 10 Python CVE vulnerabilities on the PyPI ecosystem (385,855 packages) and finds more affected packages while reducing false positives compared with other tools such as pip-audit. In addition, across the 10 assessment experiments, PyVul++ newly finds that 11 packages in the current PyPI ecosystem still reference unpatched vulnerable functions.
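
      The core function-level check can be pictured as a reachability query over a package's call graph. Below is a minimal sketch of that idea, with a hypothetical call graph and function names (an illustration of the concept, not PyVul++'s actual index or analysis):

```python
from collections import deque

# Hypothetical function-level call graph of one PyPI package (caller -> callees).
call_graph = {
    "pkg.api.load": ["pkg.util.parse", "pkg.util.log"],
    "pkg.util.parse": ["yaml.load"],   # transitively calls the vulnerable function
    "pkg.util.log": [],
}

def reaches_vulnerable(entry, vulnerable, graph):
    """BFS over the call graph: does `entry` transitively call `vulnerable`?"""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == vulnerable:
            return True
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# The package becomes a candidate if any public API reaches the CVE's function.
print(reaches_vulnerable("pkg.api.load", "yaml.load", call_graph))  # True
```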

    • Formal Semantics of Apache Flink Complex Event Processing Language

      2024, 35(10):4510-4532. DOI: 10.13328/j.cnki.jos.006968


      Abstract:Apache Flink is one of the most popular stream computing platforms and has many industrial applications. Complex event processing (CEP) is one of the important scenarios of stream computation. Apache Flink defines and implements a language for complex event processing (referred to as FlinkCEP). FlinkCEP includes rich syntactic features: not only the usual constructs for filtering, concatenation, and looping, but also the advanced features of iterative conditions and after-match skip strategies. The semantics of FlinkCEP is complex, and no language specification defines it precisely, so it can only be understood by inspecting the implementation. This motivates the definition of a formal semantics for FlinkCEP so that developers can understand it precisely. This study proposes an automaton model called data stream transducers (DST) for FlinkCEP, where data variables capture the iterative conditions, data stream variables store the outputs, and transition priorities capture the after-match skip strategies. DST is leveraged to define the formal semantics of FlinkCEP and to design query evaluation algorithms based on this semantics. Moreover, a prototype CEP engine is implemented. Finally, test case sets are generated that cover the syntactic features of FlinkCEP more comprehensively than existing tests, and they are used to conduct comparison experiments against the actual results of FlinkCEP on the Flink platform. The experimental results show that the proposed formal semantics conforms to the actual semantics of FlinkCEP in the vast majority of cases. Furthermore, the inconsistencies between the formal and the actual semantics are analyzed, revealing that the Flink implementation of FlinkCEP may not handle group patterns correctly.
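
      As a rough illustration of the transducer idea, the following toy matcher consumes a stream, uses a value guard in the spirit of an iterative condition, and collects matched events in an output list. This is a hand-simplified automaton, not the paper's DST formalism:

```python
# Toy pattern "a, then one or more b with positive value, then c".
def run_dst(events):
    state, out = "start", []          # `out` plays the role of a stream variable
    for e in events:                  # each event: (kind, value)
        kind, value = e
        if state == "start" and kind == "a":
            out.append(e); state = "mid"
        elif state == "mid" and kind == "b" and value > 0:   # iterative-style guard
            out.append(e)
        elif state == "mid" and kind == "c":
            out.append(e); state = "accept"
    return out if state == "accept" else None

print(run_dst([("a", 3), ("b", 1), ("b", 2), ("c", 0)]))
```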

    • PPTA Model and Verification Method for Preemptive Scheduling Problem

      2024, 35(10):4533-4554. DOI: 10.13328/j.cnki.jos.006969 CSTR: 32375.14.jos.006969


      Abstract:As an essential element of real-time system design, priority is utilized to resolve conflicts in resource sharing and to design for safety. In a real-time system with priorities, each task is assigned a priority, so a low-priority task may be preempted by a high-priority task at runtime, which creates the preemptive scheduling problem for real-time systems. Existing research on this problem lacks a modeling and automatic verification method that can visually represent both the priorities of tasks and the dependencies between tasks. To this end, preemptive priority timed automata (PPTA) are proposed, and the preemptive priority timed automata network (PPTAN) is introduced. First, the priority of a task is represented by adding priorities to transitions of the timed automaton, and transitions are then used to relate dependent tasks, so that PPTA can model real-time tasks with priorities. Blocking locations are also added to the timed automaton, so that PPTAN can model the priority preemptive scheduling problem. Second, a model transformation method is proposed to map PPTA to the automatic verification tool UPPAAL. Finally, by modeling an example of a multi-core multi-task real-time system and comparing it with other models, it is shown that this model is not only suitable for modeling the priority preemptive scheduling problem but also allows it to be accurately verified and analyzed.
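
      The preemption behavior that PPTA and PPTAN capture can be illustrated with a toy fixed-priority scheduler. This sketches the scheduling problem itself, not the automaton encoding or the UPPAAL mapping; the task tuples are hypothetical:

```python
import heapq

# Each task: (release_time, priority, exec_time); smaller value = higher priority.
tasks = [(0, 2, 4), (1, 1, 2)]   # the low-priority task is preempted at t=1

def simulate(tasks):
    tasks, ready, log, t, i = sorted(tasks), [], [], 0, 0
    while i < len(tasks) or ready:
        while i < len(tasks) and tasks[i][0] <= t:   # release newly arrived tasks
            rel, prio, rem = tasks[i]
            heapq.heappush(ready, (prio, rel, rem))
            i += 1
        if not ready:
            t = tasks[i][0]
            continue
        prio, rel, rem = heapq.heappop(ready)        # highest-priority ready task
        log.append((t, f"task(prio={prio}) runs"))
        t, rem = t + 1, rem - 1                      # run one unit, then re-check
        if rem:                                      # arrivals: enables preemption
            heapq.heappush(ready, (prio, rel, rem))
    return log

for entry in simulate(tasks):
    print(entry)
```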

    • LGBRoot: Partial Graph-based Automated Vulnerability Root Cause Analysis

      2024, 35(10):4555-4572. DOI: 10.13328/j.cnki.jos.006971 CSTR: 32375.14.jos.006971


      Abstract:Fast vulnerability root cause analysis is crucial for patching vulnerabilities and has always been a hot topic in academia and industry. Existing vulnerability root cause analysis methods based on statistical feature analysis of large numbers of test sample execution records suffer from random noise and from missing important logically correlated instructions. According to the measurements on the test set in this study, the proportion of random noise in existing statistical methods exceeds 61%. To solve these problems, this study proposes a vulnerability root cause analysis method based on the local path graph, which extracts vulnerability-related information such as the inter-function call graph and the intra-function control flow transfer graph from execution paths. The local path graph is utilized to eliminate irrelevant (noise) instructions, construct the logical relations among vulnerability root cause relevant points, and add missing critical instructions. An automated root cause analysis system for binary software, LGBRoot, has been implemented. The effectiveness of the system has been evaluated on a dataset of 20 public CVE memory corruption vulnerabilities. The average time for single-sample root cause analysis is 12.4 seconds. The experimental data show that the system can automatically eliminate 56.2% of noise instructions and can reconstruct and visualize the logical structures of the vulnerability root cause relevant points for all 20 vulnerabilities, speeding up analysts' vulnerability analysis.
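
      The noise-filtering intuition can be sketched as follows: instructions that occur with similar frequency in crashing and benign runs carry little root-cause signal, and the retained instructions are linked in execution order to form a small path graph. Traces and the threshold below are hypothetical, not LGBRoot's real pipeline:

```python
from collections import Counter

crash_runs  = [["mov", "cmp", "jb", "memcpy", "crash"],
               ["mov", "cmp", "jb", "memcpy", "crash"]]
benign_runs = [["mov", "cmp", "jae", "ret"]]

crash_freq, benign_freq = Counter(), Counter()
for r in crash_runs:
    crash_freq.update(set(r))
for r in benign_runs:
    benign_freq.update(set(r))

def is_signal(ins):
    # keep instructions strongly biased toward crashing executions
    c = crash_freq[ins] / len(crash_runs)
    b = benign_freq[ins] / len(benign_runs)
    return c - b > 0.5

# Local path graph: ordered edges between the retained instructions.
path = [ins for ins in crash_runs[0] if is_signal(ins)]
edges = list(zip(path, path[1:]))
print(edges)   # [('jb', 'memcpy'), ('memcpy', 'crash')]
```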

    • Threat Model-based Security Test Case Generation Framework and Tool

      2024, 35(10):4573-4603. DOI: 10.13328/j.cnki.jos.006973 CSTR: 32375.14.jos.006973


      Abstract:In recent years, software system security issues have attracted increasing attention. The security threats existing in systems can easily be exploited by attackers, who employ various attack techniques such as password brute-force cracking, phishing, and SQL injection. Threat modeling is a method of structurally analyzing, identifying, and handling threats. Traditional testing mainly focuses on code defects and takes place in the late stage of software development, so it cannot effectively connect with the results of early threat modeling and analysis for building secure software; moreover, threat modeling tools in industry lack the ability to generate security tests. To tackle this problem, this study proposes a framework that generates security test cases from threat models, and it designs and implements a prototype tool. To facilitate testing, the study improves the traditional attack tree model and performs compliance checks on it. Test scenarios can be automatically generated from the model and are prioritized according to the probabilities of attack nodes, so that the threats with higher probabilities are tested first. The defense nodes are evaluated, and the defense scheme with a higher profit is selected to mitigate the threats, thereby improving the system's security design. By setting parameters for attack nodes, test scenarios can be instantiated into concrete test cases. In the early stage of software development, taking the threats identified by threat modeling as inputs, test cases can be generated through this framework and tool to guide subsequent security development and test design, which improves the integration of security techniques in software design and development. A case study applies the framework and tool to test generation for very high security risks, which shows their effectiveness.
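
      The scenario-generation step can be sketched as enumerating the attack combinations of an attack tree and ordering them by probability. The node structure and probabilities below are hypothetical; the paper's improved attack tree model carries more information:

```python
# OR nodes offer alternative attacks; AND nodes require all children.
tree = ("OR",
        ("LEAF", "SQL injection", 0.6),
        ("AND",
         ("LEAF", "phish credentials", 0.4),
         ("LEAF", "bypass 2FA", 0.5)))

def scenarios(node):
    kind = node[0]
    if kind == "LEAF":
        return [([node[1]], node[2])]
    if kind == "OR":                              # any one child suffices
        return [s for child in node[1:] for s in scenarios(child)]
    combos = [([], 1.0)]                          # AND: combine all children
    for child in node[1:]:
        combos = [(steps + s, p * q)
                  for steps, p in combos for s, q in scenarios(child)]
    return combos

for steps, p in sorted(scenarios(tree), key=lambda x: -x[1]):
    print(f"p={p:.2f}: {steps}")   # higher-probability scenarios are tested first
```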

    • Function Name Consistency Check and Recommendation Based on Deep Learning

      2024, 35(10):4604-4622. DOI: 10.13328/j.cnki.jos.006974 CSTR: 32375.14.jos.006974


      Abstract:Functions are the smallest named units of aggregated behavior in most traditional programming languages, and the readability of function names plays a vital role in helping programmers understand program functionality and the interactions between modules. Low-quality function names may confuse developers, increase code smells, and eventually result in software defects caused by API misuse. Therefore, a method of function name consistency checking and recommendation based on deep learning, named DMName, is proposed. Firstly, for the source code of a given target function, the internal context, interactive context, sibling context, and enclosing context are constructed and merged to obtain the context information token sequence. The token sequence is then converted into a vector sequence using the word embedding technique FastText and fed into the encoder of a seq2seq model, where the copy mechanism and the coverage mechanism are utilized to solve the out-of-vocabulary (OOV) problem and the repeated decoding problem, respectively. Finally, the model outputs the vector sequence of the predicted target function name, and the consistency of the function name is predicted with the help of a two-channel CNN classifier. If the function name is inconsistent, a recommended name can be obtained by direct mapping according to vector space similarity matching. The experimental results show that the F1-measure of DMName on function name consistency checking and recommendation reaches 82.65% and 73.31% respectively, which is 2.01% and 2.96% higher than the state-of-the-art approach DeepName. Finally, DMName is applied to the large-scale open-source project lancia on GitHub, where it finds 16 function name inconsistency problems and makes reasonable name recommendations, further confirming its effectiveness.
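
      The final recommendation step can be pictured as nearest-neighbor matching in the name vector space. The toy vectors below stand in for the embeddings DMName derives from FastText and its decoder; the names and dimensions are hypothetical:

```python
import numpy as np

name_vecs = {
    "read_file":  np.array([0.9, 0.1, 0.0]),
    "write_file": np.array([0.8, 0.0, 0.2]),
    "parse_json": np.array([0.0, 1.0, 0.1]),
}

def recommend(pred_vec, vocab):
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(vocab, key=lambda name: cos(pred_vec, vocab[name]))

predicted = np.array([0.85, 0.05, 0.1])   # decoder output for the function body
print(recommend(predicted, name_vecs))    # -> 'read_file'
```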

    • Analysis and Testing of Indirect Jump Table Solving Algorithms in Disassembly Tools

      2024, 35(10):4623-4641. DOI: 10.13328/j.cnki.jos.006976 CSTR: 32375.14.jos.006976


      Abstract:Disassembly of binary code is hard but necessary for improving the security of binary software. One major reason why binary disassembly is difficult is that compilers create many indirect jump tables in the binary for efficiency. Mainstream disassembly tools use various strategies to resolve the targets of indirect jump tables; however, the implementation details of these strategies and their effectiveness have not been well studied. To help researchers understand the algorithms and performance of disassembly tools, this study first systematically summarizes the strategies used by disassembly tools to resolve indirect jump tables. It then builds an automatic framework for testing indirect jump tables, with which a large-scale test suite of 2,410,455 indirect jump tables is generated. Lastly, this study evaluates the performance of the disassembly tools in resolving indirect jump tables on this test suite and manually analyzes the errors introduced by each strategy. Benefiting from the systematic summary of the tools' algorithm implementations, the study additionally finds six bugs in the implementations of the disassembly tools.
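
      The classic pattern disassemblers must recognize is a bounds check followed by a table lookup, e.g. `cmp idx, N; ja default; jmp [table + idx*4]`. Once the bound and the table bytes are recovered, the targets can be enumerated, as this toy sketch shows (fake table contents, little-endian 4-byte entries):

```python
import struct

table_bytes = struct.pack("<4I", 0x401010, 0x401020, 0x401030, 0x401040)
bound = 4                      # recovered from the cmp instruction

def jump_targets(raw, n, entry_size=4):
    # read n little-endian entries out of the recovered table bytes
    return [struct.unpack_from("<I", raw, i * entry_size)[0] for i in range(n)]

print([hex(t) for t in jump_targets(table_bytes, bound)])
```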

    • Statement Level Software Bug Localization Based on Historical Bug Information Retrieval

      2024, 35(10):4642-4661. DOI: 10.13328/j.cnki.jos.006980 CSTR: 32375.14.jos.006980


      Abstract:A large number of bug reports are generated during software development and maintenance, and they can help developers locate bugs. Information retrieval-based bug localization (IRBL) analyzes the similarity between bug reports and source code files to locate bugs, achieving high accuracy at the file and function levels. However, because of the coarse localization granularity of IRBL, considerable labor and time are still needed to find bugs within suspicious files and function fragments. This study proposes STMTLocator, a statement-level software bug localization method based on historical bug information retrieval. Firstly, it retrieves historical bug reports that are similar to the bug report of the program under test and extracts the bug statements from them. Then, it retrieves suspicious files according to the textual similarity between the source code files and the bug report of the program under test, and extracts suspicious statements from those files. Finally, it calculates the similarity between the suspicious statements and the historical bug statements and ranks the suspicious statements in descending order to localize bug statements. To evaluate the localization performance of STMTLocator, comparative experiments are conducted on the Defects4J and JIRA datasets with Top@N, MRR, and other evaluation metrics. The experimental results show that STMTLocator achieves nearly four times the MRR of the static bug localization method BugLocator and locates 7 more bug statements at Top@1. The average time used by STMTLocator to localize a bug version is 98.37% and 63.41% less than that of the dynamic bug localization methods Metallaxis and DStar, respectively, and STMTLocator has the significant advantage of not requiring the construction and execution of test cases.
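
      The ranking step can be sketched with a hand-rolled cosine similarity over token counts, scoring suspicious statements against a statement extracted from a similar historical bug report (toy strings; STMTLocator's retrieval and features are richer):

```python
import math
from collections import Counter

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

historical_bug_stmt = Counter("buffer length check missing before copy".split())
suspicious = [
    "copy data into buffer without length check",
    "open the log file for appending",
]
ranked = sorted(suspicious,
                key=lambda s: cosine(Counter(s.split()), historical_bug_stmt),
                reverse=True)
print(ranked[0])   # the statement most similar to the historical bug statement
```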

    • Automated Static Warning Identification via Path-based Semantic Representation

      2024, 35(10):4662-4680. DOI: 10.13328/j.cnki.jos.006982 CSTR: 32375.14.jos.006982


      Abstract:Static analysis tools often suffer from high false positive rates of reported alarms, despite their ability to aid developers in detecting potential defects early in the software development life cycle. To improve the availability of these tools, many automated warning identification techniques have been proposed to assist developers in classifying false positive alarms. However, existing approaches mainly focus on using hand-engineered features or statement-level abstract syntax tree token sequences to represent the defective code, failing to capture semantics from the reported alarms. To overcome the limitations of traditional approaches, this study employs deep neural networks with powerful feature extraction and representation abilities to generate code semantics from control flow graph paths for warning identification. The control flow graph abstractly represents the execution process of a given program. Thus, the generated path sequences of the control flow graph can guide the deep neural networks to learn semantic information about the potential defect more accurately. In this study, the pre-trained language model is fine-tuned to encode the path sequences and capture the semantic representations for model building. Finally, the study conducts extensive experiments on eight open-source projects to verify the effectiveness of the proposed approach by comparing it with the state-of-the-art baselines.
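
      The path-extraction step can be sketched as enumerating control flow graph paths and turning each into a token sequence for the encoder. The CFG below is hypothetical, and a simple depth bound stands in for proper cycle handling:

```python
cfg = {
    "entry": ["check"],
    "check": ["alloc", "error"],
    "alloc": ["use"],
    "error": [],
    "use":   [],
}

def cfg_paths(node, graph, path=None, limit=10):
    path = (path or []) + [node]
    succs = graph.get(node, [])
    if not succs or len(path) >= limit:   # leaf reached or depth bound hit
        yield path
        return
    for s in succs:
        yield from cfg_paths(s, graph, path, limit)

for p in cfg_paths("entry", cfg):
    print(" -> ".join(p))   # each path becomes one input sequence for the model
```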

    • Identifier Resolution Technology for Human-cyber-physical Ternary Based on Internet of Data

      2024, 35(10):4681-4695. DOI: 10.13328/j.cnki.jos.006990 CSTR: 32375.14.jos.006990


      Abstract:Informationization 3.0, represented by deep mining and fusion applications of big data, is beginning, and software in traditional static environments is evolving into complex software in an open and dynamic human-cyber-physical ternary environment. How to realize trusted, manageable, and controllable data interconnection on the untrusted and uncontrollable Internet is an urgent problem to be solved. The Internet of Data technical system, represented by the digital object architecture and identifier resolution technology, provides a feasible solution to these challenges. To solve problems such as low transmission efficiency, high coordination cost, and security management issues in the process of data sharing on the Internet, this study proposes identifier resolution standard specifications for human-cyber-physical ternary environments. Moreover, to meet the demand that data resources owned by different entities be discoverable, accessible, understandable, trustworthy, and interoperable in the human-cyber-physical ternary environment, this study designs an identifier resolution protocol and implements an identifier resolution prototype system for such environments. Finally, this study tests the performance of the prototype system and verifies its validity by applying it to real application scenarios.
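
      In the spirit of the digital object architecture, resolution maps an identifier to typed attribute records. The record format and identifier below are hypothetical (the paper defines its own protocol and record layout); this is only a lookup sketch:

```python
registry = {
    "86.1000/weather-2024": [
        {"type": "URL",   "value": "https://data.example.org/weather-2024"},
        {"type": "OWNER", "value": "sensor-net-lab"},
        {"type": "HASH",  "value": "sha256:9f2c41aa"},
    ],
}

def resolve(identifier, record_type=None):
    # return all records for the identifier, optionally filtered by type
    records = registry.get(identifier, [])
    if record_type:
        records = [r for r in records if r["type"] == record_type]
    return records

print(resolve("86.1000/weather-2024", "URL"))
```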

    • Conformance Checking Method for Process Text

      2024, 35(10):4696-4709. DOI: 10.13328/j.cnki.jos.006991 CSTR: 32375.14.jos.006991


      Abstract:Conformance checking is one of the important scenarios in the field of process mining. Its goal is to determine whether the actually executed business behavior is consistent with the desired behavior, thereby providing a basis for business process management decisions. Traditional conformance checking methods face the problems of too many metrics and low efficiency. In addition, existing methods for checking the conformance between a process text and a process model rely heavily on expert-defined knowledge. Therefore, this study proposes a conformance checking method oriented to process texts. Firstly, the study generates graph traces based on the execution semantics of the process model and obtains structural features from the graph traces with a word vector model, introducing Huffman trees to reduce the computational effort. Then, word vector representations of the process text and the activities are computed, with a Siamese mechanism used to improve training efficiency. Finally, all the features of the text and the model are fused, and a fully connected layer predicts the consistency score between the text and the model. Experiments show that the mean absolute error of the proposed method is two percentage points lower than that of existing methods.
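
      Graph trace generation can be sketched as enumerating the activity sequences permitted by the model's execution semantics. The tiny model below has one exclusive choice and is hypothetical; the paper additionally embeds the traces with a word vector model:

```python
model = {
    "start":    ["register"],
    "register": ["approve", "reject"],   # exclusive choice
    "approve":  ["archive"],
    "reject":   ["archive"],
    "archive":  [],
}

def traces(node, model):
    succs = model[node]
    if not succs:
        yield [node]
        return
    for s in succs:
        for tail in traces(s, model):
            yield [node] + tail

for t in traces("start", model):
    print(t)   # each trace is embedded and compared against the process text
```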

    • Long- and Short-term Spatio-temporal Preference-aware Task Assignment in Crowdsourcing

      2024, 35(10):4710-4728. DOI: 10.13328/j.cnki.jos.006994 CSTR: 32375.14.jos.006994


      Abstract:With the development of the computing and sensing abilities of mobile devices, spatio-temporal crowdsourcing, which is based on location information, has emerged. A key challenge in improving the performance of task assignment is how to assign workers the tasks that they are interested in. Existing research methods only focus on workers' temporal preference but ignore the impact of spatial factors on workers' preference; they also consider only long-term preference while ignoring short-term preference, and they face inaccurate predictions caused by sparse historical data. This study analyzes the task assignment problem based on long- and short-term spatio-temporal preference. By comprehensively considering workers' preferences from both long-term and short-term perspectives, as well as temporal and spatial dimensions, the quality of task assignment is improved in terms of assignment success rate and completion efficiency. To improve the accuracy of spatio-temporal preference prediction, the study proposes a sliced imputation-based context-aware tensor decomposition algorithm (SICTD) to reduce the proportion of missing values in preference tensors, and it calculates short-term spatio-temporal preference through the ST-HITS algorithm and workers' short-term active ranges under spatio-temporal constraints. To maximize the total task reward and the workers' average preference for completing tasks, the study designs spatio-temporal preference-aware greedy and Kuhn-Munkres (KM) algorithms to optimize the task assignment results. Extensive experimental results on real datasets show the effectiveness of the proposed long- and short-term spatio-temporal preference-aware task assignment framework. Compared with baselines, the RMSE prediction errors of SICTD for temporal and spatial preferences are decreased by 22.55% and 24.17%, respectively. In terms of task assignment, the proposed preference-aware KM algorithm significantly outperforms the baseline algorithms, with the workers' total reward and average preference for completing tasks increased by an average of 40.86% and 22.40%, respectively.
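
      The assignment step solves a maximum-weight bipartite matching (the KM problem). Below is a sketch using SciPy's Hungarian-algorithm solver, with toy numbers and a hypothetical trade-off between reward and predicted preference (the paper's preference values come from SICTD):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

reward = np.array([[5.0, 3.0],        # reward[w][t]: reward if worker w does task t
                   [4.0, 6.0]])
preference = np.array([[0.9, 0.2],    # predicted preference of worker w for task t
                       [0.3, 0.8]])
alpha = 0.5                            # hypothetical reward/preference trade-off
score = alpha * reward + (1 - alpha) * reward * preference

workers, tasks = linear_sum_assignment(score, maximize=True)
for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t} (score {score[w, t]:.2f})")
```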

    • Multi-timed Noninterference Verification

      2024, 35(10):4729-4750. DOI: 10.13328/j.cnki.jos.006997 CSTR: 32375.14.jos.006997


      Abstract:Safety-critical embedded software usually has rigorous time constraints on its runtime behavior, raising additional requirements for enforcing security properties. To protect the information flow security of embedded software and mitigate the limitations and potential false positives of existing single-property verification approaches, this study first proposes a new timed noninterference property, timed SIR-NNI, based on the security requirements of a realistic scenario. The study then presents an information flow security verification approach that unifies the verification of multiple timed noninterference properties, i.e., timed BNNI, timed BSNNI, and timed SIR-NNI. Based on the different timed noninterference requirements, the approach constructs refined automata and test automata from the timed automata under verification, and it uses UPPAAL's reachability analysis to implement the refinement relation check and the security verification. The verification tool, TINIVER, extracts timed automata from SysML sequence diagrams or C++ source code to conduct the verification procedure. The verification results of TINIVER on existing timed automata models and security properties justify the usability of the proposed approach. Security verifications on the typical flight-mode switch models of the UAV flight control systems ArduPilot and PX4 demonstrate its practicability and scalability. Besides, the approach is effective in mitigating the false positives of a state-of-the-art verification approach.
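
      Stripped of time, the noninterference idea is that low-observable behavior must not depend on high (secret) inputs. A toy check over a hypothetical system illustrates the trace-comparison flavor of such definitions (the paper's properties are defined on timed automata, which this sketch does not model):

```python
def system(high, low):
    # hypothetical system: a high-dependent branch changes the observable output
    if high:
        return [f"recv({low})", "send(retry)"]
    return [f"recv({low})", "send(ok)"]

# Noninterference fails if varying only the high input changes the low view.
leaks = any(system(h0, low) != system(h1, low)
            for h0 in (0, 1) for h1 in (0, 1) for low in ("a", "b"))
print("leak detected" if leaks else "noninterference holds")
```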

    • Few-shot Named Entity Recognition Based on Fine-grained Prototypical Network

      2024, 35(10):4751-4765. DOI: 10.13328/j.cnki.jos.006979 CSTR: 32375.14.jos.006979


      Abstract:When prototypical networks are directly applied to few-shot named entity recognition (FEW-NER), the following problems arise: non-entities do not have strong semantic relationships with each other, and constructing prototypes for entities and non-entities in the same way makes non-entity prototypes fail to accurately represent the semantic characteristics of non-entities; moreover, using only the average entity vector as the prototype makes it difficult to capture similar entities with different semantic features. To address these problems, this study proposes a FEW-NER method based on fine-grained prototypical networks (FNFP) to improve the annotation effect of few-shot NER. Firstly, different non-entity prototypes are constructed for different query sets to capture the key semantic features of non-entities in sentences and obtain finer-grained prototypes, thus improving the recognition of non-entities. Then, an inconsistency metric module is designed to measure the inconsistency between similar entities, and different metric functions are applied to entities and non-entities, so as to reduce the confusion between similar samples and improve the feature representation of the prototypes. Finally, a Viterbi decoder is introduced to capture the label transition relationships and optimize the final annotation sequence. The experimental results show that the proposed method improves over state-of-the-art methods on the large-scale few-shot NER dataset FEW-NERD, and its generalization ability in different domain scenarios is verified on a cross-domain dataset.
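
      The baseline step that FNFP refines can be shown in a few lines: prototypes are the mean vectors of the support tokens of each class, and a query token takes the label of the nearest prototype (toy 2-dimensional features):

```python
import numpy as np

support = {
    "PER": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "LOC": np.array([[0.1, 0.9], [0.2, 0.8]]),
    "O":   np.array([[0.5, 0.5], [0.4, 0.6]]),   # the non-entity class
}
prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}

def classify(query):
    # nearest prototype under Euclidean distance
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(np.array([0.85, 0.15])))   # -> 'PER'
```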

    • Cross-modal Person Retrieval Method Based on Relation Alignment

      2024, 35(10):4766-4780. DOI: 10.13328/j.cnki.jos.006993 CSTR: 32375.14.jos.006993


      Abstract:Text-based person retrieval is a developing downstream task of cross-modal retrieval that derives from conventional person re-identification and plays a vital role in public safety and person search. Given the lack of query images in traditional person re-identification, the main challenge of this task is that it combines two different modalities and requires the model to learn both image content and textual semantics. To narrow the semantic gap between pedestrian images and text descriptions, traditional methods usually split image and text features mechanically and focus only on cross-modal alignment, ignoring the potential relations between the person image and the description, which leads to inaccurate cross-modal alignment. To address these issues, this study proposes a novel relation alignment-based cross-modal person retrieval network. First, the attention mechanism is used to construct a self-attention matrix and a cross-modal attention matrix, where the attention matrix is regarded as the distribution of response values between different feature sequences. Then, two different matrix construction methods are used to reconstruct the intra-modal attention matrix and the cross-modal attention matrix, respectively. The element-by-element reconstruction of the intra-modal attention matrix can effectively excavate potential intra-modal relationships, while, by taking cross-modal information as a bridge, the holistic reconstruction of the cross-modal attention matrix can fully excavate the potential information across modalities and narrow the semantic gap. Finally, the model is jointly trained with a cross-modal projection matching loss and a KL divergence loss, which helps achieve mutual reinforcement between the modalities. Quantitative and qualitative results on a public text-based person search dataset (CUHK-PEDES) demonstrate that the proposed method performs favorably against state-of-the-art text-based person search methods.
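
      The attention matrices the method reconstructs can be sketched with scaled dot-product attention over toy features: rows index image regions, and columns index either regions (self-attention) or text tokens (cross-modal attention). Feature sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))    # 4 image regions, 8-dim features (toy)
txt = rng.normal(size=(6, 8))    # 6 text tokens, 8-dim features (toy)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

self_attn  = softmax(img @ img.T / np.sqrt(8))   # intra-modal response values
cross_attn = softmax(img @ txt.T / np.sqrt(8))   # cross-modal response values
print(self_attn.shape, cross_attn.shape)         # (4, 4) (4, 6)
```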

    • Multimodal Sentiment Analysis Method Based on Adaptive Weight Fusion

      2024, 35(10):4781-4793. DOI: 10.13328/j.cnki.jos.006998 CSTR: 32375.14.jos.006998


      Abstract:Multimodal sentiment analysis uses subjective information from multiple modalities to analyze sentiment, and effectively learning the interactions between modalities has always been an essential task in multimodal analysis. Recent research has found that the learning rates of different modalities are unbalanced: one modality converges while the remaining modalities are still under-fitting, which weakens multimodal collaborative decision-making. To combine multiple modalities more effectively and learn expressive multimodal sentiment features, this study proposes a multimodal sentiment analysis method based on adaptive weight fusion, which proceeds in two phases. The first phase adaptively changes the fusion weights of the unimodal feature representations according to the differences between unimodal learning gradients, dynamically balancing the modal learning rates; this phase is called balanced fusion (B-fusion). The second phase eliminates the impact of the B-fusion weights on the downstream task: a modal attention mechanism is proposed to explore the contribution of each modality to the task and to dynamically allocate the fusion weights; this phase is called attention fusion (A-fusion). The experimental results show that introducing the B-fusion method into existing multimodal sentiment analysis methods effectively improves the accuracy of sentiment analysis. The ablation results show that adding the A-fusion method to B-fusion effectively reduces the impact of the B-fusion weights on the task and further improves the sentiment analysis results. Compared with existing multimodal sentiment analysis models, the proposed method has a simpler structure, lower computational cost, and better task accuracy, showing that it is both efficient and effective for multimodal sentiment analysis tasks.
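
      The B-fusion intuition can be sketched as follows: a modality whose gradients are large is learning fast, so fusion weight is shifted toward the slower modalities to rebalance learning. The numbers and the inverse-gradient rule are illustrative, not the paper's exact update:

```python
import numpy as np

grad_norm = {"text": 2.0, "audio": 0.5, "video": 0.8}   # per-modality gradient norms

inv = {m: 1.0 / g for m, g in grad_norm.items()}        # slower modality -> larger weight
total = sum(inv.values())
weights = {m: v / total for m, v in inv.items()}

features = {m: np.ones(4) * i for i, m in enumerate(grad_norm, 1)}  # toy features
fused = sum(weights[m] * features[m] for m in features)             # weighted fusion
print(weights, fused)
```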

    • Supervised Contrastive Learning for Text Emotion Category Representations

      2024, 35(10):4794-4805. DOI: 10.13328/j.cnki.jos.006999 CSTR: 32375.14.jos.006999


      Abstract:Revealing the complex relations among emotions is an important fundamental problem in cognitive psychology. From the perspective of natural language processing, the key to exploring the relations among emotions lies in the embedded representation of emotion categories. Recently, there has been some interest in obtaining category representations in an emotion space that can characterize emotion relations. However, the existing methods for emotion category representation have several drawbacks; for example, the dimensionality of the representation is fixed by the selected dataset. To obtain better representations for the emotion categories, this study introduces a supervised contrastive learning representation method. In previous supervised contrastive learning, the similarity between samples is determined by their annotated labels. To better reflect the complex relations among different emotion categories, the study further proposes a partially similar supervised contrastive learning representation method, which assumes that samples of different emotion categories (e.g., anger and annoyance) may also be partially similar to each other. Finally, the study conducts a series of experiments to compare the proposed method with five benchmark methods in representing the relations between emotion categories. The experimental results show that the proposed method achieves satisfactory emotion category representations.
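
      The proposed loss can be pictured as supervised contrastive learning in which the usual 0/1 same-label mask is replaced by graded pairwise similarity (e.g. anger vs. annoyance). The embeddings, temperature, and similarity weights below are toy values, not the paper's configuration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])   # three sample embeddings
z = z / np.linalg.norm(z, axis=1, keepdims=True)
sim_weight = np.array([[0.0, 0.8, 0.1],              # graded label similarity
                       [0.8, 0.0, 0.1],
                       [0.1, 0.1, 0.0]])

loss = 0.0
for i in range(len(z)):
    logits = np.delete(z @ z[i] / 0.1, i)            # temperature 0.1, drop self
    log_p = np.log(softmax(logits))
    w = np.delete(sim_weight[i], i)
    loss += -(w * log_p).sum() / max(w.sum(), 1e-8)  # similarity-weighted pull
print(loss / len(z))
```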

    • Lie Group Fuzzy C-means Clustering Algorithm for Image Segmentation

      2024, 35(10):4806-4825. DOI: 10.13328/j.cnki.jos.007000 CSTR: 32375.14.jos.007000


      Abstract:The fuzzy C-means (FCM) clustering algorithm has become one of the most commonly used image segmentation techniques due to its low learning cost and algorithmic overhead. However, the conventional FCM clustering algorithm is sensitive to noise in images. Many improved FCM algorithms have been proposed to strengthen the noise robustness of the conventional algorithm, but often at the cost of losing image detail. This study presents an improved FCM clustering algorithm based on Lie group theory and applies it to image segmentation. The proposed algorithm constructs matrix Lie group features for the pixels of an image, summarizing the low-level image features of each pixel and its relationship with the other pixels in its neighborhood window. In this way, the clustering problem of measuring Euclidean distances between pixels is transformed into calculating geodesic distances between the pixels' Lie group features on the Lie group manifold. To update the clustering centers and the fuzzy membership matrix on the Lie group manifold, the proposed method uses an adaptive fuzzy weighted objective function, which improves the generalization and stability of the algorithm. The effectiveness of the proposed method is verified by comparison with conventional FCM and several classic improved algorithms in experiments on three types of medical images.
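
      For reference, the standard FCM updates are sketched below with the distance left pluggable, to show where a geodesic distance between Lie group features would slot in (plain Euclidean distance is used as a stand-in; this is not the paper's adaptive weighted objective):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=20, dist=lambda a, b: np.linalg.norm(a - b)):
    rng = np.random.default_rng(0)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        D = np.array([[max(dist(x, v), 1e-9) for v in centers] for x in X])
        U = 1.0 / (D ** (2 / (m - 1)))                # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
centers, U = fcm(X)
print(np.round(centers, 2))
```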

    • Pseudorandomness and Super-pseudorandomness of FBC Model

      2024, 35(10):4826-4836. DOI: 10.13328/j.cnki.jos.006957 CSTR: 32375.14.jos.006957


      Abstract:As one of the ten block cipher algorithms selected for the second round of the 2018 National Cryptographic Algorithm Design Contest, Feistel-based block cipher (FBC) is an efficient and lightweight block cipher algorithm with a four-branch, two-fold Feistel structure. In this study, the FBC algorithm is abstracted into the FBC model, and the pseudorandomness and super-pseudorandomness of the model are studied. Assuming that the FBC round functions are independent random functions, a method is provided to determine the minimal number of FBC rounds that keeps FBC indistinguishable from a random permutation. The study concludes that under a chosen-plaintext attack, four rounds of FBC are indistinguishable from a random permutation, so the model is pseudorandom; under an adaptive chosen-plaintext-and-ciphertext attack, five rounds of FBC are indistinguishable from a random permutation, so the model is super-pseudorandom.
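
      The general shape of a four-branch, two-fold Feistel round can be sketched as below; the round function, key schedule, word size, and branch rotation are toy stand-ins, not the FBC specification:

```python
MASK = (1 << 16) - 1

def F(x, k):                       # stand-in round function on 16-bit words
    return ((x * 31 + k) ^ (x >> 3)) & MASK

def round4(a, b, c, d, k1, k2):
    b ^= F(a, k1)                  # two F applications per round ("two-fold")
    d ^= F(c, k2)
    return b, c, d, a              # rotate the four branches

state = (1, 2, 3, 4)
for r in range(4):                 # the paper studies how many rounds suffice
    state = round4(*state, k1=r, k2=r + 100)
print(state)
```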

    • Malicious Domain Name Detection Method Based on Graph Contrastive Learning

      2024, 35(10):4837-4858. DOI: 10.13328/j.cnki.jos.006964 CSTR: 32375.14.jos.006964


      Abstract:Domain names play an important role in cybercrime. Existing malicious domain name detection methods struggle to exploit rich topology and attribute information and require a large amount of labeled data, resulting in limited detection effectiveness and high costs. To address this problem, this study proposes a malicious domain name detection method based on graph contrastive learning. Domain names and IP addresses are taken as two types of nodes in a heterogeneous graph, and the feature matrices of the corresponding nodes are established according to their attributes. Three types of meta-paths are constructed based on the inclusion relationships between domain names, the measure of similarity, and the correspondence between domain names and IP addresses. In the pre-training stage, a contrastive learning model based on an asymmetric encoder is applied to avoid the damage to the graph structure and semantics caused by graph data augmentation and to reduce the demand for computing resources. Using the inductive graph neural network encoders HeteroSAGE and HeteroGAT, a node-centric mini-batch training strategy is adopted to explore the aggregation relationships between target nodes and their neighbors, which solves the poor applicability of transductive graph neural networks in dynamic scenarios. The downstream classification task uses logistic regression and random forest algorithms for comparison. Experimental results on publicly available datasets show that the detection performance is improved by two to six percentage points compared with related work.
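
      One meta-path-guided aggregation step can be sketched with a GraphSAGE-style mean aggregator: a domain node pools the features of the IP nodes it resolves to and concatenates them with its own features. The features and edges below are toy values, not the paper's encoders:

```python
import numpy as np

features = {"d1": np.array([1.0, 0.0]),
            "ip1": np.array([0.0, 1.0]),
            "ip2": np.array([0.5, 0.5])}
resolves = {"d1": ["ip1", "ip2"]}            # domain -> IP meta-path edges

def aggregate(node, neigh, feats):
    pooled = np.mean([feats[n] for n in neigh[node]], axis=0)
    return np.concatenate([feats[node], pooled])   # self ++ neighborhood

print(aggregate("d1", resolves, features))   # [1.  0.  0.25 0.75]
```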

    • User-centric Two-factor Authentication Key Agreement Protocol

      2024, 35(10):4859-4875. DOI: 10.13328/j.cnki.jos.006966 CSTR: 32375.14.jos.006966


      Abstract:Authentication protocols based on username and password can hardly meet ever-increasing security requirements. Specifically, users choose different passwords to access different online services, which greatly increases their memory burden; moreover, password authentication offers low security and faces many known attacks. To solve such problems, this study proposes UC-2FAKA, a user-centric two-factor authentication key agreement protocol based on the Pointcheval-Sanders signature. Firstly, to prevent the leakage of authentication factors, two-factor credentials combining passwords and biometrics are constructed based on the Pointcheval-Sanders signature, and the identity is authenticated to the service provider (SP) in a zero-knowledge-proof manner. Secondly, using a user-centric single sign-on (SSO) architecture, users can request identity credentials by registering with an identity provider (IDP) and then log in to different SPs, which prevents the IDP or SPs from tracking or linking users. Thirdly, Diffie-Hellman key exchange is used to authenticate SP identities and negotiate communication keys to ensure subsequent communication security. Finally, comprehensive security analysis and performance comparisons of the proposed protocol are carried out. The results show that the proposed protocol resists various known attacks and performs better in both communication and computational overhead.
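
      The key agreement step rests on Diffie-Hellman. A toy exchange is shown below with a deliberately small prime for readability; real deployments use standardized groups or elliptic curves, and UC-2FAKA additionally authenticates the exchange with Pointcheval-Sanders credentials, which this sketch omits:

```python
import secrets

p = 0xFFFFFFFB   # the prime 2**32 - 5; far too small in practice, illustration only
g = 5

a = secrets.randbelow(p - 2) + 1          # user's ephemeral secret
b = secrets.randbelow(p - 2) + 1          # SP's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)         # public values exchanged over the wire

assert pow(B, a, p) == pow(A, b, p)       # both sides derive the same shared key
print(hex(pow(B, a, p)))
```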

    • Study and Analysis of Recursive Side DNS Security

      2024, 35(10):4876-4911. DOI: 10.13328/j.cnki.jos.006987 CSTR: 32375.14.jos.006987


      Abstract:Internet users rely on DNS resolution before accessing network applications, so DNS security is the first gateway for ensuring the normal operation of the network. If the security of DNS cannot be effectively guaranteed, then even when the security protection of other network systems is strong, attackers can render the network unusable by attacking the DNS system. At present, malicious DNS events are still on the rise, and the development of DNS attack detection and defense technology still cannot meet practical needs. From the perspective of the recursive servers that directly serve users' DNS requests, this study comprehensively summarizes, along two classification dimensions, the security problems faced in the DNS resolution process, including the various security events caused by attacks or system vulnerabilities, the detection methods for these security events, and the corresponding defense and protection technologies. In summarizing the security events and the detection and defense technologies, the study analyzes the characteristics of representative methods and discusses future research directions in the DNS security field.

    • Adaptive Data Placement Strategy in Mobile Distributed Storage System

      2024, 35(10):4912-4929. DOI: 10.13328/j.cnki.jos.006986 CSTR: 32375.14.jos.006986


      Abstract:Distributed storage systems are receiving more and more attention in mobile network scenarios. Data placement, a key technology of distributed storage, is crucial to improving the success rate of distributed data storage. However, owing to unstable wireless signals and fluctuating network bandwidth in mobile environments, traditional data placement strategies, such as the random placement strategy and the storage-aware placement strategy, achieve low data transmission success rates because neither takes network bandwidth into account during data placement. To solve this problem for mobile distributed storage systems, this study proposes a bandwidth-aware adaptive data placement strategy (BADP). The key idea is that BADP adopts the group mobility model to sense the network bandwidth of nodes and treats bandwidth as an important factor in data placement, thus selecting nodes with good performance to achieve adaptive data placement and improve the success rate of data transmission. BADP has three design features: (1) adopting the group mobility model to sense the network bandwidth of nodes; (2) managing node information in groups to reduce communication overhead and building a node selection tree with a heap; (3) selecting nodes with good performance through adaptive data placement to improve the success rate of data transmission. Experiments show that when the network changes dynamically, BADP improves the success rate of data transmission by at least 30.6% and 34.6% compared with the random placement strategy and the storage-aware placement strategy, respectively, while consistently keeping communication overhead low.
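
      Heap-based node selection can be sketched as follows: candidate nodes are keyed by sensed bandwidth, with free space breaking ties, and the best candidates are popped as replica targets. The scoring and fields are hypothetical; BADP's group-based sensing is more involved:

```python
import heapq

nodes = [
    {"id": "n1", "bandwidth": 2.0, "free_gb": 50},
    {"id": "n2", "bandwidth": 8.5, "free_gb": 20},
    {"id": "n3", "bandwidth": 5.0, "free_gb": 80},
]

def pick_targets(nodes, replicas=2, size_gb=10):
    usable = [n for n in nodes if n["free_gb"] >= size_gb]
    # max-heap via negated keys: bandwidth dominates, free space breaks ties
    heap = [(-n["bandwidth"], -n["free_gb"], n["id"]) for n in usable]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(min(replicas, len(heap)))]

print(pick_targets(nodes))   # ['n2', 'n3']
```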

    • Novel and Universal OS Structure Model Based on Hierarchical Software Bus

      2024, 35(10):4930-4947. DOI: 10.13328/j.cnki.jos.006965 CSTR: 32375.14.jos.006965


      Abstract:The major challenges traditional operating system (OS) design faces are the increasing number, diversity, and distribution scope of the resources to be managed and the frequent changes in system state. However, the structures of existing OSs have become the biggest obstacle to solving these problems, because (1) the tight coupling and centralization of the structure lead to poor flexibility and scalability and a fragmented OS ecosystem, and (2) unitary isolation mechanisms, such as kernel-user isolation, create contradictions between capabilities such as security and performance. Therefore, this study combines hierarchical software bus (softbus) principles with isolation mechanisms to organize the OS and proposes a new OS model termed Yggdrasil. Yggdrasil decomposes an OS into component nodes connected by softbuses, whose communications are standardized as message passing over the softbus. To support the division of isolated states, such as supervisor mode, and different software hierarchies, Yggdrasil introduces bridge nodes for cascading and controlled communication between softbuses, and it enhances the logical representation capability and scalability of the OS through a self-similar topology. Additionally, the simplicity and hierarchy of the softbus help achieve decentralization. To verify the feasibility of Yggdrasil, the study builds the hierarchical softbus model for OS (HiBuOS) and demonstrates the feasibility of developing a new OS based on Yggdrasil's ideas through three specific designs: (1) designing and planning a hierarchical softbus structure according to the scale and requirements of the target operating system; (2) selecting specific isolation and communication mechanisms to instantiate the bridge nodes and softbuses; (3) realizing OS services in the hierarchical softbus style. Finally, the evaluation shows that HiBuOS has notable potential and advantages in enhancing system scalability, security, performance, and ecological development without significant performance loss.
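
      The softbus style can be illustrated with a minimal publish/subscribe bus plus a bridge node that forwards selected topics across hierarchy levels under a stand-in policy check. This is illustrative of the model's style, not HiBuOS's actual interfaces:

```python
class SoftBus:
    def __init__(self, name):
        self.name, self.subs = name, {}
    def subscribe(self, topic, handler):
        self.subs.setdefault(topic, []).append(handler)
    def publish(self, topic, msg):
        for handler in self.subs.get(topic, []):
            handler(msg)

kernel_bus, user_bus = SoftBus("kernel"), SoftBus("user")

def bridge(topic):
    # bridge node: controlled, policy-checked forwarding between buses
    def forward(msg):
        if msg.get("allowed", True):          # stand-in isolation policy
            user_bus.publish(topic, msg)
    kernel_bus.subscribe(topic, forward)

bridge("fs.read.done")
user_bus.subscribe("fs.read.done", lambda m: print("app got:", m["data"]))
kernel_bus.publish("fs.read.done", {"data": b"hello", "allowed": True})
```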


Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: CNY 70