• Volume 34, Issue 2, 2023 Table of Contents
    • Review Articles
    • Survey of Evolutionary Multitasking Optimization

      2023, 34(2):509-538. DOI: 10.13328/j.cnki.jos.006704


      Abstract: Evolutionary multitasking optimization focuses on population-based search and solves multiple tasks simultaneously via genetic transfer between tasks. It is considered the third optimization paradigm after single-objective optimization and multi-objective optimization, and has become a hot research topic in the field of computational intelligence in recent years. The evolutionary multitasking optimization algorithm simulates the biocultural phenomena of assortative mating and vertical cultural transmission in nature, improving the convergence of multiple optimization tasks through inter-task and intra-task knowledge transfer. This study gives a systematic review of the research progress in evolutionary multitasking in recent years. Firstly, the concept of evolutionary multitasking optimization is introduced and five related definitions are given; the problem is also explained from the perspective of knowledge transfer optimization. Secondly, the basic framework of the evolutionary multitasking optimization algorithm is introduced in detail, together with improvements to it and other algorithms implemented on its basis. Thirdly, the academic and engineering applications of the algorithm are summarized. Finally, the existing challenges in the field of evolutionary multitasking optimization are pointed out and an outlook on its further development is presented.

    • Research on Dynamic Graph Partitioning Algorithms: A Survey

      2023, 34(2):539-564. DOI: 10.13328/j.cnki.jos.006705


      Abstract: Graph partitioning is the primary step of large-scale distributed graph processing and plays a fundamental role in the storage, query, processing, and mining of graph applications. Since graph data in the real world are always dynamic, dynamic graph partitioning has become a hot research topic. This study systematically introduces the current algorithms for dynamic graph partitioning, including streaming graph partitioning algorithms, incremental graph partitioning algorithms, and graph repartitioning algorithms. Firstly, the study introduces three different partitioning strategies, two different sources of graph dynamism, and the dynamic graph partitioning problem. Then, three kinds of streaming graph partitioning algorithms are introduced: hash-based algorithms, neighbor-distribution-based algorithms, and other novel algorithms. Secondly, two kinds of incremental graph partitioning algorithms, single-element incremental graph partitioning and batch incremental graph partitioning, are introduced. Thirdly, repartitioning algorithms oriented to graph structure and repartitioning algorithms oriented to graph computation are introduced, respectively. Finally, based on the analysis and comparison of the existing methods, the main challenges of dynamic graph partitioning are summarized and corresponding research problems are proposed.
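
      As a concrete illustration of the streaming strategies surveyed above, the following minimal Python sketch contrasts plain hash partitioning with a greedy, neighbor-distribution-based rule in the spirit of LDG; the function names, scoring rule, and capacity handling are illustrative assumptions, not taken from any particular paper.

```python
def hash_partition(vertex, k):
    """Hash partitioning: assign a vertex to one of k parts by its id alone."""
    return hash(vertex) % k

def greedy_partition(vertex, neighbors, parts, capacity):
    """Neighbor-distribution-based rule: place the vertex in the part holding
    most of its already-placed neighbors, discounted by the part's load."""
    best_part, best_score = 0, float("-inf")
    for p, members in enumerate(parts):
        score = len(members & neighbors) * (1.0 - len(members) / capacity)
        if score > best_score:
            best_part, best_score = p, score
    parts[best_part].add(vertex)
    return best_part
```

      Streaming partitioners of this kind see each vertex (or edge) only once and must decide immediately, which is why they trade partition quality for a single pass over the graph.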

    • Overview of Evolutionary Algorithms for Complex Constrained Optimization Problems

      2023, 34(2):565-581. DOI: 10.13328/j.cnki.jos.006711


      Abstract: Most engineering optimization problems can be formulated as constrained optimization problems. Owing to their sound performance, evolutionary algorithms have been widely applied to constrained optimization problems in recent years. Nevertheless, constraints make the solution space of a problem discrete, shrunken, and changeable, which poses great challenges to evolutionary algorithms for solving constrained optimization problems. Evolutionary algorithms integrating constraint handling techniques have therefore become a research hotspot. In addition, as research has deepened in recent years, constraint handling techniques have been widely developed for the optimization of complex engineering application problems, such as multi-objective, high-dimensional, and equality-constrained problems. According to the source of complexity, this study divides evolutionary optimization for complex constrained optimization problems into evolutionary optimization algorithms for complex objectives and evolutionary algorithms for complex constraint scenarios. The challenges that the complexity of practical engineering applications poses to constraint handling techniques, as well as the latest research progress, are discussed. Finally, future research trends and challenges are summarized.
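
      The most basic constraint handling technique in this family is the penalty function, which folds constraint violations into the objective so that an unconstrained evolutionary algorithm can be applied. The sketch below is a generic static-penalty formulation with illustrative parameter names, not a method from any specific paper.

```python
def penalized_fitness(x, objective, constraints, rho=1e3):
    """Static penalty: minimize objective(x) plus rho times the total
    violation of inequality constraints written in the form g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + rho * violation
```

      With a large enough rho, any feasible point scores better than any infeasible one; however, tuning rho is itself nontrivial, which motivates the adaptive and multi-objective constraint handling techniques the survey covers.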

    • Comparing Software Defect Prediction Models: Research Problem, Progress, and Challenges

      2023, 34(2):582-624. DOI: 10.13328/j.cnki.jos.006714


      Abstract: In recent years, a large number of software defect prediction models have been proposed. Once a new defect prediction model is proposed, it is often compared with previous models to evaluate its effectiveness. However, there is no consensus on how such comparisons should be conducted: different studies often adopt different experimental settings, which may lead to misleading conclusions and, consequently, to missed opportunities to improve the effectiveness of defect prediction. This study systematically reviews the comparative experiments on software defect prediction models conducted by scholars worldwide in recent years. First, the comparison of defect prediction models is introduced. Then, the research progress is summarized from the perspectives of defect datasets, dataset splits, baseline models, performance indicators, and classification thresholds. Finally, the opportunities and challenges in comparative experiments on defect prediction models are summarized and future research directions are outlined.

    • Survey of Source Code Bug Detection Based on Deep Learning

      2023, 34(2):625-654. DOI: 10.13328/j.cnki.jos.006696


      Abstract: Source code bug (vulnerability) detection is the process of judging whether there are unexpected behaviors in program code. It is widely used in software engineering tasks such as software testing and software maintenance, and plays a vital role in software functional assurance and application security. Traditional vulnerability detection research is based on program analysis, which usually requires strong domain knowledge and complex calculation rules and faces the state explosion problem, resulting in limited detection performance and leaving considerable room for improvement in the false positive and false negative rates. In recent years, the vigorous development of the open source community has accumulated massive amounts of data centered on open source code. In this context, the feature learning capability of deep learning can automatically learn semantically rich code representations, thereby providing a new way for vulnerability detection. This study collects the latest high-quality papers in this field and systematically summarizes and explains the current methods from two aspects: vulnerability code datasets and deep learning vulnerability detection models. Finally, it summarizes the main challenges faced by research in this field and looks forward to possible future research focuses.

    • Querying Algorithm for Steiner Maximum Path-connected Components in Heterogeneous Information Networks

      2023, 34(2):655-675. DOI: 10.13328/j.cnki.jos.006687


      Abstract: Heterogeneous information networks (HINs) are directed graphs comprising multiple types of objects (vertices) and links (edges), which can express rich semantic information and complex structural information. The problem of cohesive subgraph query in HINs, i.e., given a query vertex q, finding the cohesive subgraphs of the HIN that contain q, has become an important problem and has been widely applied in areas such as event planning, biological analysis, and product recommendation. Yet existing methods mainly have two deficiencies: (1) cohesive subgraphs based on relational constraints and motif cliques contain multiple types of vertices, which makes it hard to handle scenarios that focus on a specific type of vertex; (2) although meta-path-based methods can query cohesive subgraphs with a specific type of vertex, they ignore the meta-path-based connectivity between the vertices in the subgraphs. Therefore, a novel connectivity based on edge-disjoint meta-path instances, i.e., path-connectivity, is first proposed for HINs. Then, the k-path connected component (k-PCC), which requires the path-connectivity of the subgraph to be at least k, is proposed based on path-connectivity. Next, the Steiner maximum path-connected component (SMPCC) is further proposed, which is the k-PCC containing q with the maximum path-connectivity. Finally, an efficient graph-decomposition-based k-PCC discovery algorithm is designed, and based on it, an optimized SMPCC query algorithm is proposed. Extensive experiments on five real and synthetic HINs prove the effectiveness and efficiency of the proposed approaches.

    • Cross-lingual Aspect-level Sentiment Classification with Graph Neural Network

      2023, 34(2):676-689. DOI: 10.13328/j.cnki.jos.006667


      Abstract: Most mature labeled datasets for aspect-level sentiment analysis are in English; such datasets are quite rare in low-resource languages such as Chinese. To utilize the vast but unlabeled Chinese aspect-level sentiment classification data, this study works on cross-lingual aspect-level sentiment classification. The most central and difficult problem in a cross-lingual task is how to construct the connection between documents in the two languages. To solve this problem, this study proposes a graph neural network structure to model multilingual word-to-document and word-to-word connections, which can effectively model the interaction between the high-resource (source) language and the low-resource (target) language. Connections, including multilingual and monolingual word-to-document connections, are constructed to tie the source language data to the target language data and are modeled by the graph neural network, making it possible to use an English labeled dataset as the training set to predict on a Chinese dataset. Compared with baseline models, the proposed model achieves a higher F1-score, which indicates that the presented work does contribute to cross-lingual aspect-level sentiment classification.

    • Review Articles
    • Survey on Aspect Term Extraction

      2023, 34(2):690-711. DOI: 10.13328/j.cnki.jos.006709


      Abstract: Aspect term extraction is a natural language processing task that automatically recognizes and extracts aspect terms from free-form text. This study first goes over the basic task of aspect term extraction, its authoritative datasets, and the general evaluation specifications for it. On this basis, the study comprehensively reviews the state-of-the-art techniques for the task, including traditional extraction techniques based on statistical strategies and feature engineering, and neural extraction techniques using deep learning. In particular, taking the nature of language expression as the starting point and combining it with the limitations of existing techniques, the study elaborates the technical difficulties and the future development prospects of aspect term extraction.

    • Survey on Reverse-engineering Artificial Intelligence

      2023, 34(2):712-732. DOI: 10.13328/j.cnki.jos.006699


      Abstract: In the era of big data, artificial intelligence, especially its representative technologies of machine learning and deep learning, has made great progress in recent years. As artificial intelligence is widely applied to various real-world applications, its security and privacy problems have gradually been exposed and have attracted increasing attention in the academic and industrial communities. Researchers have proposed many works addressing the security and privacy issues of machine learning from the perspectives of attack and defense. However, current work on the security of machine learning lacks a complete theoretical framework and systematic framework. This survey summarizes and analyzes the reverse recovery of training data and model structure as well as model defects, and gives a formal definition and classification system of reverse-engineering artificial intelligence. In the meantime, this survey summarizes the progress of machine learning security on the basis of reverse-engineering artificial intelligence, where the security of machine learning can be regarded as one of its applications. Finally, the current challenges and future research directions of reverse-engineering artificial intelligence are discussed; building a theoretical framework for reverse-engineering artificial intelligence can promote the healthy development of artificial intelligence.

    • Research and Development on Deep Hierarchical Reinforcement Learning

      2023, 34(2):733-760. DOI: 10.13328/j.cnki.jos.006706


      Abstract: Deep hierarchical reinforcement learning (DHRL) is an important research field in deep reinforcement learning (DRL). It focuses on sparse rewards, sequential decisions, and weak transfer ability, problems that are difficult for classic DRL to solve. DHRL decomposes complex problems and constructs a multi-layered structure for DRL strategies based on hierarchical thinking. By using temporal abstraction, DHRL combines lower-level actions to learn semantic higher-level actions. In recent years, with the development of research, DHRL has made breakthroughs in many domains and shows strong performance. It has been applied to real-world fields such as visual navigation, natural language processing, recommendation systems, and video description generation. In this study, the theoretical basis of hierarchical reinforcement learning (HRL) is first introduced. Secondly, the key technologies of DHRL are described, including hierarchical abstraction techniques and common experimental environments. Thirdly, taking the option-based deep hierarchical reinforcement learning framework (O-DHRL) and the subgoal-based deep hierarchical reinforcement learning framework (G-DHRL) as the main research objects, the research status and development trends of various algorithms are analyzed and compared in detail. In addition, a number of real-world DHRL applications are discussed. Finally, DHRL is summarized and its prospects are discussed.

    • Survey of Key Techniques of HTAP Databases

      2023, 34(2):761-785. DOI: 10.13328/j.cnki.jos.006713


      Abstract: Hybrid transactional/analytical processing (HTAP) relies on a single system to process mixed workloads of transactions and analytical queries simultaneously. It not only eliminates the extract-transform-load (ETL) process but also enables real-time data analysis. Nevertheless, to process mixed OLTP and OLAP workloads, such systems must balance the trade-off between workload isolation and data freshness, mainly because of the interference between highly concurrent, short-lived OLTP workloads and bandwidth-intensive, long-running OLAP workloads. Most existing HTAP databases leverage the best of row stores and column stores to support HTAP. As different HTAP applications have different requirements, HTAP databases adopt disparate storage strategies and processing techniques. This study comprehensively surveys HTAP databases. A taxonomy of state-of-the-art HTAP databases is introduced according to their storage strategies and architectures; their pros and cons are then summarized and compared. Different from previous works that focus on single-model and Spark-based loosely coupled HTAP systems, this survey focuses on real-time HTAP databases with a row-column dual store. Moreover, a deep dive into their key techniques is provided regarding data organization, data synchronization, query optimization, and resource scheduling. Existing HTAP benchmarks are also introduced. Finally, the research challenges and open problems of HTAP are discussed.

    • Next Point-of-interest Recommendation Approach with Global and Local Feature Fusion

      2023, 34(2):786-801. DOI: 10.13328/j.cnki.jos.006712


      Abstract: As considerable amounts of mobility data have been accumulated, next point-of-interest (POI) recommendation has become one of the important tasks in location-based social networks. Existing approaches for next POI recommendation mainly focus on capturing local dynamic preferences from users' recent check-in records but ignore the global static information in historical mobility data. As a result, users' preferences cannot be mined further, which limits recommendation accuracy. To this end, a global and local feature fusion based approach for next POI recommendation (GLNR) is proposed. GLNR models user dynamic behavior by taking advantage of the sequential dependencies between check-ins and the underlying relationships between entities contained in global static information. Two types of global static information are first introduced, i.e., user-POI association paths and POI-POI association paths, to learn users' global static preferences and the global dependencies between successive check-ins. Specifically, a heterogeneous information network is constructed based on interaction data and geographical information. To capture global static features, a relevance-guided path sampling strategy and a hierarchical-attention-based representation learning method are designed. Moreover, the representations of POIs in a user's check-in sequence are updated based on the two types of global static features. A position- and time-interval-aware self-attention mechanism is further utilized to model the sequential dependencies between multiple check-ins. Then, the check-in probability is predicted and a set of next POIs is recommended for the target user. Finally, extensive experiments are conducted on two real-world datasets to evaluate the performance of the proposed model. Experimental results validate the superiority of GLNR in improving recommendation accuracy. Besides, a case study indicates that the explicit paths in the global static information help GLNR provide interpretable recommendations.

    • k-Lin-based Homomorphic Encryption Schemes

      2023, 34(2):802-817. DOI: 10.13328/j.cnki.jos.006694


      Abstract: Blockchain, as one of the underlying key technologies of digital currency, has received extensive attention with the rapid development of digital currency. Due to the decentralization, tamper resistance, traceability, and other properties of blockchain, more and more enterprise and individual users now choose blockchain technology for data transmission and recording. On the one hand, the openness and transparency of the blockchain fully guarantee the availability of data; on the other hand, they pose high risks to users' privacy. To balance the confidentiality and availability of data, homomorphic encryption is usually employed in blockchain security solutions. However, in practice, the security strength of a deployed homomorphic encryption scheme is likely to change over time. Considering the complex diversity and distributed characteristics of blockchain application scenarios, once a homomorphic encryption scheme is deployed, adjusting its security strength over time entails a very heavy workload. To make things worse, regulation requirements in many blockchain scenarios (especially for data published and transmitted by certain group members) call for a trusted third party (TTP), such as a regulator, that is able to decrypt all the corresponding ciphertexts on the chain. If a traditional homomorphic encryption scheme is deployed, the TTP needs to store all users' secret keys, which introduces many practical problems for the TTP's key management and storage. According to the current application scenarios and security requirements of blockchain, an additively homomorphic encryption scheme is proposed, whose security is based on the decisional k-Lin assumption over Z*_{N^2}, where N=pq. The proposed scheme can be proved IND-CCA1 secure in the standard model and has the following three advantages: (i) fine-grained adjustment of the scheme's security strength can be achieved by adjusting the parameter k; (ii) it is a double decryption scheme (i.e., it has two kinds of secret keys, one held by a certain user and the other kept by the TTP, so the TTP can use its key to decrypt all ciphertexts encrypted by users under their own public keys); (iii) it can easily degenerate into an IND-CPA secure homomorphic encryption scheme, and the resulting scheme, with a shorter public/secret key pair and shorter ciphertexts, is also an additively homomorphic double decryption scheme.

    • Continuous Leakage-resilient Broadcast Key-encapsulation Mechanism with CCA Security

      2023, 34(2):818-832. DOI: 10.13328/j.cnki.jos.006398


      Abstract: Certificate-based cryptography, which has attracted great interest, can solve the certificate management issue of traditional public-key cryptography and at the same time avoid the key escrow problem of identity-based cryptography; it has therefore drawn the attention of cryptography researchers. Traditional security models assume that an adversary cannot obtain information leaked from internal secret states such as secret keys; however, in real environments some information can be leaked through various leakage attacks. In addition, because broadcast communication offers higher message transmission efficiency, many cryptographic schemes with broadcast communication functionality have been created. To provide both leakage resilience and broadcast communication for a certificate-based broadcast key encapsulation mechanism (CB-BKEM), a concrete construction of CB-BKEM is proposed, and its leakage-resilient security against chosen-ciphertext attacks is proved based on the decisional Diffie-Hellman assumption. To further improve the practicability of CB-BKEM, continuous leakage resilience is studied, and it can be obtained by performing key updates. The performance analysis shows that the proposed construction has higher computational efficiency while maintaining provable security, leakage resilience, and broadcast communication.

    • Quantified Analysis of Congestion Situation in Intelligent Transportation Towards Frequency-reduced Spoofing Attack

      2023, 34(2):833-848. DOI: 10.13328/j.cnki.jos.006416


      Abstract: With the development and openness of connected vehicles, the planning system of the intelligent signal system (I-SIG system) faces serious security threats from network attacks. Prior work has revealed that a fixed-frequency data spoofing attack on a planning weakness can cause heavy traffic congestion. However, knowledge for security detection, warning, and defense is still very limited, and no work provides a full time-serial quantification and analysis of the congestion situation across attack frequencies from high to low. Targeting the open source I-SIG system and its COP planning algorithm, this study proposes a unified framework to quantify and analyze the congestion situation under spoofing attacks of multiple frequencies, from high to low. Firstly, a third-order space-time tensor space is constructed. Based on tensor computation, a function-dependent integrated analysis approach is implemented, in which max-min analysis, stationarity analysis, and correlation analysis are developed. Experiments on the traffic simulation platform VISSIM show the effectiveness of the quantification and analysis and demonstrate that the results are meaningful.

    • Energy-efficient Computing Offloading Algorithm for Mobile Edge Computing Network

      2023, 34(2):849-867. DOI: 10.13328/j.cnki.jos.006417


      Abstract: Mobile edge computing (MEC) is an efficient technology that enables end users to achieve high bandwidth and low latency by offloading computationally intensive tasks from mobile devices to edge servers. Computation offloading in the mobile edge computing environment plays an important role in reducing user load and enhancing terminal computing capabilities. Taking service caching into account, this study proposes a cloud-edge-end collaborative computation offloading framework into which D2D communication and opportunistic networks are introduced. Based on the established model, the offloading decision problem is transformed into a mixed integer nonlinear programming problem, and an iterative mechanism is formulated for the non-cooperative game interaction between wireless characteristics and mobile users to jointly determine the computation offloading plan. It is theoretically proved that the multi-user computation offloading game under this framework is an exact potential game (EPG) and that the offloading decision converges to the optimal-benefit strategy for the entire network. Taking into account the computing resources of the servers, the amount of data in the offloaded tasks, and the delay requirements of the tasks, an improved optimal user-association matching algorithm is proposed based on Gale-Shapley matching theory. Finally, simulation results show that the proposed offloading decision algorithm has a faster convergence rate and is superior to other benchmark algorithms in terms of energy efficiency.
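
      For reference, the classical Gale-Shapley deferred-acceptance procedure that the matching algorithm above builds on can be sketched as follows; this is the textbook one-to-one version with illustrative inputs, not the paper's improved capacity- and delay-aware variant.

```python
def gale_shapley(user_prefs, server_prefs):
    """Deferred acceptance: users propose in preference order; servers
    tentatively accept and may later trade up to a preferred user."""
    rank = {s: {u: i for i, u in enumerate(p)} for s, p in server_prefs.items()}
    next_choice = {u: 0 for u in user_prefs}   # index of next server to try
    match = {}                                 # server -> user
    free = list(user_prefs)
    while free:
        u = free.pop()
        s = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        if s not in match:
            match[s] = u                       # server was unmatched
        elif rank[s][u] < rank[s][match[s]]:
            free.append(match[s])              # server prefers u: bump old user
            match[s] = u
        else:
            free.append(u)                     # rejected; u tries next server
    return match
```

      The procedure always terminates with a stable matching, which is why it is a natural starting point for user-server association before capacity and delay constraints are layered on.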

    • Robust Image Hashing in Encrypted Domain

      2023, 34(2):868-883. DOI: 10.13328/j.cnki.jos.006419


      Abstract: With the development of cloud computing, more and more multimedia data is stored in the cloud. For security, images often need to be encrypted before being uploaded to the cloud for storage or computation. To verify the integrity of image information and the authenticity of image content without knowing the plaintext content of an encrypted image, an image hash algorithm based on Paillier homomorphic encryption is proposed. The algorithm consists of three parts: the image owner encrypts the image, the cloud server generates a ciphertext image hash, and the receiver generates a plaintext image hash. Specifically, the image owner encrypts the image and uploads it to the cloud server. The cloud server uses the algorithms of the Paillier cryptosystem to compute the DCT and Watson human visual features in the encrypted domain, and uses a key-controlled pseudo-random matrix to increase the randomness of the ciphertext hash, thereby improving the security of the hash. The receiver decrypts and analyzes the received ciphertext hash to obtain the plaintext image hash. Experimental results show that the proposed algorithm has ideal performance in terms of robustness, uniqueness, and security.
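
      The encrypted-domain computation relies on Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The following toy Python sketch of textbook Paillier (with tiny, insecure primes purely for illustration) demonstrates that property; it is not the paper's feature-extraction pipeline.

```python
import random
from math import gcd

def paillier_keygen(p, q):
    """Textbook Paillier key generation with the common choice g = n + 1."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael lambda(n)
    mu = pow(lam, -1, n)                            # valid since g = n + 1
    return (n, n + 1), (lam, mu)

def paillier_encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def paillier_decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    # L(x) = (x - 1) / n, applied to c^lambda mod n^2
    return ((pow(c, lam, n * n) - 1) // n * mu) % n
```

      Because E(m1)*E(m2) mod n^2 decrypts to m1+m2 mod n, a server can sum and scale encrypted pixel values without seeing them, which is exactly what linear transforms like the DCT require.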

    • Offline Witness Encryption with Fully Adaptive Security

      2023, 34(2):884-898. DOI: 10.13328/j.cnki.jos.006423


      Abstract: Compared with witness encryption, offline witness encryption is more practical because it shifts the heavy computation to the setup phase. However, most current offline witness encryption schemes only satisfy selective security, that is, the adversary must commit to a pair of challenge messages (m0, m1) and an instance x before obtaining the public parameters. Chvojka et al. proposed an offline witness encryption construction that achieves semi-adaptive security by introducing puncturable encryption. Semi-adaptive security permits the adversary to choose the challenge messages adaptively; however, the instance of the considered NP language used to create the challenge ciphertext must still be fixed before the adversary gets the public parameters (ppe, ppd). They therefore left the construction of offline witness encryption schemes with fully adaptive security as an open problem. This study proposes the first offline witness encryption scheme that achieves fully adaptive security. The setup algorithm outputs public parameters (ppe, ppd), where the encryption key ppe contains two public keys, a common reference string, and a commitment, and the decryption key ppd is an obfuscated circuit. This algorithm needs to be run only once, and the parameters can be used for arbitrarily many encryptions. The encryption algorithm outputs a Naor-Yung-style ciphertext by using a key encapsulation mechanism and a non-interactive witness-indistinguishable proof system. The problem that the challenge plaintext must be output in advance in the selective security proof is solved by selecting the encapsulation key in advance. In addition, the proposed scheme can be turned directly into a functional offline witness encryption scheme, realizing the reuse of the decryption key for a function f by embedding f into the decryption key in the key generation phase.

    • Real-time Route Selection in Urban Road Network Based on Road Risk Assessment

      2023, 34(2):899-914. DOI: 10.13328/j.cnki.jos.006424


      Abstract: To alleviate urban traffic congestion and avoid traffic accidents, route selection in urban road networks has become a hot research topic. With the development of edge computing and intelligent vehicle terminal technology, vehicles driving in urban road networks are transitioning from self-organizing networks to the Internet of vehicles (IoV) paradigm, which shifts route selection from computation based on static historical traffic data to computation based on real-time traffic information. In current research on route selection in urban road networks, many scholars focus on how to improve travel efficiency, reduce travel time, and so on; nevertheless, these studies do not consider the possible risk on the selected route. To address this issue, this study constructs, for the first time, a real-time road risk assessment model based on edge computing (R3A-EC) and proposes a real-time route selection method based on risk assessment (R2S-RA). The R3A-EC model makes full use of the low latency and high reliability of edge computing to assess the risk of urban roads in real time, and uses minimum-risk Bayes decision making to validate whether a risk exists. Finally, based on the real-time risk assessment model, route selection in the urban road network is optimized to realize a real-time, dynamic, low-risk route selection method. Compared with the traditional shortest-path method (Dijkstra), the VANET-based shortest-time method, the MEC-based dynamic path planning algorithm, and the bidirectional A* shortest-path optimization algorithm, the proposed R2S-RA method can better choose the optimal route that takes both road risk and travel time into account, so as to reduce the occurrence of traffic congestion and traffic accidents.
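
      Minimum-risk Bayes decision making, which R3A-EC uses to validate whether a road is risky, picks the action with the smallest expected loss under the posterior over road states. The sketch below is a generic illustration with made-up posterior and loss values, not the paper's model.

```python
def min_risk_decision(posterior, loss):
    """posterior: P(state | observation); loss[action][state]: cost of
    taking `action` when the true state is `state`. Returns the action
    that minimizes the expected (Bayes) risk."""
    risks = {a: sum(cost[s] * posterior[s] for s in posterior)
             for a, cost in loss.items()}
    return min(risks, key=risks.get)
```

      With an asymmetric loss (missing a genuinely risky road costs far more than a false warning), the decision threshold shifts toward warning even at moderate posterior risk.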

    • Two-party EdDSA Signature Scheme Against Differential Fault Attack

      2023, 34(2):915-931. DOI: 10.13328/j.cnki.jos.006505

      Abstract (633) HTML (1378) PDF 6.07 M (2199) Comment (0) Favorites

      Abstract:Cryptocurrencies such as Bitcoin and Libra, built on blockchain technology, have set off a wave of digital economy. They ensure the verifiability and integrity of transactions through digital signatures, in which the private key guarantees ownership of currency assets; if the private key is lost or stolen, the security of cryptocurrency assets is seriously threatened. Compared with the elliptic curve digital signature algorithm (ECDSA), the Edwards-curve digital signature algorithm (EdDSA) has the advantages of faster computation and smaller key and signature sizes, and is widely used to sign Libra transactions. However, as a deterministic signature algorithm, it is vulnerable to differential fault attacks, which can lead to key loss and leakage. How to resist this kind of attack and design a provably secure EdDSA signature is a challenge. Therefore, this study first defines the security properties that a digital signature scheme resistant to differential fault attacks must satisfy, applies differential fault attack techniques to cryptanalyze the EdDSA signature algorithm, and proposes an EdDSA signature scheme that resists differential fault attacks, proving that the scheme satisfies existential unforgeability under adaptive chosen-message attack (EUF-CMA) and resistance to differential fault attacks. To further reduce the risk of signing-key leakage, a two-party cooperative EdDSA signature scheme against differential fault attacks is designed with the help of Paillier homomorphic encryption, and its security is proved under the universally composable (UC) security model. Finally, both the two-party cooperative ECDSA signature algorithm and the proposed two-party cooperative EdDSA signature algorithm against differential fault attacks are implemented, and the implementation demonstrates the effectiveness of the proposed scheme.
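      The vulnerability the abstract mentions stems from EdDSA's deterministic nonce: a fault that changes the hashed message between two signing runs yields two signatures that share the same nonce, from which the private scalar falls out algebraically. A toy demonstration over a small prime (standing in for the group order; not real EdDSA arithmetic) of the Schnorr-style relation s = r + h·a mod L:

      ```python
      # Toy demonstration (small prime, NOT real EdDSA) of why a
      # deterministic nonce enables differential fault attacks:
      # two signatures s = r + h*a (mod L) that share nonce r but
      # differ in the hash h leak the private scalar a.
      L = 1000003  # small prime standing in for the group order
      a = 424242   # private scalar (secret)
      r = 777777   # deterministic nonce -- identical in both runs

      h1, h2 = 123456, 654321          # hash values: correct vs. faulted run
      s1 = (r + h1 * a) % L
      s2 = (r + h2 * a) % L

      # The attacker recovers a from the two signatures alone:
      # s1 - s2 = (h1 - h2) * a  (mod L), so divide by (h1 - h2).
      recovered = ((s1 - s2) * pow(h1 - h2, -1, L)) % L
      assert recovered == a
      ```

      Countermeasures, such as the randomization the paper's scheme introduces, break exactly the nonce reuse this calculation depends on.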

    • Efficient Private Set Intersection Protocols with Semi-trusted Cloud Server Aided

      2023, 34(2):932-944. DOI: 10.13328/j.cnki.jos.006397

      Abstract (1174) HTML (1591) PDF 6.87 M (2428) Comment (0) Favorites

      Abstract:Private set intersection (PSI) is a hot topic in privacy-preserving computation. It allows two parties to compute the intersection of their sets without revealing any information beyond the resulting intersection. Prior PSI protocols mostly consider the two-party scenario, with the potential limitation of requiring expensive hardware. In addition, a weak client with low computation capability cannot outsource the computation to a semi-trusted cloud while preserving data privacy. This study designs a new oblivious two-party distributed pseudorandom function (Otd-PRF), which allows semi-trusted cloud servers to participate in the equality test without any leakage of set information. Based on Otd-PRF, a cloud-aided PSI protocol is designed that delegates the major computation to the semi-trusted cloud. A formal security analysis is provided in the semi-honest model, and the protocol is extended to support the computation of private set intersection cardinality. Comparison with related work shows that the proposed protocol is superior in computation and communication complexity, both linear in the size of the client's set. The performance analysis shows that the protocol is friendly to clients with constrained devices in the semi-honest model.
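      The paper's Otd-PRF construction is not spelled out in the abstract. To make the PSI idea concrete, here is a classic DDH-style two-party sketch (toy parameters, far too small for real security, and not the paper's cloud-aided protocol): each party raises hashed elements to its own secret exponent, so matching elements end up at the same doubly-blinded value H(x)^(ab) while non-matching items stay hidden:

      ```python
      import hashlib

      P = 2**127 - 1  # toy Mersenne prime modulus (illustrative only)

      def h(x):
          """Hash a string into the toy group."""
          return int.from_bytes(hashlib.sha256(x.encode()).digest(), "big") % P

      def blind(items, key):
          """First-pass blinding: each element x becomes H(x)^key mod P."""
          return {pow(h(x), key, P) for x in items}

      a_key, b_key = 65537, 99991          # each party's secret exponent
      alice = ["apple", "pear", "plum"]
      bob = ["pear", "plum", "kiwi"]

      # Each party sends its blinded set; the other raises every value
      # to its own exponent. Exponentiation commutes, so common elements
      # collide at H(x)^(a*b) regardless of blinding order.
      alice_final = {pow(v, b_key, P) for v in blind(alice, a_key)}
      bob_final = {pow(v, a_key, P) for v in blind(bob, b_key)}

      common = alice_final & bob_final     # "pear" and "plum" collide
      ```

      Comparing only the sizes of the doubly-blinded sets, rather than the sets themselves, gives the cardinality variant the abstract mentions.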

    • Incomplete Overlapping Removal Algorithm for Scatterplots

      2023, 34(2):945-963. DOI: 10.13328/j.cnki.jos.006673

      Abstract (580) HTML (1208) PDF 2.62 M (1926) Comment (0) Favorites

      Abstract:Data point overlapping frequently occurs in scatterplots, resulting in visual clutter that interferes with visual analysis. Several overlapping removal algorithms have been proposed to remove data point overlapping completely; however, they share some common shortcomings, mainly including enlarged canvas size, distorted data distribution, and unsatisfactory time consumption. This work proposes that the complete removal of data point overlapping is non-essential, since slight overlapping is acceptable in some data analysis scenarios. Therefore, an incomplete overlapping removal algorithm is designed for scatterplots. First, the algorithm generates virtual data points in the blank areas of a scatterplot using a semi-random generation method. Second, it uses a Voronoi diagram to assign each data point an irregular cell, and then moves data points to cell centers to reduce the data point overlapping rate while maintaining the natural contour of the data distribution. Finally, the algorithm iteratively runs the Voronoi meshing and data point moving steps until the overlapping rate reaches a preset threshold. A series of objective and subjective experiments are conducted to evaluate the performance of the proposed algorithm and reference algorithms. The results show that users can quickly and accurately accomplish visual analysis tasks, including data point selection and regional density estimation, in scatterplots with slight data point overlapping, and that the proposed algorithm is superior to all reference algorithms in both objective and subjective indicators.
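      The Voronoi-meshing-and-moving step the abstract describes resembles Lloyd-style relaxation. A simplified sketch (not the paper's algorithm; it omits the virtual points and the overlapping-rate threshold, and approximates each Voronoi cell's centroid by assigning a grid of sample positions to their nearest point; canvas size and sample counts are hypothetical):

      ```python
      def relax(points, width=10.0, height=10.0, samples=50, iters=3):
          """Move each point toward the centroid of its Voronoi cell.

          Cells are approximated by nearest-point assignment over a
          samples x samples grid covering the canvas.
          """
          for _ in range(iters):
              sums = {i: [0.0, 0.0, 0] for i in range(len(points))}
              for gx in range(samples):
                  for gy in range(samples):
                      x = (gx + 0.5) * width / samples
                      y = (gy + 0.5) * height / samples
                      # The nearest point owns this sample (its Voronoi cell).
                      i = min(range(len(points)),
                              key=lambda k: (points[k][0] - x) ** 2
                                          + (points[k][1] - y) ** 2)
                      s = sums[i]
                      s[0] += x; s[1] += y; s[2] += 1
              # Each point jumps to its cell's centroid (empty cells stay put).
              points = [(s[0] / s[2], s[1] / s[2]) if s[2] else points[i]
                        for i, s in sums.items()]
          return points
      ```

      Two nearly coincident points are pushed apart because their Voronoi cells tile the canvas around them, which is the overlap-reducing, contour-preserving behavior the abstract claims.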

    • >Review Articles
    • Group Activity Recognition in Videos: A Survey

      2023, 34(2):964-984. DOI: 10.13328/j.cnki.jos.006693

      Abstract (1594) HTML (3469) PDF 2.31 M (4509) Comment (0) Favorites

      Abstract:Given a video containing a multi-person scene, a group activity recognition model needs to recognize the group activity that multiple people in the video are completing together. Group activity recognition is an important problem in video understanding and can be applied to sports video analysis, surveillance video recognition, social behavior understanding, and other real-world scenarios. Multi-person scene videos are complicated and rich in spatial-temporal information, which requires the model to extract key information. To accurately recognize group activity, the model should efficiently capture the hierarchical relationships in the scene and extract discriminative spatial-temporal features for each person. Due to its wide range of application requirements, group activity recognition has received extensive attention from researchers. This study conducts an in-depth analysis of a large body of recent research on group activity recognition, summarizes the main challenges of the field, systematically reviews six types of group activity recognition methods, including traditional non-deep-learning methods and methods based on deep learning, and proposes possible directions for future research.

    • Survey on Video Moment Retrieval

      2023, 34(2):985-1006. DOI: 10.13328/j.cnki.jos.006707

      Abstract (2353) HTML (2949) PDF 2.88 M (5253) Comment (0) Favorites

      Abstract:Given a natural language sentence as the query, the task of video moment retrieval aims to localize the most relevant video moment in a long untrimmed video. Given the rich visual, text, and audio information contained in videos, how to fully understand the visual information in the video, how to utilize the text information in the query sentence to enhance the generalization and robustness of the model, and how to align and interact cross-modal information are crucial challenges of video moment retrieval. This study systematically sorts out the work in the field of video moment retrieval and divides it into ranking-based methods and localization-based methods. The ranking-based methods can be further divided into methods that preset candidate clips and methods that generate candidate clips with guidance; the localization-based methods can be divided into one-time localization methods and iterative localization methods. The datasets and evaluation metrics of this field are also summarized, and the latest advances are reviewed. Finally, a related extension task, moment localization from a video corpus, is introduced, and the survey concludes with a discussion of promising trends.

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063