• Volume 33, Issue 7, 2022: Table of Contents
    • Review Articles
    • Survey on Requirements Classification and Formalization of Trustworthy Systems

      2022, 33(7):2367-2410. DOI: 10.13328/j.cnki.jos.006587

      Abstract: Computer systems have been applied in many different areas, and their failure may bring catastrophic results. Systems in different areas have different requirements, and building high-quality trustworthy computer systems is a challenge faced by all of these areas. Formal methods, with their rigorous mathematical foundation, have been widely recognized as effective for developing trustworthy software and hardware systems. Based on formal methods, this paper presents a classification of system requirements and their formalization to support the design of trustworthy systems. First, six types of system characteristics are considered, namely, sequential systems, reactive systems, parallel and communicating systems, real-time systems, probabilistic and stochastic systems, and continuous and hybrid systems. All of these systems may run in different application scenarios, each with its respective requirements. Four classes of scenarios are considered, i.e., hardware systems, security protocols, information flow, and AI systems. For each class of systems and application scenarios above, the related formal methods are introduced and summarized, including formal modeling, property specification, and verification methods and tools. This allows users of formal methods to choose, based on system characteristics and application scenarios, the appropriate formal models, verification methods, and tools, which ultimately helps the design of more trustworthy systems.

    • Research Progress of Face Recognition Anti-spoofing

      2022, 33(7):2411-2446. DOI: 10.13328/j.cnki.jos.006590

      Abstract: Face recognition theory and technology have achieved great success, and face recognition systems have been widely deployed in key fields such as government, finance, and the military. Like other information systems, face recognition systems face various security issues, among which face spoofing is one of the most important. Face spoofing refers to the use of attack methods such as printed photos, video replay, and 3D masks to trick a face recognition system into making false decisions, and thus it must be addressed by any face recognition system. This study investigates the recent progress of face anti-spoofing (FAS). First, FAS-related concepts are outlined. Then, the main scientific problems of FAS and the corresponding solutions, including their advantages and disadvantages, are introduced. Next, existing FAS approaches are divided into two categories, traditional approaches and deep learning-based approaches, and each is described in detail. Moreover, regarding the domain generalization and interpretability issues of deep learning-based FAS, a detailed introduction is given from the perspectives of theory and practice. The mainstream datasets adopted by FAS are then discussed, and evaluation criteria and experimental results based on these datasets are explained as well. Finally, future research directions are discussed and summarized.

    • Special Issue's Articles
    • Minimal-unsatisfiable-core-driven Local Explainability Analysis for Random Forest

      2022, 33(7):2447-2463. DOI: 10.13328/j.cnki.jos.006586

      Abstract: With the broader adoption of machine learning (ML) in security-critical fields, the requirements for the explainability of ML are also increasing. Explainability aims at helping people understand a model's internal working principles and decision basis, which increases the model's reliability. However, research on understanding ML models, such as random forests (RF), is still in its infancy. Considering the rigorous and standardized characteristics of formal methods and their wide application in the field of ML in recent years, this work leverages formal methods and logical reasoning to develop an interpretability method for explaining the predictions of an RF. Specifically, the decision-making process of the RF is encoded into a first-order logic formula, and the proposed approach is centered around minimal unsatisfiable cores (MUC), based on which a local interpretation of feature importance and a counterfactual sample generation method are provided. Experimental results on several public datasets illustrate the high quality of the proposed feature importance measurement, and the counterfactual sample generation method outperforms the state-of-the-art method. Moreover, from the perspective of user friendliness, a user report can be generated from the analysis of the counterfactual samples, providing suggestions for users to improve their own situation in real-life applications.
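The paper's encoding of RF decision paths into first-order logic is not reproduced here, but the MUC idea at the center of the approach can be sketched with a deletion-based extractor over toy propositional clauses (a minimal illustration; the function names and the brute-force satisfiability check are assumptions, not the authors' implementation, which would query a SAT/SMT solver):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    # brute-force SAT check: each clause is a set of integer literals,
    # +i meaning variable i is true, -i meaning variable i is false
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def minimal_unsat_core(clauses, n_vars):
    # deletion-based MUC extraction: try dropping each clause; if the
    # remainder is still unsatisfiable, the clause is not needed in the core
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial, n_vars):
            core = trial
        else:
            i += 1
    return core
```

On the contradictory clause set {x1}, {not x1}, {x2}, the extractor keeps only the first two clauses, which pinpoint the conflict, and this localization is what drives the feature-importance reading in the paper's setting.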

    • Multi-path Back-propagation Method for Neural Network Verification

      2022, 33(7):2464-2481. DOI: 10.13328/j.cnki.jos.006585

      Abstract: Symbolic propagation methods based on linear abstraction play a significant role in neural network verification. This study proposes the notion of multi-path back-propagation for these methods. Existing methods are viewed as using only a single back-propagation path to calculate the upper and lower bounds of each node in a given neural network, and are thus specific instances of the proposed notion. Leveraging multiple back-propagation paths effectively improves the accuracy of this kind of method. For evaluation, the proposed method using multiple back-propagation paths is quantitatively compared with the state-of-the-art tool DeepPoly on the ACAS Xu, MNIST, and CIFAR10 benchmarks. The experimental results show that the proposed method achieves a significant accuracy improvement while introducing only a low extra time cost. In addition, the multi-path back-propagation method is compared with Optimized LiRPA, which is based on global optimization, on the MNIST dataset. The results show that the proposed method still has an accuracy advantage.
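As a rough intuition for why multiple bounding passes help (this is plain interval arithmetic, not DeepPoly's symbolic relaxation, and all names are illustrative), the sketch below propagates a box through an affine layer and a ReLU, and intersects bounds obtained from several sound passes:

```python
def affine_bounds(lo, hi, W, b):
    # sound interval propagation through y = Wx + b: a positive weight
    # pulls from the same-side bound, a negative weight from the opposite one
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so clamping both bounds at zero stays sound
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def tightest(bounds_per_path):
    # the multi-path idea in one line: every path yields sound bounds,
    # so the elementwise intersection is sound and at least as tight
    los, his = zip(*bounds_per_path)
    return [max(ls) for ls in zip(*los)], [min(hs) for hs in zip(*his)]
```

If one pass yields [-1, 1] for a node and another yields [-0.5, 2], their intersection [-0.5, 1] is still a valid enclosure and strictly tighter than either.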

    • Fairness Verification Method of Tree-based Model Based on Probabilistic Model Checking

      2022, 33(7):2482-2498. DOI: 10.13328/j.cnki.jos.006584

      Abstract: More and more social decisions, including legal and financial decisions, are made using machine learning models. For these decisions, the fairness of the algorithm is very important. In fact, one of the goals of introducing machine learning into these settings is to avoid or reduce human bias in decision-making. However, datasets often contain sensitive attributes that can cause machine learning algorithms to produce biased models. Owing to the importance of feature selection for tree-based models, such models are particularly susceptible to sensitive attributes. This study proposes a probabilistic model checking solution to formally verify fairness metrics of decision tree and tree ensemble models with respect to an underlying data distribution and given compound sensitive attributes. The fairness problem is transformed into a probabilistic verification problem, and different fairness metrics are measured. A tool called FairVerify is developed based on the proposed approach and is validated on multiple classifiers across different datasets and compound sensitive attributes, showing sound performance. Compared with existing distribution-based verifiers, the method has higher scalability and robustness.
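As an illustration of the kind of fairness metric being verified (a hypothetical sketch, not FairVerify's probabilistic model checking), the demographic parity gap of a classifier can be computed exactly when the underlying distribution is given explicitly:

```python
def demographic_parity_gap(predict, population):
    # population: list of (features, probability) pairs describing the
    # underlying data distribution exactly; "sensitive" is a binary attribute
    mass = {0: 0.0, 1: 0.0}
    positive = {0: 0.0, 1: 0.0}
    for features, prob in population:
        group = features["sensitive"]
        mass[group] += prob
        if predict(features) == 1:
            positive[group] += prob
    # |P(pred = 1 | s = 1) - P(pred = 1 | s = 0)|
    return abs(positive[1] / mass[1] - positive[0] / mass[0])

# toy decision stump standing in for a tree model; values are made up
stump = lambda x: 1 if x["income"] > 50 else 0
population = [
    ({"sensitive": 0, "income": 60}, 0.25),
    ({"sensitive": 0, "income": 40}, 0.25),
    ({"sensitive": 1, "income": 60}, 0.10),
    ({"sensitive": 1, "income": 40}, 0.40),
]
gap = demographic_parity_gap(stump, population)   # gap of 0.3 here
```

A probabilistic model checker performs the same exact computation symbolically over the tree's branching structure instead of enumerating the distribution.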

    • Deep Learning Test Optimization Method Using Multi-objective Optimization

      2022, 33(7):2499-2524. DOI: 10.13328/j.cnki.jos.006583

      Abstract: With the rapid development of deep learning technology, research on its quality assurance is attracting more attention. Meanwhile, collecting test data is no longer difficult owing to mature sensor technology, but labeling the collected data is costly. To reduce the cost of labeling, existing work attempts to select a test subset from the original test set. However, such work only ensures that the overall accuracy (the accuracy of the target deep learning model on all test inputs of the test set) of the test subset is similar to that of the original test set, ignoring other properties of the original test set; for example, the subset may not fully cover all kinds of test input in the original test set. This study proposes a method based on multi-objective optimization called DMOS (deep multi-objective selection). It first analyzes the data distribution of the original test set with the HDBSCAN (hierarchical density-based spatial clustering of applications with noise) clustering method. Then, it designs optimization objectives based on the characteristics of the clustering results and carries out multi-objective optimization to find an appropriate selection. A large number of experiments are carried out on 8 pairs of classic deep learning test sets and models. The results show that the best test subset selected by DMOS (corresponding to the Pareto optimal solution with the best performance) not only covers more test input categories of the original test set, but also estimates the accuracy of each test input category extremely close to that of the original test set. Meanwhile, it also keeps the overall accuracy and test adequacy close to those of the original test set: the average error of the overall accuracy estimation is only 1.081%, which is 0.845% lower than that of PACE (practical accuracy estimation), an improvement of 43.87%. The average error of the accuracy estimation for each category of test input is only 5.547%, which is 2.926% lower than PACE, an improvement of 34.53%. The average estimation error of the five test adequacy measures is only 8.739%, which is 7.328% lower than PACE, an improvement of 45.61%.
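The coverage intuition behind DMOS can be sketched as follows; the real method runs multi-objective optimization over HDBSCAN-derived objectives, whereas this hypothetical helper only samples each cluster proportionally so that no input category is dropped:

```python
def stratified_select(clusters, budget):
    # sample each cluster in proportion to its share of the test set, but
    # never skip a cluster, so every input category stays covered
    total = sum(len(c) for c in clusters)
    subset = []
    for cluster in clusters:
        k = max(1, round(budget * len(cluster) / total))
        subset.extend(cluster[:k])
    return subset

clusters = [[1, 2, 3, 4], [5, 6], [7]]   # e.g. produced by a clustering step
subset = stratified_select(clusters, budget=4)
```

A pure overall-accuracy selector could legally return four items from the first cluster alone; the proportional rule is the simplest way to see why per-category coverage is a separate objective.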

    • Transfer-based Adversarial Attack with Rectified Adam and Color Invariance

      2022, 33(7):2525-2537. DOI: 10.13328/j.cnki.jos.006589

      Abstract: Deep neural networks have been widely used in object detection, image classification, natural language processing, speech recognition, and so on. Nevertheless, deep neural networks are vulnerable to adversarial examples, which can cause classifiers to err by adding imperceptible perturbations to the input. Moreover, the same perturbation can deceive multiple classifiers across models and even across tasks. The cross-model transfer characteristic of adversarial examples limits the application of deep neural networks in real life, and the threat of adversarial examples has stimulated researchers' interest in adversarial attacks. Recently, researchers have proposed several adversarial attacks, but the cross-model transferability of the adversarial examples they generate is often poor, especially against defense models built via adversarial training or input transformation. To improve the transferability of adversarial examples in the black-box setting, this study proposes a method named RLI-CI-FGSM. RLI-CI-FGSM is a transfer-based attack that employs the gradient-based white-box attack RLI-FGSM to generate adversarial examples on a substitute model, and uses CIM to expand the substitute model so that RLI-FGSM can attack the substitute model and the expanded models at the same time. Specifically, RLI-FGSM integrates the RAdam optimization algorithm into the iterative fast gradient sign method and makes use of second-derivative information of the objective function to generate adversarial examples, which prevents the optimization from falling into a poor local optimum. Based on the color-transformation-invariant property of deep neural networks, CIM optimizes the perturbation over a set of color-transformed images to generate adversarial examples that are less sensitive to defense models. The experimental results show that the proposed method achieves a higher success rate on both normal and adversarially trained network models.
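The base iterative fast-gradient-sign update that RLI-FGSM builds on can be sketched as below (the RAdam integration and CIM's color-transformed image set are omitted; the toy linear model and all names are assumptions, not the paper's code):

```python
def iterative_fgsm(x, grad_fn, eps, steps):
    # iterative FGSM: move each input component a small step in the sign
    # direction of the loss gradient, clipped to an eps-ball around x
    alpha = eps / steps
    sign = lambda g: (g > 0) - (g < 0)
    adv = list(x)
    for _ in range(steps):
        g = grad_fn(adv)
        adv = [min(xi + eps, max(xi - eps, a + alpha * sign(gi)))
               for xi, a, gi in zip(x, adv, g)]
    return adv

# toy linear "model": loss = w . x, so the loss gradient is the constant w
w = [0.2, -0.7]
adv = iterative_fgsm([0.5, 0.5], lambda x: w, eps=0.1, steps=5)
```

Only the gradient's sign matters, so the perturbation magnitude per coordinate is controlled entirely by eps, which is what makes the attack budget easy to state.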

    • Safe Reinforcement Learning Algorithm and Its Application in Intelligent Control for CPS

      2022, 33(7):2538-2561. DOI: 10.13328/j.cnki.jos.006588

      Abstract: The design of safe controllers for cyber-physical systems (CPS) is a hot research topic. Existing safe controller design based on formal methods suffers from problems such as excessive reliance on system models and poor scalability. Intelligent control based on deep reinforcement learning can handle high-dimensional nonlinear complex systems and uncertain systems, and is becoming a very promising CPS control technology, but it lacks safety guarantees. This study addresses the safety issues of reinforcement learning control through a case study of a typical industrial oil pump control system, covering both the design of a new safe reinforcement learning algorithm and its application in an intelligent control scenario. First, the safe reinforcement learning problem of the industrial oil pump is formulated, and a simulation environment of the oil pump is built. Then, by designing the structure and activation function of the output layer, a neural network oil pump controller is constructed that satisfies the linear inequality constraints on the oil pump's switching times. Finally, in order to better balance the safety and optimality objectives, a new safe reinforcement learning algorithm is designed based on the augmented Lagrangian multiplier method. Comparative experiments on the industrial oil pump show that the controllers generated by the proposed algorithm surpass those of existing algorithms in the same category in both safety and optimality. In further evaluation, the neural network controllers generated in this study pass rigorous formal verification with a probability of 90%, and, compared with the theoretically optimal controller, their loss in optimal objective value is as low as 2%. The proposed method is expected to be extended to more application scenarios, and the case study scheme can serve as a reference for other researchers in intelligent control and formal verification.
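The augmented Lagrangian scheme used to balance safety and optimality can be illustrated on a one-dimensional toy problem (a sketch under stated assumptions: grid search stands in for the policy update, and the constraint is a simple inequality rather than the oil pump's switching constraints):

```python
def augmented_lagrangian(obj, cons, lam=0.0, rho=1.0, iters=20):
    # minimize obj(x) subject to cons(x) <= 0 by alternating a primal
    # minimization of the augmented Lagrangian with a dual multiplier update
    def solve(f):
        # crude primal step: grid search over [0, 3], standing in for the
        # gradient-based policy update a real RL algorithm would perform
        return min((i * 0.01 for i in range(301)), key=f)
    x = 0.0
    for _ in range(iters):
        L = lambda x: obj(x) + 0.5 * rho * max(0.0, cons(x) + lam / rho) ** 2
        x = solve(L)
        lam = max(0.0, lam + rho * cons(x))   # dual ascent on the multiplier
        rho *= 1.5                            # gradually stiffen the penalty
    return x, lam

# toy problem: minimize (x - 2)^2 subject to x <= 1; the optimum is x = 1
x, lam = augmented_lagrangian(lambda x: (x - 2) ** 2, lambda x: x - 1.0)
```

Unlike a fixed penalty, the multiplier lam converges toward the KKT value (2 for this toy problem), so the constraint can be enforced without driving rho to extremes.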

    • Reverse Modeling Support Method for Domain-driven Design

      2022, 33(7):2562-2580. DOI: 10.13328/j.cnki.jos.006278

      Abstract: As a software development method for tackling domain complexity, domain-driven design has been widely applied in recent years. However, domain modeling, a key activity in domain-driven design, still faces problems caused by the loose relationship between the domain model and the program code. To address this issue, this study proposes a code-to-model transformation method following the model-driven reverse engineering methodology. The method can abstract the design model from code in real time, facilitating domain modeling in two ways. On the one hand, it enables comparing the domain model with the program to avoid deviation of the implementation from the model. On the other hand, it reduces the dependence on code details during the knowledge-crunching process, thus promoting feedback on modeling. A case study conducted in a real scenario demonstrates the validity of the proposed method.

    • Empirical Study on Restored Code Changes in Code Review

      2022, 33(7):2581-2598. DOI: 10.13328/j.cnki.jos.006312

      Abstract: Code review is the manual inspection of source code by developers other than the author. In a code review system, software developers submit code changes to fix software defects or add software features. Not all code changes are integrated into the code base; some are abandoned. An abandoned code change can be restored for further review, which allows contributors to continue improving it, but reviewing restored code changes takes more time. This study collects 920,700 restored code changes from four open-source projects, investigates the reasons for which code changes are restored, uses thematic analysis to identify 11 categories of such reasons, and quantitatively analyzes the characteristics of restored code changes. The main findings are as follows. (1) Among the reasons for which code changes are restored, "improve and update" accounts for the largest proportion. (2) The distribution of reasons differs across the four projects, but the difference is not significant. (3) Compared with non-restored code changes, restored code changes have a 10% lower acceptance rate, 1.9 times more comments, and 5.8 times longer review time on average. (4) 81% of restored code changes have been accepted, while 19% remain abandoned.

    • Multi-objective Parallel Attribute Reduction Algorithm in Rough Set

      2022, 33(7):2599-2617. DOI: 10.13328/j.cnki.jos.006286

      Abstract: Solving minimal attribute reduction (MAR) in rough set theory (RST) is an NP-hard combinatorial optimization problem. Ant colony optimization (ACO) is a global heuristic optimization method among evolutionary algorithms, so combining RST with ACO is an effective and feasible way to solve attribute reduction. However, the ACO algorithm often falls into local optima, and its convergence speed is slow. This study first uses an improved information gain rate as heuristic information, and then performs a deduction test on each selected attribute and on the optimal reduction set of each generation. A mechanism of calculating probabilities in advance is proposed to avoid repeated computation over the same path by different ants during the search. However, such an algorithm can only handle small-scale data sets. The ACO algorithm has good parallelism, and the equivalence classes in rough set theory can be calculated by cloud computing. This study therefore proposes a parallel attribute reduction algorithm based on ACO and cloud computing for massive data sets, and further investigates a multi-objective parallel solution scheme that can simultaneously calculate the importance of the remaining attributes relative to the current attribute or reduction set. Experiments show that the proposed algorithm can obtain the MAR when processing big data, and the time complexity of calculating attribute importance decreases from O(n²) to O(|n|).
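The rough-set quantities that such attribute reduction searches optimize can be sketched directly (this shows only the equivalence classes and the positive region; the ACO heuristics and the cloud-based parallelization are not reproduced, and the tiny decision table is made up):

```python
def partition(rows, attrs):
    # equivalence classes of the indiscernibility relation over attrs:
    # rows agreeing on every attribute in attrs fall into one block
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def positive_region(rows, attrs, decision):
    # indices whose equivalence class is consistent on the decision
    # attribute; |positive region| / |rows| is the dependency degree
    # that guides heuristic attribute selection
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos |= set(block)
    return pos

table = [
    {"a": 0, "b": 0, "d": 0},
    {"a": 0, "b": 1, "d": 1},
    {"a": 1, "b": 0, "d": 1},
    {"a": 1, "b": 1, "d": 1},
]
```

Attribute a alone leaves rows 0 and 1 indistinguishable yet differently labeled, while {a, b} makes every row's decision determined, which is exactly the signal a reduction search exploits.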

    • Law Article Recommendation Approach Based on Deep Semantic Matching

      2022, 33(7):2618-2632. DOI: 10.13328/j.cnki.jos.006287

      Abstract: Law articles are the main basis of legal judgment, and recommending related law articles can improve the quality of legal judgment prediction (LJP). Currently, the state-of-the-art methods are supervised classification models that treat the finite set of law articles as discrete class labels, which has the downside that the semantic information of the law articles may be underused. This observation implies that the quality of law article recommendation can be further improved. To this end, this study proposes a deep semantic matching based law article recommendation approach (DeepLawRec), which transforms the traditional classification task into pairwise matching learning. DeepLawRec includes a local semantic matching module based on a bi-transformer convolutional neural network and a global semantic recommendation module based on regression trees. It not only extracts the key semantic features from the fact description of a case, but also learns how they relate locally to a given law article. Moreover, with the help of the regression tree, the recommendation results can be interpreted. Experimental results on a public dataset show that DeepLawRec improves the quality of recommendation and outperforms the state-of-the-art techniques in terms of precision and interpretability.
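The pairwise matching view can be illustrated with a bare-bones similarity ranking (a hypothetical sketch: DeepLawRec learns its representations with a bi-transformer CNN, whereas the vectors and article ids below are invented for illustration):

```python
def rank_articles(case_vec, article_vecs):
    # score each candidate law article against the case description by
    # cosine similarity and return article ids, best match first
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = lambda w: sum(c * c for c in w) ** 0.5
        return dot / (norm(u) * norm(v))
    return sorted(article_vecs,
                  key=lambda k: cosine(case_vec, article_vecs[k]),
                  reverse=True)

# made-up embeddings of one fact description and two candidate articles
ranked = rank_articles([1.0, 0.0], {"art-A": [0.9, 0.1], "art-B": [0.1, 0.9]})
```

The point of the pairwise formulation is visible even here: a new article can be ranked by embedding its text, with no retraining of a fixed-label classifier.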

    • Object Detection Model for Examination Classroom Based on Cascade Attention and Point Supervision Mechanism

      2022, 33(7):2633-2645. DOI: 10.13328/j.cnki.jos.006289

      Abstract: The smart examination classroom is an important part of the smart campus, and accurately and quickly detecting students in an examination classroom is a basic task in building one. However, due to the dense distribution of examinees and their imaging differences, most existing object detection methods cannot precisely detect all examinees in real time. Moreover, most object detection methods rely on predefined anchor boxes, which limits their portability. To address these problems, this study proposes an efficient one-stage, anchor-free object detection model based on a fully convolutional network, which makes pixel-level predictions on the input image. In this model, a feature enhancement module based on cascade attention is first designed, which effectively enhances the discriminability of the feature map by gradually refining and correcting the features. Second, to enable the network to distinguish overlapping objects in the examination classroom, a point supervision mechanism is proposed. Finally, the model is verified on a dedicated dataset of standardized examination classrooms. With the cascade attention module and the point supervision mechanism, the proposed model achieves 92.9% mAP at 22.1 frames per second and is superior to most state-of-the-art detection models. In particular, for object detection in new classroom environments, the proposed model achieves the best results.

    • Knowledge Graph Construction Method via Internet-based Collective Intelligence

      2022, 33(7):2646-2666. DOI: 10.13328/j.cnki.jos.006313

      Abstract: A knowledge graph is a graph-based structural representation of knowledge. One of the key problems concerning knowledge graphs, in both research and practice, is how to construct large-scale, high-quality knowledge graphs. This paper presents an approach to constructing knowledge graphs based on Internet-based human collective intelligence. The core of the approach is a continuously executing loop, called the EIF loop (EIFL), consisting of three activities: free exploration, automatic integration, and proactive feedback. In free exploration, each participant constructs an individual knowledge graph alone. In automatic integration, all participants' current individual knowledge graphs are integrated in real time into a collective knowledge graph. In proactive feedback, each participant is provided with personalized feedback from the current collective knowledge graph, in order to improve the participant's efficiency in constructing an individual knowledge graph. In particular, a hierarchical knowledge graph representation mechanism is proposed, a knowledge graph merging algorithm driven by the goal of minimizing the collective knowledge graph's general entropy is designed, and two feedback mechanisms, context-dependent and context-independent, are introduced. To investigate the feasibility of the proposed approach, three kinds of experiments are designed and carried out: (1) merging experiments on simulated graphs with structural information only; (2) merging experiments on real large-scale knowledge graphs; and (3) knowledge graph construction experiments with different numbers of participants. The experimental results show that: (1) the proposed merging algorithm can find high-quality merging solutions by utilizing both the structural information of the knowledge graphs and the semantic information of their elements; (2) EIFL-based collective collaboration improves both the efficiency of participants in constructing individual knowledge graphs and the scale of the collective knowledge graph merged from them, and shows sound scalability with respect to the number of participants.

    • Label-correlation-based Common and Specific Feature Selection for Hierarchical Classification

      2022, 33(7):2667-2682. DOI: 10.13328/j.cnki.jos.006335

      Abstract: In the era of big data, the sizes of data sets, in terms of the number of samples, features, and classes, have increased dramatically, and the classes often exhibit a hierarchical structure. Selecting features for such hierarchical data is of great significance. In recent years, relevant feature selection algorithms have been proposed; however, the existing algorithms do not take full advantage of the hierarchical structure of the classes, and they ignore the common and specific features of different class nodes. This study proposes a label-correlation-based feature selection algorithm for hierarchical classification with common and specific features. The algorithm uses recursive regularization to select the specific features for each internal node of the hierarchy, makes full use of the hierarchical structure to analyze label correlation, and then applies a regularization penalty to select the common features of each subtree. The proposed model can handle not only hierarchical tree data but also more complex hierarchical DAG data directly. Experimental results on six hierarchical tree data sets and four hierarchical DAG data sets demonstrate the effectiveness of the proposed algorithm.

    • Encrypted Malicious Traffic Detection Based on Hidden Markov Model

      2022, 33(7):2683-2698. DOI: 10.13328/j.cnki.jos.006282

      Abstract: In recent years, with the popularization of network encryption technology, malicious attacks exploiting encryption have increased year by year, and traditional detection methods that rely on packet content can no longer effectively deal with malware attacks hidden in encrypted traffic. To detect encrypted malicious traffic under different protocols, this study proposes a detection algorithm based on profile HMMs. The method adapts gene sequence alignment analysis from bioinformatics to identify encrypted attack traffic by matching key gene-like subsequences. Experiments on open-source datasets under different conditions demonstrate the effectiveness of the algorithm. In addition, two evasion methods are designed, and experiments verify that the algorithm is robust against such evasion. Compared with existing research, this work applies to a wider range of scenarios and achieves higher detection accuracy, providing a more effective solution for detecting malware via encrypted traffic.
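Profile HMMs extend a plain HMM with position-specific match/insert/delete states; the decoding core they share with ordinary HMMs is the Viterbi algorithm, sketched here on a made-up two-state traffic model (states, observation alphabet, and probabilities are all illustrative assumptions, not the paper's model):

```python
def viterbi(obs, states, start, trans, emit):
    # most likely hidden-state path for an observation sequence under an HMM
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor for state s at time t
            prev = max(states, key=lambda p: V[t - 1][p] * trans[p][s])
            V[t][s] = V[t - 1][prev] * trans[prev][s] * emit[s][obs[t]]
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ["benign", "malicious"]
path = viterbi(
    ["small", "large", "large"], states,
    start={"benign": 0.6, "malicious": 0.4},
    trans={"benign": {"benign": 0.7, "malicious": 0.3},
           "malicious": {"benign": 0.3, "malicious": 0.7}},
    emit={"benign": {"small": 0.8, "large": 0.2},
          "malicious": {"small": 0.2, "large": 0.8}},
)
```

In a real profile HMM the states form a per-position match/insert/delete chain and observations would be discretized traffic features, but the dynamic program is the same.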

    • Lightweight Blockchain Authentication Mechanism for Smart Home

      2022, 33(7):2699-2715. DOI: 10.13328/j.cnki.jos.006288

      Abstract: The rollout of 5G provides new opportunities for the rapid development of the smart home industry, but the authentication of smart home systems has become a concern. The centralized management and authentication traditionally adopted by smart home systems faces centralized-trust issues and incurs high performance overhead. Blockchain technology, with its decentralization and tamper resistance, has become a research hotspot and provides new ideas for realizing security authentication in distributed smart homes. Nevertheless, two challenges remain: efficient user authentication across multiple distributed terminals, and the leakage of user privacy. This study proposes a dynamic trusted lightweight authentication mechanism (DTL) based on blockchain. DTL uses a consortium blockchain to build the system, which ensures that only authorized smart home sensor nodes can join the network while meeting the needs of distributed security and scalability. DTL achieves two advantages. (1) For authentication efficiency, by improving the consensus algorithm, a dynamic trusted sensor group (DTSG) authentication mechanism is established, which avoids the low access efficiency and low user access rate caused by frequent one-to-one authentication between users and sensor terminals or gateway nodes, thereby realizing lightweight authentication. (2) For user privacy protection, an authentication scheme combining the DTSG mechanism with zero-knowledge proofs is designed, which authenticates user identities without leaking user privacy. These security features are demonstrated through security analysis, and extensive simulations validate the practicality and lightweight nature of DTL.

    • Class-aware Instance Normalization Mechanism for Face Age Synthesis

      2022, 33(7):2716-2728. DOI: 10.13328/j.cnki.jos.006281

      Abstract: In recent years, the generative adversarial network (GAN) family has been successfully applied to face age synthesis. Nevertheless, it is found that even though conditional generative adversarial networks (CGANs) are good at using age prior information, important age information is discarded to some extent when a CGAN is used for face age synthesis. This is an important factor that has brought the performance of the GAN family, represented by CGANs, to a bottleneck on this task. Therefore, a class-aware instance normalization (CAIN) mechanism is proposed, which can be flexibly embedded into CGANs (yielding CAIN-GAN) to thoroughly leverage age prior information and improve the performance of face age synthesis. Experiments on public datasets show that, compared with several GAN-based face age synthesis methods, CAIN-GAN improves performance solely by leveraging face age-related information.
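Standard instance normalization, which CAIN extends, can be sketched on a single flattened feature map (in CAIN the affine parameters would be derived from the target age class embedding; here they are plain arguments and the whole example is illustrative):

```python
def class_aware_instance_norm(feat, gamma, beta, eps=1e-5):
    # instance normalization: bring one feature map to zero mean and unit
    # variance, then re-scale and shift; a class-aware variant conditions
    # (gamma, beta) on the target class instead of learning them globally
    n = len(feat)
    mean = sum(feat) / n
    var = sum((v - mean) ** 2 for v in feat) / n
    return [gamma * (v - mean) / (var + eps) ** 0.5 + beta for v in feat]

out = class_aware_instance_norm([1.0, 3.0], gamma=1.0, beta=0.0)
```

Because the normalization wipes out the map's own statistics, whatever (gamma, beta) injects afterwards is the only instance-level style signal left, which is why conditioning them on the age class keeps age information from being discarded.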

    • Review Articles
    • Survey on Sketch Segmentation Algorithm Based on Deep Learning

      2022, 33(7):2729-2752. DOI: 10.13328/j.cnki.jos.006299

      Abstract: Sketches have always been one of the important tools for human communication. Because a sketch can quickly express complex human thoughts in a succinct form, sketch processing algorithms are a research hotspot in the field of computer vision. Research on sketches has mainly focused on recognition, retrieval, and completion; as researchers turn to fine-grained operations on sketches, sketch segmentation has also received more and more attention. In recent years, with the development of deep learning and computer vision technology, a large number of deep learning-based sketch segmentation methods have been proposed, and the accuracy and efficiency of sketch segmentation have significantly increased. Nevertheless, sketch segmentation remains a very challenging topic because of the abstraction, sparsity, and diversity of sketches. This study organizes, categorizes, analyzes, and summarizes deep learning-based sketch segmentation algorithms. First, three basic sketch representation methods and the commonly used sketch segmentation datasets are presented. Then, according to the prediction results of the algorithms, sketch semantic segmentation, sketch perceptual grouping, and sketch parsing are introduced respectively, and the evaluation results of sketch segmentation methods on the primary datasets are collected and analyzed. Finally, applications of sketch segmentation are summarized and possible future directions are discussed.

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: CNY 70
Copyright © Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn