• Volume 33, Issue 10, 2022 Table of Contents
    • Special Issue's Articles
    • Knowledge Collaborative Fine-tuning for Low-resource Knowledge Graph Completion

      2022, 33(10):3531-3545. DOI: 10.13328/j.cnki.jos.006628

      Abstract: Knowledge graph completion makes knowledge graphs more complete. Unfortunately, most existing knowledge graph completion methods assume that the entities or relations in the knowledge graph have sufficient triple instances. Nevertheless, there are a great number of long-tail triples in general domains, and it is challenging to obtain a large amount of high-quality annotation data in vertical domains. To address these issues, a knowledge collaborative fine-tuning approach is proposed for low-resource knowledge graph completion. The structured knowledge is leveraged to construct the initial prompt template, and the optimal templates, labels, and model parameters are learned through a collaborative fine-tuning algorithm. The proposed method leverages both the explicit structured knowledge in the knowledge graph and the implicit triple knowledge in the language model, and can be applied to the tasks of link prediction and relation extraction. Experimental results show that the proposed approach obtains state-of-the-art performance on three knowledge graph reasoning datasets and five relation extraction datasets.
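
      For background, a minimal illustrative sketch of prompt construction from structured triples follows; the template wording and the RELATION_VERBALIZER mapping are hypothetical stand-ins, not the paper's actual templates or collaborative fine-tuning procedure.

      ```python
      # Hypothetical sketch: turning an incomplete KG triple into a cloze-style
      # prompt that a masked language model can fill in. Templates are invented
      # for illustration; the paper learns optimal templates and labels instead.
      RELATION_VERBALIZER = {
          "born_in": "{head} was born in [MASK].",
          "capital_of": "[MASK] is the capital of {head}.",
      }

      def triple_to_prompt(head: str, relation: str) -> str:
          """Build a masked prompt for the query triple (head, relation, ?)."""
          return RELATION_VERBALIZER[relation].format(head=head)

      # The language model scores candidate entities for the [MASK] slot,
      # which completes the triple for link prediction.
      print(triple_to_prompt("Marie Curie", "born_in"))
      ```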

    • FactChain: A Blockchain-based Crowdsourcing Knowledge Fusion System

      2022, 33(10):3546-3564. DOI: 10.13328/j.cnki.jos.006627

      Abstract: Knowledge graphs (KGs) have drawn massive attention from both academia and industry and have become the backbone of many AI applications. Current KGs are often constructed and maintained by large parties, which provide services in the form of RDF dumps or SPARQL endpoints. This kind of centralized management has inherent drawbacks such as non-durable accessibility. Furthermore, some facts in KGs may be outdated or conflicting, and there is no convenient way of resolving them democratically. As an innovative distributed infrastructure, blockchain has characteristics such as decentralization and consensus, which are of great significance for the construction and management of KGs. This study designs a blockchain-enhanced knowledge management framework called FactChain, which aims to establish a new decentralized ecology for knowledge sharing and fusion. FactChain leverages a consortium architecture containing the blockchain, organizations, and participants. On-chain smart contracts implement a truth discovery algorithm over multi-source conflicting knowledge. FactChain also supports participant management, mapping between local schemata and the global ontology, and integration of on-chain and off-chain knowledge through the decentralized application (DApp) in organizations.
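
      To make the truth discovery step concrete, here is a minimal iterative sketch in the usual weighted-voting style; the data layout and the weight update rule are illustrative assumptions, not FactChain's actual on-chain algorithm.

      ```python
      import math
      from collections import defaultdict

      claims = {  # source -> {fact_id: claimed value}; toy stand-in data
          "org_a": {"capital_fr": "Paris", "capital_au": "Sydney"},
          "org_b": {"capital_fr": "Paris", "capital_au": "Canberra"},
          "org_c": {"capital_fr": "Paris", "capital_au": "Canberra"},
      }

      def truth_discovery(claims, iters=10):
          weights = {s: 1.0 for s in claims}
          truths = {}
          for _ in range(iters):
              # 1) weighted vote per fact under current source weights
              for fact in {f for c in claims.values() for f in c}:
                  votes = defaultdict(float)
                  for s, c in claims.items():
                      if fact in c:
                          votes[c[fact]] += weights[s]
                  truths[fact] = max(votes, key=votes.get)
              # 2) re-weight sources by agreement with the current truths
              for s, c in claims.items():
                  err = sum(c[f] != truths[f] for f in c) / len(c)
                  weights[s] = -math.log(err + 1e-6)  # fewer errors, more weight
          return truths, weights

      print(truth_discovery(claims)[0])
      ```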

    • Short Text Classification Model Combining Knowledge Aware and Dual Attention

      2022, 33(10):3565-3581. DOI: 10.13328/j.cnki.jos.006630

      Abstract: As the core problem of text mining, the text classification task is an essential issue in the field of natural language processing. Short text classification is a hot topic and one of the urgent problems in text classification due to the sparse, real-time, and non-standard characteristics of short texts. In certain scenarios, short texts carry much implicit semantics, which makes tasks such as mining implicit semantic features from limited text challenging. Existing research mainly applies traditional machine learning or deep learning algorithms to short text classification. However, these algorithms are complex, require an enormous cost to build an effective model, and are not efficient. In addition, short texts contain little effective information and abundant colloquial language, which demands a stronger feature learning ability from the model. In response to the above problems, the KAeRCNN model is proposed based on the TextRCNN model, combining knowledge awareness and a dual attention mechanism. The knowledge-aware component consists of two stages, knowledge graph entity linking and knowledge graph embedding, through which external knowledge is introduced to obtain semantic features. Meanwhile, the dual attention mechanism improves the model's efficiency in extracting effective information from short texts. Extensive experimental results show that the proposed KAeRCNN model is significantly better than traditional machine learning algorithms in terms of classification accuracy, F1 score, and practical application effects. The performance and adaptability of the algorithm are further verified on different datasets. The accuracy of the proposed approach reaches 95.54%, and the F1 score reaches 0.901. Compared with four traditional machine learning algorithms, the accuracy is increased by about 14% on average, and the F1 score is increased by about 13%. Compared with TextRCNN, the KAeRCNN model improves accuracy by about 3%. In addition, comparisons with deep learning algorithms show that the proposed model also performs well in classifying short texts from other fields. Both theoretical and experimental results indicate that the KAeRCNN model is effective for short text classification.
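
      The following toy sketch shows one plausible reading of a dual attention step, with one attention over word features and one over linked-entity features; the dimensions, the dot-product scoring form, and the random stand-in features are assumptions, not the paper's exact architecture.

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      def attend(features, query):
          """Weight feature vectors by relevance to a query vector."""
          scores = softmax(features @ query)      # (n,)
          return scores @ features                # weighted sum, (d,)

      rng = np.random.default_rng(0)
      word_feats = rng.normal(size=(12, 64))  # stand-in RCNN word features
      kg_feats = rng.normal(size=(3, 64))     # embeddings of linked KG entities
      query = rng.normal(size=64)             # e.g., a learned task query

      text_repr = attend(word_feats, query)   # text-side attention
      know_repr = attend(kg_feats, query)     # knowledge-side attention
      fused = np.concatenate([text_repr, know_repr])  # fed to the classifier
      print(fused.shape)                      # (128,)
      ```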

    • Review Articles
    • Progress of Graph Neural Networks on Complex Graph Mining

      2022, 33(10):3582-3618. DOI: 10.13328/j.cnki.jos.006626

      Abstract: Graph neural networks (GNNs) establish a deep learning framework for non-Euclidean spatial data. Compared with traditional network embedding methods, they perform deeper aggregation operations on graph structures. In recent years, GNNs have been extended to complex graphs; nevertheless, qualified surveys that give a comprehensive and systematic classification and summary of GNNs on complex graphs are lacking. This study divides complex graphs into three categories: heterogeneous graphs, dynamic graphs, and hypergraphs. GNNs on heterogeneous graphs are divided into two types, edge-type aware and meta-path aware, according to how information is aggregated. GNNs on dynamic graphs are divided into three categories: RNN-based methods, autoencoder-based methods, and spatio-temporal graph neural networks. GNNs on hypergraphs are divided into expansion methods and non-expansion methods, and the expansion methods are further divided into star-expansion, clique-expansion, and line-expansion according to the expansion mode they use. The core idea of every method is illustrated in detail, the advantages and disadvantages of different algorithms are compared, the key procedures, (cross) application fields, and commonly used datasets of different complex graph GNNs are systematically listed, and some possible research directions are proposed.

    • Special Issue's Articles
    • Recommendation Method Based on Multi-view Embedding Fusion for HINs

      2022, 33(10):3619-3634. DOI: 10.13328/j.cnki.jos.006632

      Abstract: Heterogeneous information networks (HINs) contain rich semantic information and are widely used in recommendation tasks. Traditional recommendation methods for heterogeneous information networks ignore the heterogeneity of association relationships and the interaction between different association types. In this study, a recommendation model based on multi-view embedding fusion is proposed, which effectively guarantees recommendation accuracy by mining the deep latent features of the network from the views of homogeneous association and heterogeneous association, respectively. For the homogeneous association view, a graph convolutional network (GCN)-based embedding fusion method is proposed, which realizes the local fusion of node embeddings through the lightweight convolution of neighborhood information under homogeneous associations. For the heterogeneous association view, an attention-based embedding fusion method is proposed, which uses the attention mechanism to distinguish the influence of different association types on node embeddings and realizes the global fusion of node embeddings. The feasibility and effectiveness of the key techniques proposed in this study are verified by experiments.
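
      A small sketch of the global (attention-based) fusion step described above: one node embedding per association-type view is scored and combined by attention. The shapes and the dot-product scoring form are illustrative assumptions.

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      rng = np.random.default_rng(1)
      view_embeddings = rng.normal(size=(4, 32))  # 4 heterogeneous-association views
      w = rng.normal(size=32)                     # learned attention parameter

      scores = softmax(view_embeddings @ w)       # influence of each view
      node_embedding = scores @ view_embeddings   # globally fused node embedding
      print(scores.round(3), node_embedding.shape)
      ```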

    • Review Articles
    • Survey of Novel Video Analysis Systems Based on Deep Learning

      2022, 33(10):3635-3655. DOI: 10.13328/j.cnki.jos.006631

      Abstract: The popularity of camera devices in daily life has led to rapid growth in video data, which contains rich information. Earlier, researchers developed video analytics systems based on traditional computer vision techniques to extract and analyze video data. In recent years, deep learning has made breakthroughs in areas such as face recognition, and novel video analytics systems based on deep learning have appeared. This study presents an overview of the research progress of novel video analytics systems from the perspectives of applications, technologies, and systems. Firstly, the development history of video analytics systems is reviewed, and the differences between novel and traditional video analytics systems are pointed out. Secondly, the challenges of novel video analytics systems are analyzed in terms of both computation and storage, and their influencing factors are discussed in terms of the organization and distribution of video data and the application requirements of video analysis. Then, novel video analytics systems are classified into two categories, those optimized for computation and those optimized for storage; typical representatives of these systems are selected and their main ideas are introduced. Finally, novel video analytics systems are compared and analyzed along multiple dimensions, their current problems are pointed out, and future research and development directions are envisioned accordingly.

    • Special Issue's Articles
    • Breakthrough in Smart Education: Course Recommendation System Based on Graph Learning

      2022, 33(10):3656-3672. DOI: 10.13328/j.cnki.jos.006629

      Abstract: With the rapid development of Internet technology, online course learning platforms such as MOOCs have gained wide popularity in recent years. To facilitate personalized smart education that "teaches students in accordance with their aptitude", artificial intelligence technologies, represented by recommendation algorithms, have received wide attention from academia and industry. Although recommendation algorithms have been successfully applied in e-commerce and other fields, integrating them into online education scenarios still faces severe challenges: existing algorithms are not competent in mining implicit interactions, guidance from knowledge towards recommendation is not effective, and practical recommendation system software has received little attention. Therefore, an intelligent course recommendation system is proposed for industrial scenarios, which includes: (1) an offline recommendation engine based on a graph convolutional neural network, which models the implicit "user-course" interaction behavior as a heterogeneous graph and extracts course knowledge information to guide model learning and training, so that "user-course-knowledge" relationships can be fully and deeply mined; (2) an efficient online recommendation system prototype based on a multi-stage "preprocess-recall-offline sort-online recommend-result fuse" pipeline, which quickly responds to course recommendation requests and effectively alleviates the cold start problem, a major obstacle to recommender systems. Finally, based on a course learning dataset from a real platform, extensive experiments show that the proposed offline recommendation engine is competitive with mainstream recommendation algorithms, and the analysis of two typical use cases verifies the usability of the proposed online recommendation system for industrial needs.
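
      A skeleton of the multi-stage serving pipeline named in the abstract; every stage body below is a stub, and only the stage ordering follows the text.

      ```python
      def preprocess(request):
          return {"user": request["user"], "history": request.get("history", [])}

      def recall(ctx, catalog):
          # cheap candidate generation, e.g., popular or co-enrolled courses
          return catalog[:100]

      def offline_sort(ctx, candidates):
          # stand-in for scores precomputed offline by the graph-convolution engine
          return sorted(candidates, key=lambda c: hash((ctx["user"], c)) % 1000,
                        reverse=True)

      def online_recommend(ctx, ranked):
          # lightweight online re-ranking, also a stub
          return ranked[:20]

      def fuse(ctx, ranked):
          # blend with fallbacks (e.g., popular courses) to ease cold start
          return ranked[:10]

      def serve(request, catalog):
          ctx = preprocess(request)
          return fuse(ctx, online_recommend(ctx, offline_sort(ctx, recall(ctx, catalog))))

      print(serve({"user": "u1"}, [f"course_{i}" for i in range(500)]))
      ```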

    • Review Articles
    • Survey on Theory of Distributed Sampling

      2022, 33(10):3673-3699. DOI: 10.13328/j.cnki.jos.006372

      Abstract: Sampling is a fundamental class of computational problems. The problem of generating random samples from a solution space according to a certain probability distribution has numerous important applications in approximate counting, probability inference, statistical learning, etc. In the big data era, distributed sampling has attracted considerable attention. In recent years, a line of research works has systematically studied the theory of distributed sampling. This study surveys important results on distributed sampling, including distributed sampling algorithms with theoretically provable guarantees, the computational complexity of sampling in the distributed computing model, and the mutual relation between sampling and inference in the distributed computing model.

    • Diversity Classification and Distance Regression Assisted Evolutionary Algorithm

      2022, 33(10):3700-3716. DOI: 10.13328/j.cnki.jos.006310

      Abstract: This study proposes a diversity classification and distance regression assisted evolutionary algorithm (DCDREA) to solve expensive many-objective optimization problems (MaOPs). In DCDREA, a random forest (RF) is adopted as the global classification surrogate model. All the solutions in the population serve as training samples and are classified into positive or negative samples according to whether they are minimum correlative solutions, so that the model can learn the classification criteria contained in the training samples. The global classification surrogate model is mainly used to filter newly generated candidates to obtain a group of promising ones. In addition, Kriging is adopted as the local regression surrogate model, where the solutions closest to the current candidates in the population are selected as training samples, and the distance between the training samples and the ideal point is approximated by the model. Then, the candidates are divided into μ clusters by the K-means method, and one candidate is selected from each cluster for real function evaluation. In the experiments, the DTLZ test suite with 3, 4, 6, 8, and 10 objectives was used to compare DCDREA with currently popular surrogate-assisted evolutionary algorithms. For each test problem, every algorithm was run independently 20 times, the inverted generational distance (IGD) and running time were recorded, and the Wilcoxon rank sum test was used to analyze the results. The comparison shows that DCDREA performs better in most cases, indicating its effectiveness and feasibility.
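
      An illustrative sketch of the two-surrogate workflow using scikit-learn stand-ins: a random-forest classifier filters candidates globally, and a Gaussian-process (Kriging) regressor scores the survivors by approximated distance to the ideal point. The training labels and data here are synthetic placeholders, not DCDREA's actual sampling scheme.

      ```python
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(2)
      X = rng.random((200, 10))                    # previously evaluated solutions
      labels = (X.sum(axis=1) < 5).astype(int)     # stand-in positive/negative labels
      dist = np.linalg.norm(X, axis=1)             # stand-in distance to ideal point

      clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
      krig = GaussianProcessRegressor().fit(X[:50], dist[:50])

      candidates = rng.random((500, 10))           # newly generated candidates
      promising = candidates[clf.predict(candidates) == 1]   # global filtering
      scores = krig.predict(promising)                       # local distance regression
      best = promising[np.argsort(scores)[:5]]     # picked for real evaluation
      print(best.shape)
      ```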

    • Statistical Model Checking for Verification of Rare Properties of Stochastic Hybrid System

      2022, 33(10):3717-3731. DOI: 10.13328/j.cnki.jos.006301

      Abstract: Statistical model checking (SMC) is an important method for verifying the safety of stochastic hybrid systems (SHS). However, for systems with extremely high safety requirements, the unsafe events and failures of the system are rare events. In this case, it is difficult for SMC to draw samples satisfying the rare properties, and SMC becomes infeasible. To solve this problem, an SMC method based on cross-entropy iterative learning is proposed in this study. First, a continuous-time Markov chain (CTMC) is used to represent the path probability space of the SHS, and a parameterized family of probability distributions is derived based on the path space. Then, a cross-entropy optimization model on the path space is constructed, and an iterative learning algorithm is proposed, which can find the optimal importance distribution in the path space. Finally, an algorithm for the verification of rare properties is given. Experimental results show that the proposed method can effectively verify rare properties of the SHS; compared with some heuristic importance sampling methods, with the same number of samples, the estimates of the proposed method are better distributed near the sample mean, and the standard deviation and relative error are reduced by more than an order of magnitude.
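
      For readers unfamiliar with the cross-entropy method, a one-dimensional toy version of iteratively learning an importance distribution follows. The real method operates on CTMC path distributions, so the Gaussian proposal family and rarity level here are simplifications, not the paper's construction.

      ```python
      import numpy as np

      # Toy cross-entropy (CE) iteration: learn the mean of a Gaussian importance
      # distribution from elite samples, then estimate the rare probability
      # P(X > 6) for X ~ N(0, 1) by importance sampling.
      rng = np.random.default_rng(3)
      gamma_final, n, rho = 6.0, 10_000, 0.01
      mu = 0.0                                       # proposal is N(mu, 1)

      for _ in range(20):
          x = rng.normal(mu, 1.0, n)
          gamma = min(gamma_final, np.quantile(x, 1 - rho))   # adaptive level
          elite = x[x >= gamma]
          w = np.exp(-elite * mu + 0.5 * mu ** 2)    # likelihood ratio N(0,1)/N(mu,1)
          mu = float(np.sum(w * elite) / np.sum(w))  # CE update of the proposal mean
          if gamma >= gamma_final:
              break

      x = rng.normal(mu, 1.0, n)                     # final importance-sampling pass
      w = np.exp(-x * mu + 0.5 * mu ** 2)
      print(mu, np.mean(w * (x >= gamma_final)))     # true value is about 9.9e-10
      ```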

    • Review Articles
    • Prototype Learning in Machine Learning: A Literature Review

      2022, 33(10):3732-3753. DOI: 10.13328/j.cnki.jos.006365

      Abstract: With the in-depth penetration of information technology into various fields, massive data exist in the real world, which can help data-driven algorithms in machine learning obtain valuable knowledge. Meanwhile, high dimensionality, excessive redundancy, and strong noise are inherent characteristics of these various and complex data. In order to eliminate redundancy, discover data structure, and improve data quality, prototype learning was developed. By finding a prototype set from the target set, the data in the sample space can be reduced, and the efficiency and effectiveness of machine learning algorithms can thereby be improved. Its feasibility has been proven in many applications, and research on prototype learning has recently become one of the hot and key research topics in the field of machine learning. This study introduces the research background and application value of prototype learning, and provides an overview of the characteristics of related methods, the quality evaluation of prototypes, and typical applications. Then, the research progress of prototype learning with respect to supervision mode and model design is presented: the former involves unsupervised, semi-supervised, and fully supervised modes, and the latter compares four kinds of prototype learning methods based on similarity, determinantal point processes, data reconstruction, and low-rank approximation, respectively. Finally, this study looks forward to the future development of prototype learning.

    • Piece-wise Delay Cost-sensitive Three-way Decisions

      2022, 33(10):3754-3775. DOI: 10.13328/j.cnki.jos.006302

      Abstract: In classic decision-theoretic rough sets (DTRS), the cost objective function of the three-way decision is a typical monotone linear function. However, practical experience shows that the functional relationship between the cost of the delay decision and the probability is often non-monotonic. Hence, the classical cost-sensitive three-way decision model in DTRS is not suitable for modeling and reasoning about this non-monotonic phenomenon. In order to handle non-monotonic phenomena in cost-sensitive three-way decision problems, a novel piece-wise delay cost-sensitive three-way decision model is proposed based on the classical positive/negative domain decision loss functions. The novel model defines two different sets of delay decision loss functions, which are monotonically increasing and monotonically decreasing respectively, and constructs piece-wise delay three-way decision cost objective function systems, measurement indexes, and piece-wise decision strategies. Then, on the basis of the relationship among the conditional probability, the loss functions, and the basic metrics, the piece-wise delay cost-sensitive three-way decision model is derived, and the reasoning of the corresponding three-way classification thresholds is implemented. Finally, a group of typical examples is analyzed to verify the feasibility of the piece-wise delay cost-sensitive three-way decision model and its classification strategies.
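
      For reference, a sketch of the classical DTRS decision rule that the piece-wise model generalizes: thresholds α and β are computed from the six loss values, and an object is accepted, rejected, or deferred by comparing its conditional probability against them. The numeric losses below are arbitrary examples.

      ```python
      # Classical DTRS thresholds from the six losses lambda_xy, where x is the
      # action (P: accept, B: defer, N: reject) and y the true state (P or N).
      def thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
          alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
          beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
          return alpha, beta

      def decide(prob, alpha, beta):
          if prob >= alpha:
              return "accept (positive region)"
          if prob <= beta:
              return "reject (negative region)"
          return "defer (boundary region)"

      a, b = thresholds(l_pp=0, l_bp=2, l_np=6, l_nn=0, l_bn=1, l_pn=4)
      print(a, b, decide(0.5, a, b))   # alpha=0.6, beta=0.2 -> defer
      ```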

    • Dynamically Transfer Entity Span Information for Cross-domain Chinese Named Entity Recognition

      2022, 33(10):3776-3792. DOI: 10.13328/j.cnki.jos.006305

      Abstract: Boundary identification of Chinese named entities is difficult because there are no separators in Chinese text. Furthermore, the lack of well-annotated NER data makes Chinese named entity recognition (NER) tasks more challenging in vertical domains, such as the clinical domain and the financial domain. To address the aforementioned issues, this study proposes a novel cross-domain Chinese NER model based on dynamically transferring entity span information (TES-NER). The cross-domain shared entity span information is transferred from the general domain (source domain), which has a sufficient corpus, to the Chinese NER model on the vertical domain (target domain) through a dynamic fusion layer based on a gate mechanism, where the entity span information represents the scope of Chinese named entities. Specifically, TES-NER first introduces a cross-domain shared entity span recognition module based on a bidirectional long short-term memory (BiLSTM) layer and a fully connected neural network (FCN), which identifies the cross-domain shared entity span information to determine the boundaries of Chinese named entities. Then, a Chinese NER module is constructed to identify domain-specific Chinese named entities by applying an independent BiLSTM with a conditional random field model (BiLSTM-CRF). Finally, a dynamic fusion layer is designed to dynamically determine the amount of cross-domain shared entity span information extracted from the entity span recognition module and transferred to the domain-specific NER model through the gate mechanism. The general domain (source domain) dataset is a news domain dataset (MSRA) with a sufficient labeled corpus, while the vertical domain (target domain) datasets are a mixed domain dataset (OntoNotes 5.0), a financial domain dataset (Resume), and a medical domain dataset (CCKS 2017). Among them, the mixed domain dataset (OntoNotes 5.0) integrates six different vertical domains. The F1 values of the proposed model are 2.18%, 1.68%, and 0.99% higher than those of BiLSTM-CRF, respectively.
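
      A minimal sketch of a gate-based dynamic fusion layer of the kind described above: a sigmoid gate decides, per dimension, how much cross-domain entity span information flows into the target-domain representation. Weight shapes and the exact gating form are assumptions rather than the paper's specification.

      ```python
      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      rng = np.random.default_rng(4)
      d = 128
      h_domain = rng.normal(size=d)   # target-domain BiLSTM state for one token
      h_span = rng.normal(size=d)     # shared entity-span representation

      W = rng.normal(size=(d, 2 * d)) * 0.05   # learned gate parameters
      b = np.zeros(d)

      g = sigmoid(W @ np.concatenate([h_domain, h_span]) + b)   # transfer gate
      fused = g * h_domain + (1.0 - g) * h_span                 # fed to the CRF layer
      print(fused.shape)
      ```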

    • Idiom Cloze Algorithm Integrating with Pre-trained Language Model

      2022, 33(10):3793-3805. DOI: 10.13328/j.cnki.jos.006307

      Abstract: One of the crucial tasks in the field of natural language processing (NLP) is identifying the idiom that suits a given context. Existing research treats the Chinese idiom cloze task as a textual similarity task. Although current pre-trained language models play an important role in textual similarity, they also have apparent defects: when a pre-trained language model is used as a feature extractor, it ignores the mutual information between sentences, while as a text matcher, it requires a high computational cost and a long running time. In addition, the matching between the context and candidate idioms is asymmetric, which degrades the effect of pre-trained language models used as text matchers. In order to solve these two problems, this study draws on the idea of parameter sharing and proposes the TALBERT-blank network. TALBERT-blank transforms idiom selection from a context-based asymmetric matching process into a blank-based symmetric matching process. The pre-trained language model acts as both a feature extractor and a text matcher, and sentence vectors are used for latent semantic matching. This greatly reduces the number of parameters and memory consumption and improves the speed of training and inference while maintaining accuracy, producing a lightweight and efficient model. Experimental results on the CHID dataset show that, compared with the ALBERT text matcher, the proposed compressed model shortens the computation time by 54.35% while maintaining accuracy.
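
      A toy sketch of blank-based symmetric matching: the blank's context and each candidate idiom are encoded into sentence vectors (random stand-ins here) and matched by cosine similarity; the shared encoder itself is omitted, so this only illustrates the matching step.

      ```python
      import numpy as np

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      rng = np.random.default_rng(5)
      blank_vec = rng.normal(size=256)    # stand-in encoding of the blank's context
      idiom_vecs = {f"idiom_{i}": rng.normal(size=256) for i in range(7)}

      # Pick the candidate whose sentence vector best matches the blank vector.
      best = max(idiom_vecs, key=lambda k: cosine(blank_vec, idiom_vecs[k]))
      print(best)
      ```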

    • Neural Machine Translation Based on Multi-task Learning of Discourse Structure

      2022, 33(10):3806-3818. DOI: 10.13328/j.cnki.jos.006316

      Abstract: Document-level translation methods improve translation quality with cross-sentence contextual information. A document contains structural semantic information, which can be formally represented as dependency relations between elementary discourse units (EDUs). However, existing neural machine translation (NMT) methods seldom utilize discourse structural information. Therefore, this study proposes a document-level translation method that explicitly models the EDU segmentation, discourse dependency structure prediction, and discourse relation classification tasks in the encoder-decoder framework of NMT, so as to obtain representations of EDUs enhanced by structural information. The representations are integrated with the encoding and decoding state vectors by gated weighted fusion and hierarchical attention, respectively. In addition, in order to alleviate the dependence on discourse parsers in the inference phase, a multi-task learning strategy is applied to guide the joint optimization of the translation and discourse analysis tasks. Experimental results on public datasets show that the proposed method can effectively model and utilize the dependency structure information between discourse units to significantly improve translation quality.

    • Survey of Intelligent Partition and Layout Technology in Database System

      2022, 33(10):3819-3843. DOI: 10.13328/j.cnki.jos.006384

      Abstract: In the era of big data, there are more and more analysis scenarios driven by large-scale data. How to quickly and efficiently extract information for analysis and decision-making from massive data brings great challenges to database systems. At the same time, the real-time requirements of modern business analysis and decision-making demand that database systems process both ACID transactions and complex analytical queries. However, the granularity of traditional data partitioning is too coarse to adapt to the dynamic changes of complex analytical workloads, and the traditional data layout is monolithic and cannot cope with the growing number of hybrid transactional/analytical application scenarios. In order to solve these problems, "intelligent data partition and layout" has become one of the current research hotspots. It extracts effective workload characteristics through data mining, machine learning, and other techniques, designs appropriate partition strategies to avoid scanning large amounts of irrelevant data, and guides the design of layout structures to adapt to different types of workloads. This paper first introduces the background knowledge of data partition and layout techniques, and then elaborates the research motivation, development trends, and key technologies of intelligent data partition and layout. Finally, the research prospects of intelligent data partition and layout are summarized and discussed.

    • Double Index Data Compression Method for Onboard Computer

      2022, 33(10):3844-3857. DOI: 10.13328/j.cnki.jos.006308

      Abstract: As the functions of on-board computer systems become more and more complex, the scale of programs is also growing rapidly. A stable and effective code compression function is required to ensure the storage and operation of on-board software given extremely limited storage resources. Hybrid compression is currently the mainstream approach to lossless data compression and is characterized by a high compression rate, a large code size, and a large demand for computing resources. However, embedded systems such as on-board computers require reliability and anti-jamming capabilities, so hybrid compression cannot realize its full effect there, while the compression rate of a single model is too low to meet the demand. This study proposes an improved method that builds on the low resource requirements of the LZ77 algorithm. The new algorithm uses a new match record table for compression, which stores high-value indexes to assist compression; this realizes the complementarity between the local advantages of the original algorithm and the global distribution of high-value data and reduces data redundancy to a greater extent. In addition, the new algorithm combines dynamic filling, variable-length coding, and other methods to further optimize the coding structure and reduce storage requirements. Finally, a lossless data compression algorithm (LZRC) that is more suitable for the aerospace field is designed and implemented. Experimental results show that: (1) the code size of the new algorithm is 3.5 KB larger than that of the original algorithm, and the average compression ratio of software is improved by 17%; (2) compared with hybrid compression, the runtime memory of the proposed algorithm is only 12%, and the code size is reduced by 84%, which makes it more suitable for on-board computer systems.
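
      For background, here is a baseline LZ77 compressor that emits (offset, length, next byte) tokens over a sliding window; LZRC's match record table, dynamic filling, and variable-length coding are not reproduced, so this only shows the starting point the paper improves upon.

      ```python
      def lz77_compress(data: bytes, window: int = 255):
          """Naive LZ77: emit (offset, length, next_byte) tokens."""
          i, out = 0, []
          while i < len(data):
              best_off, best_len = 0, 0
              for j in range(max(0, i - window), i):   # scan the sliding window
                  length = 0
                  while (i + length < len(data) and j + length < i and
                         data[j + length] == data[i + length]):
                      length += 1
                  if length > best_len:
                      best_off, best_len = i - j, length
              nxt = data[i + best_len] if i + best_len < len(data) else 0
              out.append((best_off, best_len, nxt))
              i += best_len + 1
          return out

      print(lz77_compress(b"abababababc"))
      ```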

    • Empirical Analysis of Lightning Network: Topology, Evolution, and Fees

      2022, 33(10):3858-3873. DOI: 10.13328/j.cnki.jos.006380

      Abstract: As one of the most widely deployed payment channel networks (PCNs), the lightning network (LN) has attracted much attention since it was proposed in 2016. The LN is a layer-2 technology addressing the scalability problem of Bitcoin. In the LN, participants only need to submit layer-1 transactions on the blockchain to open and close a payment channel, and can then issue multiple transactions off-chain. This working mechanism avoids waiting for every transaction to be verified on-chain and simultaneously saves transaction fees. However, as the LN has been in practice for a rather short time, previous works were based on small volumes of rapidly changing data and lack timeliness. To fill this gap and obtain a comprehensive understanding of the topology of the LN and its evolving trend, this study characterizes both the static and dynamic features of the LN through graph analysis on up-to-date data collected through July 2020. A clustering analysis of the nodes is carried out, and conclusions and insights derived from the clustering results are presented. Moreover, an additional study of the charging mechanism in the LN is conducted by comparing on-chain and off-chain transaction fees.
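
      A small example of the kind of static topology analysis applied to LN snapshots, using networkx on a toy channel list; a real study would load an actual LN snapshot instead of the hand-written edges below.

      ```python
      import networkx as nx

      # Toy stand-in for an LN snapshot: each edge is a payment channel.
      channels = [("n1", "n2"), ("n1", "n3"), ("n2", "n3"), ("n3", "n4")]
      G = nx.Graph(channels)

      print("nodes:", G.number_of_nodes(), "channels:", G.number_of_edges())
      print("average clustering:", nx.average_clustering(G))
      print("degree histogram:", nx.degree_histogram(G))
      ```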

    • Universal Steganalysis Based on Few-shot Learning

      2022, 33(10):3874-3890. DOI: 10.13328/j.cnki.jos.006358

      Abstract: In recent years, deep learning has shown excellent performance in image steganalysis. At present, most image steganalysis models based on deep learning are specialized models that apply only to a specific steganographic algorithm; to detect the stego images of other steganographic algorithms with such a model, a large number of stego images encoded by those algorithms must be collected to retrain it. However, in practical steganalysis tasks, it is difficult to obtain a large number of encoded stego images, and training a universal steganalysis model with very few stego image samples is a great challenge. Inspired by research results in the field of few-shot learning, a universal steganalysis method based on a transductive propagation network is proposed. First, the feature extraction network is improved based on the existing few-shot learning classification framework, and a multi-scale feature fusion network is designed so that the few-shot classification model can extract more steganalysis features for classification based on weak information such as secret noise residuals. Second, to solve the problem that steganalysis models based on few-shot learning have difficulty converging, an initial model with prior knowledge is obtained by pre-training. Then, steganalysis models based on few-shot learning are trained in the frequency domain and the spatial domain, respectively; self-test and cross-test results show that their average detection accuracy is above 80%. Furthermore, these models are retrained with dataset augmentation, which raises their detection accuracy to more than 87%. Finally, the proposed models are compared with existing steganalysis models in the frequency and spatial domains; under the few-shot experimental setup, the detection accuracy of the universal steganalysis model is slightly below that of SRNet and ZhuNet in the spatial domain and exceeds that of the best existing steganalysis model in the frequency domain. The experimental results show that the proposed few-shot learning method is effective and robust for detecting unknown steganographic algorithms.

    • Memory Leakage-resilient Multi-stage Secret Sharing Scheme with General Access Structures

      2022, 33(10):3891-3902. DOI: 10.13328/j.cnki.jos.006296

      Abstract: In a multi-stage secret sharing scheme, the participants of authorized sets in each level of the access structures can jointly reconstruct the corresponding secret. In reality, however, adversaries who corrupt an unauthorized set can obtain some or even all of the share information of the uncorrupted participants through memory attacks, thereby illegally obtaining some or even all of the shared secrets. In the face of such memory leakage, existing multi-stage secret sharing schemes are no longer secure. On this basis, this study first proposes a formal computational security model of indistinguishability against chosen-secret attacks for multi-stage secret sharing. Then, by combining a physical unclonable function with a fuzzy extractor, a verifiable memory leakage-resilient multi-stage secret sharing scheme for general access structures is constructed based on minimal linear codes. Furthermore, the scheme is proved to be computationally secure in the random oracle model in the presence of a memory attacker. Finally, the proposed scheme is compared with existing schemes in terms of properties and computational complexity.

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • E-mail: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • Journal Number: ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic Price: CNY 70