WEI Chi-Xuan, WANG Zhi-Hai, YUAN Ji-Dong, LIN Qian-Hong
2022, 33(12):4411-4428. DOI: 10.13328/j.cnki.jos.006346
Abstract: For many real-world applications, capturing patterns at diverse window scales helps to discover the different periodicities of a time series. At the same time, analyzing a time series in both the time domain and the frequency domain yields more knowledge. This study proposes a novel method to detect distinctive patterns at variable scales in the time and frequency domains of time series, and discusses its application to classification. The method integrates multiple scales with symbolic approximation and symbolic Fourier approximation techniques to explore multi-scale and multi-domain patterns in time series efficiently. Meanwhile, a statistical method is applied to select some of the most discriminative patterns for time series classification, which also effectively reduces the time complexity of the algorithm. Experiments performed on various datasets demonstrate that the proposed method achieves higher accuracy and better interpretability. In addition, it can easily be extended to multi-dimensional time series.
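To make the symbolic approximation step concrete, below is a minimal sketch of a SAX-style discretization of one window of a time series; the function name, segment count, and Gaussian breakpoints are illustrative assumptions rather than the authors' implementation, and the frequency-domain counterpart (symbolic Fourier approximation) is not shown.

```python
import numpy as np

def sax_word(series, n_segments=4, alphabet="abcd"):
    """Illustrative SAX step: z-normalize, piecewise-aggregate, discretize."""
    x = (series - series.mean()) / (series.std() + 1e-8)  # z-normalize the window
    segments = np.array_split(x, n_segments)              # PAA: split into segments
    paa = np.array([seg.mean() for seg in segments])      # mean of each segment
    breakpoints = [-0.67, 0.0, 0.67]                      # N(0,1) quartiles for a 4-letter alphabet
    symbols = np.searchsorted(breakpoints, paa)           # map each mean to a letter index
    return "".join(alphabet[i] for i in symbols)

# One sine period collapses to a short symbolic word, e.g. 'ddaa'.
print(sax_word(np.sin(np.linspace(0, 2 * np.pi, 64))))
```

Running the same discretization over windows of different lengths yields the variable-scale patterns that the statistical selection step can then rank.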
TANG Xiao-Chun, ZHU Zi-Yu, MAO An-Qi, FU Ying, LI Zhan-Huai
2022, 33(12):4429-4451. DOI: 10.13328/j.cnki.jos.006362
Abstract: Data-intensive jobs comprise large numbers of tasks, and using GPU devices to improve task performance is currently the main approach. However, existing research does not comprehensively consider the contradiction between the fair sharing of GPU resources among data-intensive jobs and the cost of network data transmission. This study analyzes the characteristics of GPU cluster resource scheduling and proposes a scheduling algorithm based on minimum cost and maximum task parallelism, which resolves the contradiction between the fair allocation of GPU resources and high data transmission costs. The scheduling process is divided into two stages: in the first stage, each job produces its own optimal plan according to data transmission costs; in the second stage, the resource allocator merges the plans of all jobs. The study first presents the overall structure of the framework, in which the resource allocator works globally after each job has given its own optimal plan. Second, a network bandwidth estimation strategy and a method for computing the data transmission cost of a task are given. Third, a basic algorithm for the fair allocation of resources based on the number of GPUs is presented. Fourth, the minimum-cost, maximum-task scheduling algorithm is proposed, together with its implementation strategies for non-preemptive scheduling, preemption, and resource fairness. Finally, six data-intensive computing jobs are designed to test the proposed algorithm, and the experiments verify that the scheduling algorithm achieves about 90% resource fairness while also minimizing the parallel running time of jobs.
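A hedged sketch of the first scheduling stage described above, in which each job ranks candidate nodes by estimated data transmission cost; the cost model, field names, and bandwidth table are assumptions for illustration, and the second-stage fairness-aware merge is only indicated in a comment.

```python
def job_optimal_plan(tasks, nodes, bandwidth):
    """Stage 1 (per job): pick, for each task, the node minimizing
    transmission cost = bytes of non-local input / estimated bandwidth."""
    plan = []
    for task in tasks:
        def cost(node):
            remote = sum(size for loc, size in task["inputs"] if loc != node)
            return remote / bandwidth[node]
        plan.append((task["id"], min(nodes, key=cost)))
    return plan  # stage 2 would merge all jobs' plans under GPU-count fairness

plan = job_optimal_plan(
    tasks=[{"id": "t1", "inputs": [("n1", 4e9), ("n2", 1e9)]}],
    nodes=["n1", "n2"],
    bandwidth={"n1": 10e9, "n2": 1e9},
)
print(plan)  # [('t1', 'n1')]: most of the input data already resides on n1
```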
CHEN Dao-Kun, LIU Fang-Fang, YANG Chao
2022, 33(12):4452-4463. DOI: 10.13328/j.cnki.jos.006381
Abstract: Sparse triangular solve (SpTRSV) is an important computation kernel in scientific computing. The irregular memory access pattern of SpTRSV makes efficient data reuse difficult to achieve. Structured grid problems possess special nonzero patterns. On the SW26010 processor, the major building block of the Sunway TaihuLight supercomputer, these patterns are often exploited during the task partitioning stage to facilitate on-chip reuse of computed unknowns, and software-based routing is usually employed to implement inter-thread communication. Routing, however, incurs overhead and imposes certain restrictions on nonzero patterns. This study achieves on-chip data reuse without routing: the input problem is partitioned and mapped onto the SW26010 such that threads with data dependencies are always connected by the register communication network, which enables direct thread-to-thread communication and obviates routing. The proposed solver is described and tested over a variety of problems. In the experiments, it sustains an average memory bandwidth utilization of 88.2%, with peak efficiency reaching 94.5% (24.5 GB/s).
ZHAO Xu-Hui, DENG Yu-Xin, FU Hong-Fei
2022, 33(12):4464-4475. DOI: 10.13328/j.cnki.jos.006341
Abstract: Probabilistic programs combine probabilistic reasoning models with Turing-complete programming languages, unify the formal description of computation and uncertain knowledge, and can effectively deal with complex relational models and uncertain problems. This study provides a tool named PROPER for analyzing the termination and assertions of affine probabilistic programs. On one hand, it helps analyze the termination property of affine probabilistic programs both qualitatively and quantitatively: it can check whether a probabilistic program terminates with probability 1, estimate an upper bound on the expected termination time, and calculate the number of steps after which the termination probability of the given probabilistic program decreases exponentially. On the other hand, it can estimate the probability interval within which a given assertion holds, which helps analyze the influence of the uncertainty of variables on the results of probabilistic programs. Experiments show that PROPER is effective for analyzing various affine probabilistic programs.
YUE Xiao-Meng, YANG Qiu-Song, LI Ming-Shu
2022, 33(12):4476-4503. DOI: 10.13328/j.cnki.jos.006695
Abstract: Simultaneous multi-threading (SMT) is standard in modern high-performance processors and an important microarchitectural optimization for improving thread-level parallelism. Compared with the cross-core and cross-processor cases, timing channel security under SMT is more difficult to handle and protect against, and new security problems have emerged successively. At present, there is no systematic method for describing the timing channel security problem in an SMT environment. Starting from the principle of timing channel attacks that exploit SMT, and focusing on the timing channels created by shared resources in an SMT environment and their attack mechanisms, this study proposes ETSG-SMT, a description model for timing channel security problems in SMT environments, based on the topological sort graph (TSG) model and an extension of data flow analysis. First, the study introduces the technical characteristics of timing channel exploitation and protection in an SMT environment, as well as the limitations of the TSG model in analyzing these security problems. Then, building on the TSG model's ability to formally describe SMT's technical characteristics and security problems, and combining it with data flow analysis, a new modeling method is formulated. Finally, by applying the ETSG-SMT model to existing timing channel attack methods and protection cases in SMT environments, it is shown that the model has sound application value for analyzing the principles of timing channel attacks and deriving protection techniques in SMT environments.
QIAN Ya-Guan, MA Jun, HE Nian-Nian, WANG Bin, GU Zhao-Quan, LING Xiang, Wassim Swaileh
2022, 33(12):4504-4516. DOI: 10.13328/j.cnki.jos.006352
Abstract: The emergence of adversarial examples challenges the robustness of deep learning. With the development of edge intelligence, how to train a robust and compact deep learning model on edge devices with limited computing resources is also a challenging problem. Since compact models cannot obtain sufficient robustness through conventional adversarial training, a method called two-stage adversarial knowledge transfer is proposed. The method transfers adversarial knowledge from data to models and from complex models to compact models. The so-called adversarial knowledge takes two forms: one is contained in data in the form of adversarial examples, and the other is contained in models in the form of decision boundaries. The GPU clusters of the cloud center are first leveraged to train the complex model with adversarial examples, realizing the transfer of adversarial knowledge from data to models; an improved distillation approach is then leveraged to realize the further transfer of adversarial knowledge from complex models to compact models on edge nodes. Experiments on MNIST and CIFAR-10 show that this two-stage adversarial knowledge transfer can efficiently improve the robustness and convergence of compact models.
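As a reference point for the second transfer stage, here is a minimal PyTorch-style sketch of standard knowledge distillation from a complex teacher to a compact student; the temperature and loss weighting are generic assumptions, and the paper's improved variant, which transfers adversarial knowledge (decision boundaries learned from adversarial examples), is not reproduced.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Generic soft-label distillation: match the teacher's softened
    distribution (KL term) while keeping ground-truth supervision."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)  # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```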
CAI Rui-Chu, ZHENG Li-Juan, LI Zi-Jian
2022, 33(12):4517-4533. DOI: 10.13328/j.cnki.jos.006366
Abstract: Recent years have witnessed the widespread use of deep learning. Though these methods have achieved significant performance in different fields, they are hungry for large amounts of labeled data, which incurs unaffordable costs in data quality and quantity and hinders their further application. Fortunately, domain adaptation, which not only relaxes the i.i.d. assumption between the source and target domains but also uses the labeled source-domain data and the unlabeled target-domain data simultaneously, is beneficial for obtaining a well-generalized model. Among all domain adaptation settings, multi-source domain adaptation, which takes full advantage of the information in multiple source domains, is the most suitable for real-world applications. This study proposes a multi-source domain adaptation method via a multi-measure framework and weighted disentangled semantic representation. Motivated by the data generation process from a causal view, it is first assumed that the observed samples are controlled by semantic latent variables and domain latent variables, and further that these variables are independent. To extract these variables, a dual adversarial training scheme is used to extract and disentangle the semantic latent variables from the domain latent variables. For multi-domain aggregation, three different domain aggregation strategies are employed to obtain the weighted domain-invariant semantic representation, which is finally used for classification. Experimental studies not only testify that the proposed method yields state-of-the-art performance on many multi-source domain adaptation benchmark datasets, but also validate its robustness.
LI Xiang, LIU Ming, LIU Ming-Hui, JIANG Qing, CAO Yang
2022, 33(12):4534-4544. DOI: 10.13328/j.cnki.jos.006371
Abstract: In recent years, the performance of deep neural networks on many tasks has matched or even surpassed that of humans, but their generalization ability is still far from human level. How to improve the generalization of networks has long been an important research direction, and much fruitful work has been carried out around it: many effective methods have been proposed from the perspectives of expanding and augmenting training data, suppressing model complexity through regularization, and optimizing training strategies. These methods apply a global strategy to the entire training data set, treating every sample equally. However, because each sample differs in the amount of information and noise it carries, its impact on the fitting and generalization performance of the model during training should also differ. Are some samples more likely to cause overfitting during repeated iterative training? How can these samples be found? Can the model obtain better generalization performance by adopting a differentiated anti-overfitting strategy for different samples? In response to these questions, a method for training deep neural networks based on the individual differences of sample data is proposed. First, a pre-trained model is used to evaluate each training sample and determine how well the model fits it. Then, according to the evaluation results, the training set is divided into two subsets: samples that are prone to cause overfitting, and the remaining ordinary samples. Finally, the two subsets are used to train the model, with a stronger anti-overfitting strategy adopted for the subset that is more likely to cause overfitting. Through a series of experiments with various deep models on different data sets, the effectiveness of the proposed method on typical classification tasks and fine-grained classification tasks is verified.
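A minimal sketch of the first step described above: using a pre-trained model to split the training set by how well each sample is fitted. The loss threshold and batch size are illustrative assumptions; the paper's evaluation criterion may differ.

```python
import torch

@torch.no_grad()
def split_by_fit(pretrained, dataset, threshold=0.05):
    """Flag samples the pre-trained model fits with very low loss as
    overfit-prone; the rest are ordinary samples."""
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    overfit_prone, ordinary = [], []
    for x, y in torch.utils.data.DataLoader(dataset, batch_size=256):
        losses = loss_fn(pretrained(x), y)                # per-sample losses
        for xi, yi, li in zip(x, y, losses):
            (overfit_prone if li.item() < threshold else ordinary).append((xi, yi))
    return overfit_prone, ordinary
    # The overfit-prone subset would then be trained with a stronger
    # anti-overfitting strategy (e.g., heavier augmentation or regularization).
```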
WANG Mei-Ling, SHAO Wei, ZHANG Dao-Qiang
2022, 33(12):4545-4558. DOI: 10.13328/j.cnki.jos.006376
Abstract: Recently, with the rapid development of imaging and genomic techniques, brain imaging genetics has received extensive attention. In brain imaging genetic studies, it is challenging to examine the influence of genetic variants, i.e., single nucleotide polymorphisms (SNPs), on the structure or function of the human brain. In addition, multimodal brain imaging phenotypes extracted from different perspectives, together with imaging markers from the same region that consistently show up across modalities, provide more ways to understand disease mechanisms, such as that of Alzheimer's disease (AD). Accordingly, this work exploits multimodal brain imaging phenotypes as intermediate traits to bridge genetic risk factors and disease status, and discovers phenotypes consistent between genetic risk factors and disease status in AD via a designed label-aligned multi-modality regression method. Specifically, a standard multi-modality method is first applied to explore the relationship between the well-known AD risk SNP APOE e4 rs429358 and multimodal brain imaging phenotypes. Second, to utilize the label information among labeled subjects, a new label-aligned regularization is added to the standard multi-modality method, so that all subjects with the same class label are closer in the new embedding space. Finally, experiments are conducted on three baseline brain imaging modalities from the Alzheimer's disease neuroimaging initiative (ADNI) database: voxel-based measures extracted from structural magnetic resonance imaging, fluorodeoxyglucose positron emission tomography, and F-18 florbetapir PET amyloid imaging. The experimental results validate that the proposed method identifies robust and consistent regions of interest over the multimodal imaging data to guide disease-induced interpretation. Furthermore, the correlation coefficients are increased by 8%, 9%, and 5% over the best results of existing algorithms on the three modalities.
CHEN Qi-Jian, WANG Li, GUO Shun-Chao, DENG Ze-Yu, ZHANG Jian, WANG Li-Hui
2022, 33(12):4559-4573. DOI: 10.13328/j.cnki.jos.006499
Abstract: Accurately predicting 1p/19q status is of great significance for formulating treatment plans and evaluating the prognosis of gliomas. Although some works can accurately predict 1p/19q status from magnetic resonance images with machine learning methods, they require the tumor contour to be delineated in advance, which cannot satisfy the needs of computer-aided diagnosis. To deal with this issue, this work proposes a novel deep multi-scale invariant features-based network (DMIF-Net) for predicting 1p/19q status in glioma. First, it uses a wavelet-scattering network to extract multi-scale and multi-orientation invariant features, and a deep split-and-aggregation network to extract semantic features. It then reduces the feature dimensions with a multi-scale pooling module and fuses the features by concatenation. Finally, taking only the bounding box of the tumor region as input, it predicts 1p/19q status accurately. The experimental results show that, without requiring accurate delineation of the tumor region, the AUC of DMIF-Net reaches 0.92 (95% CI = [0.91, 0.94]). Compared with the best deep learning model, its AUC, sensitivity, and specificity increase by 4.1%, 4.6%, and 3.4%, respectively; compared with state-of-the-art glioma models, its AUC and accuracy increase by 4.9% and 5.5%, respectively. Moreover, the ablation experiments demonstrate that the proposed multi-scale invariant feature extraction module effectively promotes 1p/19q prediction performance, verifying that combining semantic and multi-scale invariant features can significantly increase the prediction accuracy of 1p/19q status without knowing the tumor region boundaries, and therefore providing an auxiliary means for formulating personalized treatment plans for low-grade glioma.
LIU Jun, LI Wei, CHEN Shu-Yu, XU Guang-Xia
2022, 33(12):4574-4589. DOI: 10.13328/j.cnki.jos.006515
Abstract: This study proposes a feature extraction algorithm based on principal component analysis (PCA) with an anisotropic Gaussian kernel penalty, which differs from traditional kernel PCA algorithms. In non-linear dimensionality reduction, traditional kernel PCA algorithms ignore the nondimensionalization of the raw data. Moreover, their kernel functions are mainly controlled by a single kernel width parameter shared by all dimensions, which cannot precisely reflect the significance of the features in each dimension and results in low accuracy of the dimensionality reduction process. To address these issues, an averaging algorithm is first proposed for the nondimensionalization of the raw data, which shows sound performance in improving the variance contribution rate of the original data. Then, an anisotropic Gaussian kernel function is introduced in which each dimension has its own kernel width parameter, so that the importance of each dimension's features is critically reflected. In addition, a feature penalty function for kernel PCA is formulated based on the anisotropic Gaussian kernel function, so as to represent the raw data with fewer features while reflecting the importance of each principal component. Furthermore, gradient descent is introduced to update the kernel widths of the feature penalty function and control the iterative process of the feature extraction algorithm. To verify the effectiveness of the proposed algorithm, several algorithms are compared on the UCI public data sets and the KDDCUP99 data set. The experimental results show that the proposed algorithm outperforms previous PCA algorithms by 4.49% on average on the UCI public data sets and by 8% on average on the KDDCUP99 data set.
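The anisotropic kernel at the core of the method admits a compact statement: k(x, y) = exp(-Σ_d (x_d - y_d)² / (2σ_d²)), with one width σ_d per dimension. Below is a minimal NumPy sketch of this kernel inside a generic centered kernel-PCA step; the penalty function and the gradient-descent width updates from the paper are not reproduced.

```python
import numpy as np

def aniso_kernel_pca(X, widths):
    """Anisotropic Gaussian kernel with per-dimension widths, followed by
    standard kernel centering and eigendecomposition (generic kernel PCA)."""
    diff = X[:, None, :] - X[None, :, :]                 # pairwise differences, (n, n, d)
    K = np.exp(-np.sum(diff ** 2 / (2.0 * widths ** 2), axis=-1))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)                      # ascending eigenvalues
    return vals[::-1], vecs[:, ::-1]                     # principal components first

vals, _ = aniso_kernel_pca(np.random.rand(50, 3), widths=np.array([0.5, 1.0, 2.0]))
```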
YUN Yue, DAI Huan, ZHANG Yu-Pei, SHANG Xue-Qun, LI Zhan-Huai
2022, 33(12):4590-4615. DOI: 10.13328/j.cnki.jos.006518
Abstract: Recently, with the rapid development of information technology, emerging technologies represented by artificial intelligence have been widely applied in education, triggering profound changes in the concepts and modes of learning. Online learning transcends the limitations of time and space, providing more possibilities for learners to learn "anytime and anywhere". Nevertheless, the temporal and spatial separation of teachers and students in online learning means that teachers cannot oversee students' learning processes, which limits the quality of teaching and learning. Diversified learning targets and massive learning resources also generate new problems, such as how to quickly accomplish learning targets, reduce learning costs, and reasonably allocate learning resources, which constrain the development of individuals and society. The traditional "one size fits all" educational model can no longer meet these needs; a more efficient and scientific personalized education model is needed to help learners maximize their learning targets at minimal learning cost. Based on these considerations, a new adaptive learning system is needed that can automatically and efficiently identify a learner's personalized characteristics, efficiently organize and allocate learning resources, and plan a globally personalized learning path. This study systematically reviews and analyzes current research on personalized learning path recommendation from a multidisciplinary perspective, along with the differing research visions of each discipline. The algorithms most commonly applied in current research are then summarized. Finally, the main shortcomings of current research, which deserve more attention, are highlighted.
ZHAO Meng-Yuan, HUANG Xiao-Wen, SANG Ji-Tao, YU Jian
2022, 33(12):4616-4643. DOI: 10.13328/j.cnki.jos.006521
Abstract: A recommender system is an information filtering system that helps users filter out large amounts of irrelevant information and obtain information or items by estimating their interests and preferences. Mainstream traditional recommender systems mainly use offline, historical user data to continuously train and optimize offline models, and then recommend items to online users. This approach has three main problems: the unreliable estimation of user preferences from sparse and noisy historical data, the neglect of online contextual factors that affect user behavior, and the unreliable default assumption that users are aware of their own preferences. Since dialogue systems focus on users' real-time feedback and capture their current interaction intentions, conversational recommendation, which combines the interactive form of dialogue systems with the recommendation task, has become an effective means of solving these problems. Through online interaction, conversational recommendation can guide and capture users' current preferences and interests and provide timely feedback and updates. Thanks to the widespread use of voice assistants and chatbot technologies, as well as the mature application of techniques such as reinforcement learning and knowledge graphs in recommendation strategies, more and more researchers have paid attention to conversational recommender systems in the past few years. This survey reviews the overall framework of conversational recommender systems, classifies the datasets used by conversational recommendation algorithms, and discusses the relevant metrics for evaluating conversational recommendation. Focusing on the interaction strategies and recommendation logic behind conversational recommendation, it summarizes the research achievements of domestic and foreign researchers in recent years, and finally summarizes and looks ahead to future work on conversational recommendation.
HOU Zhong-Ni, JIN Xiao-Long, CHEN Jian-Yun, GUAN Sai-Ping, WANG Yuan-Zhuo, CHENG Xue-Qi
2022, 33(12):4644-4667. DOI: 10.13328/j.cnki.jos.006522
Abstract: Reasoning over knowledge graphs aims to infer new facts from known ones, so as to make the graphs as complete as possible. In recent years, distributed embedding-based reasoning methods have achieved great success on this task. However, due to their black-box nature, these methods cannot provide interpretations for specific predictions, so there has been growing interest in designing reasoning models that users can understand and trust. Starting from the basic concept of interpretability, this work systematically surveys recently developed methods for interpretable reasoning on knowledge graphs, introducing the research progress of both ante-hoc and post-hoc interpretable reasoning models. According to the scope of interpretability, ante-hoc interpretable models can be further divided into locally and globally interpretable models. For post-hoc interpretable reasoning, the study reviews representative reasoning methods and introduces two post-hoc interpretation methods in detail. Next, it summarizes the applications of explainable knowledge reasoning in fields such as finance and healthcare. Then, it summarizes the current state of explainable knowledge learning. Finally, future technological developments of interpretable reasoning models are discussed.
CHEN Ke-Jia, FEI Zi-Yang, CHEN Jing-Qiang, YANG Zi-Nong
2022, 33(12):4668-4687. DOI: 10.13328/j.cnki.jos.006544
Abstract: Text style transfer (TST) has been one of the hot issues in natural language processing in recent years. It aims to change a specific style or attribute of a text (such as sentiment, tense, or gender) through editing or generation while retaining the text's content. The purpose of this study is to sort out the existing methods in order to advance this research field. First, the text style transfer problem is defined and its challenges are given. Then, the existing methods are classified and reviewed, focusing on TST methods based on unsupervised learning, which are further divided into implicit and explicit methods; the implementation mechanisms, advantages, limitations, and performance of each method are analyzed. Subsequently, the performance of several representative methods on automatic evaluation metrics such as transfer accuracy, content retention, and perplexity is compared through experiments. Finally, research on text style transfer is summarized, and future directions are discussed.
CHEN Jing-Shuang, CHEN Ke, SHOU Li-Dan, JIANG Da-Wei, CHEN Gang
2022, 33(12):4688-4703. DOI: 10.13328/j.cnki.jos.006354
Abstract: Learned indexes predict the location of data in storage by learning the data distribution. They can significantly reduce storage consumption while providing efficient query processing. Existing learned indexes are mostly optimized for read-only queries but are inadequate at supporting insertions and updates. To address these challenges, this study proposes a workload-adaptive learned index named ALERT. ALERT employs a radix tree to manage variable-length segments, where each segment contains a linear interpolation model with a maximum error bound. Meanwhile, ALERT uses an insertion memory buffer to reduce the cost of updates. Following the database-cracking approach, the study proposes adaptive index maintenance during the run-time processing of point and range queries; the maintenance is implemented by performing workload-aware dynamic reorganization of the insertion buffer. Experimental results confirm that, compared with a state-of-the-art learned index, ALERT reduces average index construction time by 81%, average memory utilization by 75%, and average insert latency by 50%, while maintaining competitive read performance. Owing to its effective workload-aware optimization, ALERT's average query latency is also reduced by 15%.
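To illustrate the segment abstraction, here is a minimal sketch of a linear interpolation model over a sorted key run with a maximum error bound, in the spirit of ALERT's variable-length segments; the names and fitting rule are assumptions, and the radix-tree directory and insertion buffer are not shown.

```python
from bisect import bisect_left

class Segment:
    """Linear model pos ~ slope * (key - first_key), searched only within
    +/- epsilon of the prediction (assumed to hold by construction)."""
    def __init__(self, keys, epsilon=4):
        self.keys, self.epsilon = keys, epsilon
        span = keys[-1] - keys[0] or 1
        self.slope = (len(keys) - 1) / span

    def lookup(self, key):
        pos = round(self.slope * (key - self.keys[0]))   # predicted position
        lo = max(0, pos - self.epsilon)                   # binary-search only the
        hi = min(len(self.keys), pos + self.epsilon + 1)  # error-bounded window
        i = bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None

seg = Segment(list(range(0, 100, 2)))                     # evenly spaced keys
print(seg.lookup(42), seg.lookup(43))                     # 21 None
```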
TANG Xiao-Chun, ZHAO Quan, FU Ying, ZHU Zi-Yu, DING Zhao, HU Xiao-Xue, LI Zhan-Huai
2022, 33(12):4704-4726. DOI: 10.13328/j.cnki.jos.006356
Abstract: The Dataflow model unifies the batch processing and stream processing of big data computing. Nevertheless, existing cluster resource scheduling frameworks for big data computing are oriented either to stream processing or to batch processing, and are not suitable for batch jobs and stream jobs sharing cluster resources. In addition, when GPUs are used for big data analysis, the lack of effective CPU-GPU resource decoupling reduces resource usage efficiency. Based on an analysis of existing cluster scheduling frameworks, a hybrid resource scheduling framework called HRM is designed and implemented that can perceive batch/stream processing applications. Built on a shared-state architecture, HRM uses a combination of optimistic and pessimistic locking protocols to satisfy the different resource demands of stream processing and batch processing jobs. On computing nodes, it provides flexible binding of CPU-GPU resources and adopts a queue-stacking technique, which not only meets the real-time requirements of stream processing jobs but also reduces feedback delays and enables the sharing of GPU resources. In simulations of large-scale job scheduling, the scheduling delay of HRM is only about 75% of that of a centralized scheduling framework; in tests with real workloads, CPU resource utilization increases by more than 25% when batch and stream processing share the cluster; and with the fine-grained job scheduling method, GPU utilization increases by more than 2 times while job completion time is reduced by about 50%.
ZHAO Meng, CHEN Ke, SHOU Li-Dan, WU Sai, CHEN Gang
2022, 33(12):4727-4745. DOI: 10.13328/j.cnki.jos.006686
Abstract: NL2SQL refers to technology that automatically converts queries expressed in natural language into structured SQL expressions that can be parsed and executed by a DBMS. NL2SQL can provide ordinary users with a natural interactive interface for database queries, thereby enabling question answering atop database systems. NL2SQL for complex queries is now a research hotspot in the database community. The most prevalent approach uses a sequence-to-sequence (Seq2seq) encoder-decoder to convert complex natural language to SQL. However, most existing work focuses on English and is not ready to handle the special colloquial expressions in Chinese queries. In addition, existing work cannot correctly output query clauses containing complex calculation expressions. To solve these problems, this study proposes a tree model instead of the sequence representation. The proposed approach disassembles complex queries top-down into a multi-way tree whose nodes represent SQL elements, and uses depth-first search to predict and generate SQL statements. The approach won the championship and first runner-up in two official evaluations of the DuSQL Chinese NL2SQL competition, and the experimental results confirm its effectiveness.
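A minimal sketch of the multi-way tree representation and the depth-first generation step; the node vocabulary here is hand-written for illustration, whereas the paper predicts tree nodes with a learned model.

```python
from dataclasses import dataclass, field

@dataclass
class SQLNode:
    """One SQL element (keyword, column, operator, or expression)
    with its ordered children."""
    token: str
    children: list = field(default_factory=list)

def to_sql(node: SQLNode) -> str:
    # depth-first traversal emits the query left to right
    return " ".join([node.token] + [to_sql(c) for c in node.children])

tree = SQLNode("SELECT", [SQLNode("name , price * 0.9"),
                          SQLNode("FROM", [SQLNode("products")])])
print(to_sql(tree))  # SELECT name , price * 0.9 FROM products
```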
MA Wen-Jing, WU You-Qing, YIN Zhao-Xia
2022, 33(12):4746-4757. DOI: 10.13328/j.cnki.jos.006350
Abstract: With the popularization of digital information technology, reversible data hiding in encrypted images (RDHEI) has gradually become a research hotspot of privacy protection in cloud storage. As a technology that can embed additional information in the encrypted domain, extract the embedded information correctly, and recover the original image losslessly, RDHEI has received wide attention from researchers. To embed sufficient additional information in an encrypted image, a high-capacity RDHEI method using adaptive encoding is proposed in this study. First, the occurrence frequencies of the different prediction errors of the original image are calculated and the corresponding adaptive Huffman code is generated. Then, the original image is encrypted with a stream cipher and the encrypted pixels are marked with different Huffman codewords according to their prediction errors. Finally, additional information is embedded in the room reserved by the marked pixels through bit substitution. The experimental results show that the proposed algorithm can extract the embedded information correctly and recover the original image losslessly. Compared with similar algorithms, it makes fuller use of the characteristics of the image itself and greatly improves the embedding rate. On the UCID, BOSSBase, and BOWS-2 datasets, the average embedding rate of the proposed algorithm reaches 3.162 bpp, 3.917 bpp, and 3.775 bpp, exceeding the state-of-the-art algorithm by 0.263 bpp, 0.292 bpp, and 0.280 bpp, respectively.
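A minimal sketch of the codebook construction step: building a Huffman code from the frequencies of prediction errors so that frequent errors receive short marker codewords. This is the textbook Huffman construction under illustrative inputs; the paper's pixel marking and bit-substitution embedding are not reproduced.

```python
import heapq
from collections import Counter

def huffman_codes(prediction_errors):
    """Map each distinct prediction error to a prefix-free codeword,
    shorter for more frequent errors."""
    freq = Counter(prediction_errors)
    heap = [[n, i, {sym: ""}] for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}   # prefix left subtree with 0
        merged.update({s: "1" + c for s, c in hi[2].items()})  # right subtree with 1
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

print(huffman_codes([0, 0, 0, 0, 1, 1, -1, 2]))  # error 0 gets the shortest codeword
```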
YUAN Yi-Lin, ZHANG Jian-Biao, XU Wan-Shan, LI Zheng
2022, 33(12):4758-4770. DOI: 10.13328/j.cnki.jos.006360
Abstract: Cloud storage systems provide users with storage services that have large capacity, high access efficiency, and reasonable prices. Nevertheless, once users upload files to a cloud service provider (CSP), they lose absolute control over the data, and CSPs are, as is well known, not fully reliable. Whether the data on the cloud remains intact has therefore become a problem worth considering. For the public cloud storage environment, this study defines a company or organization as a group managed by a person in charge, who helps the users in the group use the cloud storage service conveniently. In this scenario, to solve the problem of verifying the integrity of user data within the same group, a data integrity verification scheme for group users is proposed, along with an entity named Agency that assists the users of a group in carrying out the required operations. In this scheme, the tag design is based on identity-based encryption (IBE), which frees users from complicated certificate management. In the integrity verification process, random sampling greatly reduces the performance overhead of the system. The security of the proposed scheme is proved in the random oracle model, and a practical experiment validates its feasibility.
LI Shun-Dong, WANG Wen-Li, CHEN Ming-Yan, WANG Yu-Lin
2022, 33(12):4771-4783. DOI: 10.13328/j.cnki.jos.006361
Abstract: The rapid development of the Internet, the IoT, and big data brings great opportunities for sharing data owned by different entities, but it also poses severe challenges to the privacy of that data. Secure multiparty computation is a key privacy-preserving technology, an important field of cryptography, and a focus of the international cryptographic community. Privately comparing two numbers is a basic problem of secure multiparty computation, and protocols for this problem are building blocks for constructing other privacy-preserving protocols. If the two numbers to be compared are small, there is no reliable solution to this problem that can resist active attacks; yet in many scenarios the participants may be malicious and actively attack a protocol, in which case no existing solution can be used to compare the numbers privately. It is therefore of important theoretical and practical significance to design a protocol that can resist active attacks. This study first proposes a new technique called encrypt-and-choose and a new approach to resisting active attacks: encoding plus secure shuffle. Based on these techniques, a comparison protocol that is secure in the semi-honest model is designed, and its security is proved using the simulation paradigm. All possible active attacks the protocol may suffer from are then analyzed, and ElGamal multiplicative homomorphism, zero-knowledge proofs of discrete logarithms, and a secure shuffle are used to resist them. The protocol is thus converted into one that resists active attacks, and its security against active attacks is proved using the ideal-real paradigm. Finally, the efficiency of the protocol is analyzed and tested; the experimental results demonstrate that the protocol is practical.
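To make the homomorphism used here concrete, below is a toy demonstration that ElGamal ciphertexts multiply component-wise into an encryption of the product, E(a)·E(b) = E(ab); the parameters are tiny and insecure by design, and the paper's zero-knowledge proofs and secure shuffle are not shown.

```python
import random

p, g = 2579, 2                              # toy prime and generator; NOT secure
x = random.randrange(2, p - 1)              # private key
h = pow(g, x, p)                            # public key

def enc(m):
    r = random.randrange(2, p - 1)          # fresh randomness per ciphertext
    return (pow(g, r, p), m * pow(h, r, p) % p)

def dec(c1, c2):
    return c2 * pow(c1, p - 1 - x, p) % p   # c2 / c1^x via Fermat inverse

a, b = 12, 34
c1a, c2a = enc(a)
c1b, c2b = enc(b)
assert dec(c1a * c1b % p, c2a * c2b % p) == a * b % p  # multiplicative homomorphism
```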
ZENG Guang, LI Jing-Yu, YANG Yang
2022, 33(12):4784-4803. DOI: 10.13328/j.cnki.jos.006378
Abstract: As SHA-1 is one of the most widely used hash functions, research on attack techniques against it has drawn wide attention from cryptographers ever since the algorithm was proposed. In collision attacks against SHA-1, constructing the differential path is an important step that affects the complexity of the attack. This study proposes the concept of a differential path with bitconditions and a method for constructing it. Such a path comprehensively describes the Boolean function difference, bitconditions, message difference, and working-state difference of each step. It is not only compatible with the original first-round path construction, but also removes the need for complicated techniques such as path reduction and message reduction in the last three rounds, so the differential path with bitconditions has good compatibility. Before presenting the construction algorithm, the study analyzes the possible values of the output Boolean function difference and its input bitconditions when the three input working-state differences are fixed; this forms the foundation for the later work. The construction algorithm is divided into three sub-algorithms: forward expansion, backward expansion, and connection. Forward and backward expansion mainly consider the relationship between the bitconditions on the working states and the output difference of the Boolean function; each forward step builds on the expansion result of the previous step, and likewise for the backward algorithm. The goal of the connection algorithm is to join the results of forward and backward expansion into a complete and valid differential path; its success determines whether the collision attack can proceed, and if it fails, the forward and backward expansions must be redone. To improve the success rate of the connection algorithm, this study proposes three related indexes: the number of updates to bitconditions, the compatibility of expansion results, and the accuracy of candidate sets. Analysis of these indexes yields their relationship to the success rate: the number of forward expansions is proportional to the number of updates to bitconditions, and for the compatibility of expansion results, the number of consistency checks between an expansion result and the original conditions is inversely proportional to the success rate; both indexes are affected by the number of forward or backward expansions. The accuracy of candidate sets refers to the correctness rate of the selectable working-state differences, Boolean function differences, and message differences; analysis finds that the order of expansions affects this accuracy, and two expansions in opposite directions are more accurate than two in the same direction, so the first four expansions of the connection algorithm alternate between forward and backward. Based on these conclusions, the optimal connection algorithm is selected from 25 candidates. Combined with an early-abort strategy, the proposed connection algorithm achieves a higher success rate and lower complexity. Theoretical analysis shows that this method helps improve the success rate of SHA-1 differential path construction, and valid differential paths usable for collision search can be constructed with the proposed algorithm.
LU Dian-Jun, LI Zhi-Hui, YAN Chen-Hong, LIU Lu
2022, 33(12):4804-4815. DOI: 10.13328/j.cnki.jos.006379
Abstract: This study proposes a verifiable multi-party quantum key agreement (QKA) protocol based on the X operation and four-qubit cluster states. The protocol allows two participants at a time to perform X operations on two particles of each four-qubit cluster state using their own keys, and to perform delayed measurements on the transformed cluster states, which ensures that each participant contributes equally to the agreed key. The protocol uses mutually unbiased bases particles as decoy particles and performs unitary operations on these decoy particles using a pair of function values of a symmetric binary polynomial, which not only enables eavesdropping checks but also achieves identity authentication between participants. The scheme can be applied to any setting with more than two participants. The security analysis shows that the proposed protocol can resist both external attacks and participant attacks. Compared with existing multi-party QKA protocols, the proposed protocol not only has advantages in its use of decoy particles but also achieves high qubit efficiency.
WU Hai-Bo, XU Yao-Gong, LI Jun
2022, 33(12):4816-4837. DOI: 10.13328/j.cnki.jos.006382
Abstract: Information-centric networking (ICN), a new future Internet architecture, has emerged with time and is widely used in the Internet of Things (IoT). Caching is an important feature of ICN and has an important impact on content transmission performance in ICN-IoT. Because data in ICN-IoT is updated frequently and has strict freshness requirements, traditional ICN caching techniques face challenges. This study proposes a distributed caching strategy based on content popularity and network topology that takes content freshness into consideration. To maximize caching efficiency, each caching node preferentially caches content with higher popularity that is closer to users. To adapt to the frequent updating of content in the IoT, a content caching reward prediction method based on grey prediction is proposed, which makes it possible to obtain the caching reward of new content quickly. Simulation results show that, compared with traditional caching strategies, the proposed scheme can effectively improve caching efficiency and hit ratio, reduce access delay, and improve the user experience.
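A minimal sketch of the grey prediction component, using the classical GM(1,1) model to extrapolate a caching-reward series one step ahead; the input series is illustrative, and the paper's reward definition is not reproduced.

```python
import numpy as np

def gm11_next(x):
    """Classical GM(1,1): fit dx/dt + a*x = b on the accumulated series by
    least squares, extrapolate one step, then de-accumulate."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # grey parameters
    n = len(x)
    x1_hat = lambda k: (x[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)                 # next original-series value

print(gm11_next([10, 12, 14, 17, 20]))               # quick reward estimate for new content
```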
ZHANG Lei-Min, DONG Jian-Feng, BAO Cui-Zhu, JI Shou-Ling, WANG Xun
2022, 33(12):4838-4850. DOI: 10.13328/j.cnki.jos.006368
Abstract: Video click-through rate (CTR) prediction is one of the important tasks in video recommendation. Based on click-through prediction, recommender systems can adjust the order of the recommended video sequence to improve recommendation performance. In recent years, with the explosive growth of videos, the video cold-start problem has become more and more serious. Aiming at this problem, a novel video click-through prediction model is proposed that utilizes both video content features and contextual features to improve CTR prediction; simulated training for the cold-start scenario and a neighbor-based new-video replacement method are also proposed to enhance the model's CTR prediction ability for new videos. The proposed model can predict CTR for both old and new videos. Experiments on two real-world video CTR datasets (Track_1_series and Track_2_movies) demonstrate the effectiveness of the proposed method. Specifically, by using both video content and contextual information, the proposed model improves CTR prediction for old videos and outperforms existing models on both datasets. For new videos, a baseline model that does not consider the cold-start problem achieves an AUC score of about 0.57, whereas the proposed model achieves much better AUC scores of 0.645 and 0.615 on Track_1_series and Track_2_movies, respectively, showing better robustness to the cold-start problem.
WU Kun-Yao, CHAI Yun-Peng, ZHANG Da-Fang, WANG Xin
2022, 33(12):4851-4868. DOI: 10.13328/j.cnki.jos.006359
Abstract:In recent years, traditional HDDs' areal density is reaching its limit. To extend the capacity of disk drives, several new storage techniques were proposed, including shingled magnetic recording (SMR), which is the first one to reach market among those new technologies. However, the shingled track structure of SMR disks encounters serious write amplification and performance declining when processing random write requests. Furthermore, constructing RAID5 based on SMR drives worsens the write amplification (WA) because the parity updating of RAID5 is very frequent to produce many random writes. This study, for current SMR disks' structure, finds that the first track of each band can be overwritten without impacting other tracks, because the wide write head can be moved a bit to cover both the first track and the guard region. In other words, the first track of each band can be called the free track, because it can be overwritten freely without causing any write amplification. Therefore, a new free-track-based RAID system (FT-RAID) is propose based on SMR drives, to fully develop the potentials of the overwriting-free region in SMR disk drives. FT-RAID is consisted of two key techniques, i.e., FT-Mapping and FT-Buffer. FT-Mapping is an SMR-friendly data mapping manner in RAID, which maps the frequently updated parity blocks to the free tracks; FT-Buffer adopts an SMR-friendly two-layer cache structures, in which the upper level can support in-place updating for hot blocks and the lower level can supply higher capacity for the write buffer. Both of them are designed to mitigate the degradation of performance by reducing SMR WA, leading to an 80.4% lower WA ratio than CMR-based RAID5 based on practical enterprise I/O workloads.