2019, 30(12):3579-3589. DOI: 10.13328/j.cnki.jos.005672
Abstract: As the most basic quantum entangled state, the EPR state plays an important role in quantum teleportation. In this work, universal single-qubit quantum teleportation adapted to any type of EPR channel is studied and generalized to universal N-qubit quantum teleportation. For the single-qubit teleportation mode, with each of the four EPR states serving as the quantum channel, a single-qubit universal circuit is designed by analyzing the relationship between the EPR states and the quantum gates. A two-qubit standard quantum teleportation circuit is then designed, its correctness is verified with Mathematica, and the design is generalized to an N-qubit quantum teleportation circuit. Next, the single-qubit universal circuit and the N-qubit teleportation circuit are fused, and finally a universal circuit for arbitrary N-qubit quantum teleportation is obtained. In the universal N-qubit circuit, the recipient of the information performs a parameterized unitary transformation whose parameters are determined by the type of EPR state prepared, which solves the problem of transmission failure caused by errors at the EPR preparation center.
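To make the channel-dependent correction concrete, the following NumPy sketch simulates single-qubit teleportation over each of the four EPR channels. The correction table `extra` is standard quantum-information material rather than the paper's circuit, which is not reproduced in the abstract.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

bell = {  # the four EPR states shared between sender (q1) and receiver (q2)
    "phi+": np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),
    "phi-": np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),
    "psi+": np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),
    "psi-": np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2),
}
# receiver's extra gate relative to the standard phi+ protocol: this is the
# "unitary transformation with parameters determined by the EPR type"
extra = {"phi+": I2, "phi-": Z, "psi+": X, "psi-": X @ Z}

def teleport(psi, channel):
    state = np.kron(psi, bell[channel])           # qubits (q0, q1, q2)
    state = np.kron(CNOT, I2) @ state             # sender: CNOT q0 -> q1
    state = np.kron(H, np.kron(I2, I2)) @ state   # sender: H on q0
    amps = state.reshape(2, 2, 2)
    for m0 in (0, 1):                             # all measurement outcomes
        for m1 in (0, 1):
            bob = amps[m0, m1, :]
            bob = bob / np.linalg.norm(bob)
            # standard correction X^m1 then Z^m0, then the EPR-type gate
            bob = extra[channel] @ np.linalg.matrix_power(Z, m0) \
                  @ np.linalg.matrix_power(X, m1) @ bob
            assert abs(np.vdot(psi, bob)) > 1 - 1e-9  # equal up to phase

psi = np.array([0.6, 0.8j])                       # arbitrary input qubit
for ch in bell:
    teleport(psi, ch)
print("all four EPR channels teleport correctly with their own correction")
```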
2019, 30(12):3590-3604. DOI: 10.13328/j.cnki.jos.005598
Abstract: The quantified constraint satisfaction problem (QCSP) is a central problem in artificial intelligence and automated reasoning, and tractable classes are an important tool for analyzing its computational complexity. This study proposes a new method to determine the tractability of quantified variables by analyzing constraint structures and the ordering of universally quantified variables in the prefix of a binary QCSP. Based on this method, an existing tractable class is extended with the broken-triangle property, yielding a more general hybrid tractable class. Furthermore, an application is presented that identifies backdoor sets through the new tractable class, and experimental results are analyzed to compare the sizes of the backdoor sets identified by these hybrid tractable classes. To perform the experiments, a state-of-the-art QCSP solver based on a backtracking algorithm was modified by integrating a backdoor set detection module; the advantage of the new generalized tractable class is that the backdoor sets it identifies are smaller than those of the existing class on randomly generated instances. Finally, it is indicated that the proposed method can be employed to extend other hybrid tractable classes.
2019, 30(12):3605-3621. DOI: 10.13328/j.cnki.jos.005611
Abstract: Because complementation is simple for alternating (tree) automata and they are equivalent in expressive power to nondeterministic (tree) automata, the study of alternating (tree) automata has become a new research area in automata theory and model checking. Based on the notions of L-valued alternating automata and alternating tree automata, the notion of L-valued alternating tree automata is introduced, and the closure properties and expressive power of L-valued alternating tree automata are studied. First, it is proved that taking the dual of the transitions and replacing the weight of each final state by its complement yields a new L-valued alternating tree automaton that is the complement of the original one. Next, closure under conjunction and disjunction of the languages accepted by L-valued alternating tree automata is established. Finally, the expressive power of L-valued alternating tree automata, L-valued tree automata, and L-valued nondeterministic automata is discussed: the equivalence of L-valued alternating tree automata and L-valued tree automata is proved, conversion algorithms between them are given and their complexities analyzed, and a method is provided for simulating L-valued alternating tree automata with L-valued nondeterministic automata.
2019, 30(12):3622-3636. DOI: 10.13328/j.cnki.jos.005599
Abstract: In this study, a parallel computation model named delta-stepping synchronous parallel (DSP) is introduced, which is a more general form of BSP (bulk synchronous parallel). Compared with BSP, delta steps of local computation replace the single local computation in each superstep. The added local computation is named the speculative computation step (SCStep). SCStep further exploits the locality hidden in the data and accelerates value diffusion, and it proves dramatically effective in reducing the number of iterations and shortening the convergence time. Meanwhile, it is found that using SCStep excessively is inappropriate, considering the increased computation overhead. To identify applicable algorithms and prove correctness, the iterative process is formalized and the convergence condition is deduced. Finally, case studies and evaluations show that the DSP model can significantly reduce the number of iterations and shorten the convergence time by dozens of times.
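The following single-machine sketch illustrates the DSP idea on shortest paths. It is an assumed toy, not the paper's system: each superstep runs `delta` rounds of local-only relaxation (the SCStep) inside every partition before the single global exchange, and `delta = 1` degenerates to plain BSP.

```python
def dsp_sssp(edges, part, source, delta):
    """edges: {u: [(v, w), ...]}, part: {u: partition id}."""
    dist = {u: float("inf") for u in edges}
    dist[source] = 0.0
    supersteps, changed = 0, True
    while changed:
        changed = False
        supersteps += 1
        for _ in range(delta):                # SCStep: exploit locality
            for u, nbrs in edges.items():
                for v, w in nbrs:
                    if part[u] == part[v] and dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
                        changed = True
        for u, nbrs in edges.items():         # global synchronization step:
            for v, w in nbrs:                 # cross-partition edges only
                if part[u] != part[v] and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    changed = True
    return dist, supersteps

# chain 5 -> 4 -> ... -> 0, two partitions: a larger delta lets values
# diffuse inside a partition before paying for a synchronization
edges = {i: [(i - 1, 1.0)] for i in range(1, 6)}
edges[0] = []
part = {i: 0 if i < 3 else 1 for i in range(6)}
for d in (1, 3):
    dist, steps = dsp_sssp(edges, part, 5, d)
    print(f"delta={d}: {steps} supersteps, dist[0]={dist[0]}")
```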
ZHOU Ta , DENG Zhao-Hong , JIANG Yi-Zhang , WANG Shi-Tong
2019, 30(12):3637-3650. DOI: 10.13328/j.cnki.jos.005590
Abstract: Although the Takagi-Sugeno-Kang (TSK) fuzzy system is widely used in practically every field, how to enhance its classification accuracy and interpretability remains a research focus. In this study, a deep TSK fuzzy classifier, RCC-DTSK-C, is proposed that randomly selects and combines features and offers triply concise interpretability of its fuzzy rules. RCC-DTSK-C has several distinctive structural properties: (1) it consists of many base training units, each of which can be trained independently; following the principle of stacked generalization, the input of the next base training unit consists of the training set together with a random projection of the prediction results of the current base training unit. (2) The hidden layer of each base training unit is represented by triply concise interpretable fuzzy rules defined in terms of randomly selected features; these features are obtained by dividing the input into a variable number of fuzzy partitions and randomly combining rules while keeping the same input space in every base training unit. (3) The source dataset is mapped into each independent base training unit as the same input space, which effectively ensures that all features of the source data are preserved in every training unit. Extensive experimental results show that RCC-DTSK-C achieves enhanced classification performance and triply concise interpretability of its fuzzy rules.
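A sketch of the stacked-generalization wiring described in (1) and (3) follows, with a plain logistic model standing in for each fuzzy-rule base training unit (an assumption: the paper's units are TSK rule layers, not logistic classifiers).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_stack(X, y, n_units=5, proj_dim=3, seed=0):
    rng = np.random.default_rng(seed)
    units, projs, X_in = [], [], X
    for _ in range(n_units):
        clf = LogisticRegression(max_iter=1000).fit(X_in, y)
        proba = clf.predict_proba(X_in)
        P = rng.normal(size=(proba.shape[1], proj_dim))  # random projection
        units.append(clf)
        projs.append(P)
        # next unit: the original training set plus projected predictions,
        # so the source input space is preserved in every base unit
        X_in = np.hstack([X, proba @ P])
    return units, projs

def predict_stack(units, projs, X):
    X_in = X
    for clf, P in zip(units, projs):
        proba = clf.predict_proba(X_in)
        X_in = np.hstack([X, proba @ P])
    return units[-1].classes_[proba.argmax(axis=1)]
```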
CHEN Xiao-Ji , SHI Chuan , ZHOU Ai-Min , WU Bin
2019, 30(12):3651-3664. DOI: 10.13328/j.cnki.jos.005602
Abstract: In multiobjective evolutionary algorithms, how the optimal solutions are selected from the offspring candidate set significantly affects the optimization process. At present, this selection is largely based on real objective values or on surrogate models that estimate objective values. However, such selection is usually very time-consuming or suffers from poor accuracy, especially for real-world complex optimization problems. Recently, some researchers have begun to employ supervised classification to assist offspring selection, but these approaches struggle to prepare exact positive and negative samples or require time-consuming parameter tuning. To overcome these disadvantages, a novel hybrid individual selection mechanism is proposed that integrates classification and a surrogate to select the optimal solutions from the offspring candidate set. Concretely, in each generation, the selection mechanism first employs a classifier to select good solutions; it then uses a cheap surrogate model to estimate the objective value of each good solution; finally, it sorts these good solutions according to the estimated objective values and selects the best one as the offspring solution. Based on the classical decomposition-based multiobjective evolutionary algorithm MOEA/D, the hybrid individual selection mechanism is used to design a new algorithm framework, MOEA/D-CS. Experimental results show that, compared with current popular decomposition-based multiobjective evolutionary algorithms, the proposed algorithm obtains the best performance.
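The three-stage selection step might look like the sketch below, where a k-NN classifier and a nearest-neighbor surrogate are assumed stand-ins for whatever concrete models MOEA/D-CS uses; `archive_X`/`archive_F` hold previously evaluated points and their objective vectors, and `scalarize` is a decomposition function such as weighted Tchebycheff.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hybrid_select(candidates, archive_X, archive_F, labels, scalarize):
    # 1) classification: keep only candidates predicted "good" (label 1)
    clf = KNeighborsClassifier(n_neighbors=3).fit(archive_X, labels)
    good = [x for x in candidates if clf.predict(x[None, :])[0] == 1]
    if not good:                       # fall back if the classifier rejects all
        good = list(candidates)

    # 2) cheap surrogate: estimate objectives by the nearest evaluated point
    def estimate(x):
        i = np.argmin(np.linalg.norm(archive_X - x, axis=1))
        return archive_F[i]

    # 3) sort by the (estimated) scalarized objective and return the best
    return min(good, key=lambda x: scalarize(estimate(x)))
```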
CHEN Jie-Hao , ZHANG Qin , WANG Shu-Liang , SHI Ji-Yun , ZHAO Zi-Qian
2019, 30(12):3665-3682. DOI: 10.13328/j.cnki.jos.005640
Abstract: With the rapid development of Internet advertising, predicting the target user's click-through rate (CTR) for an Internet advertisement has become a key technology for precise advertising and a hot topic in computational advertising and the application of deep neural networks. To improve the accuracy of CTR prediction, this work proposes a prediction model based on deep belief nets and studies the influence of the number of hidden layers and of units per layer on prediction results, through experiments on 10 million samples from the dataset provided by the Kaggle data mining platform. To address the training efficiency of deep belief nets in large-scale industrial settings, this study conducted extensive experiments showing that the loss function of deep belief nets contains many stagnation points that severely hinder the training process. To improve training efficiency, starting from the characteristics of the network loss function, this study further proposes an optimization fusion model that combines stochastic gradient descent with an improved particle swarm optimization algorithm; the fusion algorithm can escape the stagnation points and continue the normal training process. Experimental results show that, compared with the traditional prediction model based on gradient boosting regression trees and logistic regression, and with the deep learning model based on a fuzzy deep neural network, the proposed model achieves better prediction accuracy, improving mean squared error, area under the curve, and LogLoss by 2.39%, 9.70%, and 2.46% over the former and by 1.24%, 7.61%, and 1.30% over the latter. The fusion method improves the training efficiency of deep belief nets by 30% to 70%.
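A toy sketch of the fusion idea follows: run SGD until progress stalls, then one PSO phase to jump out of the stagnation point, then resume SGD from the best particle. Vanilla PSO and a generic `loss`/`grad` pair are assumptions standing in for the paper's improved PSO and DBN loss.

```python
import numpy as np

def sgd_pso(loss, grad, w0, lr=0.05, stall_eps=1e-6, pso_iters=30, seed=0):
    rng = np.random.default_rng(seed)
    w, prev = w0.copy(), np.inf
    for _ in range(2000):
        w -= lr * grad(w)                          # ordinary SGD step
        if abs(prev - loss(w)) < stall_eps:        # stagnation detected
            # PSO phase: a swarm sampled around the stalled point
            pos = w + rng.normal(scale=0.5, size=(20, w.size))
            vel = np.zeros_like(pos)
            pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
            for _ in range(pso_iters):
                g = pbest[pbest_f.argmin()]        # global best particle
                vel = (0.7 * vel
                       + 1.5 * rng.random(pos.shape) * (pbest - pos)
                       + 1.5 * rng.random(pos.shape) * (g - pos))
                pos += vel
                f = np.array([loss(p) for p in pos])
                better = f < pbest_f
                pbest[better], pbest_f[better] = pos[better], f[better]
            w = pbest[pbest_f.argmin()].copy()     # resume SGD from here
        prev = loss(w)
    return w
```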
2019, 30(12):3683-3693. DOI: 10.13328/j.cnki.jos.005596
Abstract: The AGM postulates govern belief revision (revision by a single belief), and the DP postulates govern iterated revision (revision by a finite sequence of beliefs). Li gave an R-calculus for R-configurations Δ|Γ, where Δ is a set of atomic formulas or negations of atomic formulas, and Γ is a finite set of formulas. With the aim of preserving as much information as possible from the statements to be revised, another definition of minimal change is considered: pseudo-subconcept-minimal (≤-minimal) change, where ≤ is the pseudo-subconcept relation. A new R-calculus TDL is then given that is sound and complete with respect to ≤-minimal change, such that Δ|Γ is reduced to a theory Δ∪Θ in TDL (denoted by ⊢TDL Δ|Γ ⇒ Δ,Θ) if and only if Θ is a ≤-minimal change of Γ by Δ.
CHEN Xiang , ZHAO Ying-Quan , GU Qing , NI Chao , WANG Zan
2019, 30(12):3694-3713. DOI: 10.13328/j.cnki.jos.005604
Abstract: By mining software repositories, software defect prediction can construct models to predict potentially defective modules of projects under test in advance and thus optimize the allocation of testing resources. When effort-aware performance measures are considered, the comparison between supervised and unsupervised methods has been a recent hot topic. In a recent study of the file-level defect prediction problem, Yan et al. conducted empirical studies using the unsupervised and supervised methods considered by Yang et al. and concluded that some unsupervised methods can outperform the supervised ones. This paper conducts empirical studies based on 10 projects from the open source community. The final results show that under the within-project defect prediction scenario, the MULTI method improves on the best unsupervised method and the best supervised method by 105.81% and 123.84% on average under the ACC performance measure, and by 35.61% and 38.70% on average under the POPT performance measure. Under the cross-project defect prediction scenario, MULTI improves on the best unsupervised method and the best supervised method by 22.42% and 34.95% on average under ACC, and by 11.45% and 17.92% on average under POPT. Under the PMI and IFA performance measures proposed by Huang et al., the MULTI method involves a trade-off, but it is still better than the two best unsupervised methods when ACC and POPT are considered. In addition, MULTI is compared with the recently proposed OneWay and CBS methods, and the results show that it performs significantly better than both; MULTI also shows superiority under the F1 performance measure. Finally, an analysis of model construction time shows that the overhead of the MULTI method is acceptable.
LIN Ze-Qi , ZOU Yan-Zhen , ZHAO Jun-Feng , CAO Ying-Kui , XIE Bing
2019, 30(12):3714-3729. DOI: 10.13328/j.cnki.jos.005609
Abstract: Natural language text is a common form of knowledge representation in various software artifacts. In software reuse practice, developers usually need to search large amounts of textual resources. This paper presents a software text semantic search approach based on code structure knowledge. The approach extracts a code structure graph from software source code and leverages it as a domain-specific knowledge base for analyzing the semantic meaning of natural language texts. This semantic information is combined with information retrieval technology to semantically re-rank text search results. Experimental results on a StackOverflow dataset show that the approach achieves at least a 13.77% improvement in mean average precision (MAP) compared with several text retrieval approaches.
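The re-ranking step can be pictured as the minimal sketch below, assuming a linear combination of the two scores; the paper's actual scoring over its code structure graph is more involved, and `ir_score`, `sem_score`, and `lam` are placeholders.

```python
def rerank(results, ir_score, sem_score, lam=0.5):
    """results: candidate texts; ir_score/sem_score: text -> float in [0, 1].
    Combines the lexical retrieval score with the semantic relevance score
    derived from the code structure graph, then sorts best-first."""
    return sorted(results,
                  key=lambda t: lam * ir_score(t) + (1 - lam) * sem_score(t),
                  reverse=True)
```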
LAI Ying-Xu , LIU Yan , LIU Jing
2019, 30(12):3730-3749. DOI: 10.13328/j.cnki.jos.005603
Abstract: Trusted connect architecture (TCA) technology was introduced to solve the problem of trusted connection between networks in the "Pushing Forward the Internet plus Advanced Manufacturing" plan. Based on the ideas of TCA, this study proposes a trusted connection protocol (TCA-SNI) for trusted authentication and evaluation between networks. The two-way authentication process is introduced and the interactions of TCA-SNI are given. An extended SVO logic system is used to reason about the protocol's logic, proving that the protocol is safe and reliable, and the protocol is further analyzed under the Dolev-Yao model. Experimental results show that the proposed protocol achieves its security goals and can withstand attacks in real networks.
SHEN Ying-Zhu , GU Chun-Xiang , CHEN Xi , ZHANG Xie-Li , LU Zheng-Yu
2019, 30(12):3750-3764. DOI: 10.13328/j.cnki.jos.005612
Abstract: OpenVPN is widely used in real networks, so assessing its security has important practical significance. In this study, state fuzzing is used to perform black-box testing on an OpenVPN implementation, automatically inferring the state machine of the target system based on model learning methods from automata theory. A time compression model is proposed, and the OpenVPN state machine is simplified to remove redundant states and transitions. The behavioral characteristics of the protocol state machine are then obtained accurately, uncovering a number of special behavior paths and potential security risks outside the expected behavior path. This work provides a new idea for the security evaluation of OpenVPN and is valuable for uncovering the internal design details of similar security protocols that are widely used but poorly specified.
CAI Ling , WANG Xing-Wei , WANG Jin-Kuan , HUANG Min
2019, 30(12):3765-3781. DOI: 10.13328/j.cnki.jos.005621
Abstract: To improve caching performance in information-centric networks, an adaptive caching strategy based on concept drift learning (CDL) is proposed. Considering the complementary roles of node data and content data in improving caching performance, the status data streams of nodes and content are first treated as network resources; then the mapping relationship, namely the concept, between the multidimensional state attribute data derived from the status data streams and the matching relationship value is mined. Using this mapping function, a matching algorithm is proposed to predict the matching relationship between nodes and content in the next time period. To improve the accuracy of the matching algorithm, a concept drift detection algorithm based on information entropy is proposed: when concept drift in the state attribute data is captured via information entropy, a new mapping relationship is learned by the proposed recurring concept caching algorithm. Simulation results show that CDL outperforms CEE, LCD, Prob, and OPP in reducing network operation cost and enhancing the quality of user experience.
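An entropy-based drift detector of the kind described above can be sketched as follows; the window size, threshold, and reset policy are illustrative assumptions, since the paper defines its own criterion over the node/content state attribute streams.

```python
import math
from collections import Counter, deque

def entropy(window):
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in Counter(window).values())

def detect_drifts(stream, win=200, threshold=0.25):
    """Flag positions where the sliding-window entropy of the attribute
    stream departs from the reference entropy of the current concept."""
    ref, cur, drifts = None, deque(maxlen=win), []
    for i, x in enumerate(stream):
        cur.append(x)
        if len(cur) == win:
            h = entropy(cur)
            if ref is None:
                ref = h                   # entropy of the initial concept
            elif abs(h - ref) > threshold:
                drifts.append(i)          # drift: relearn the mapping here
                ref = h                   # adopt the new concept as reference
    return drifts
```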
NI Wei-Wei , LI Ling-Qi , LIU Jia-Qiang
2019, 30(12):3782-3797. DOI: 10.13328/j.cnki.jos.005583
Abstract: With the thriving of location-based services, privacy-preserving nearest neighbor querying over road networks has received widespread attention. Most existing methods depend heavily on a trusted anonymizing server and an auxiliary server-side global index over the whole road network; the trusted anonymizing server tends to become the bottleneck of the querying system, and the global index structure commonly has low utilization. To address these problems, a new privacy-preserving k nearest neighbor querying scheme is proposed that leverages local indexes at the server side. Two rounds of handshakes between the query user and the server remove the dependence on any trusted anonymizing server. In the first round, the query user generates, at the client side, an anonymous query sequence satisfying l-diversity of road nodes by requesting the road sections located within a specified cloaking area; the anonymous query sequence and the number of target objects k are then submitted to the server to obtain candidate query results. At the server, the whole road network is partitioned into a series of basic units, and a hitting-frequency-guided segmented query processing strategy is proposed that distinguishes basic units with high hitting frequency from those with low hitting frequency. For units with high hitting frequency, a local Voronoi-R* index is designed and constructed, from which the k nearest neighbors of each location in the anonymous query sequence can be obtained easily, promising higher query processing efficiency. For units with low hitting frequency, the traditional server-side query strategy is applied, finding nearest neighbors through incremental network expansion. This strategy avoids the redundant query processing and index storage cost of a global server-side index, as well as the undifferentiated query pattern such a global index imposes. Theoretical and empirical analysis demonstrates the effectiveness and efficiency of the proposed solution.
LUO En-Tao , WANG Guo-Jun , LIU Qin , MENG Da-Cheng , TANG Ya-Yuan
2019, 30(12):3798-3814. DOI: 10.13328/j.cnki.jos.005601
Abstract: With the rapid development of mobile devices and online social networks, users of mobile social networks (MSNs) can easily discover and make new social interactions with others through profile matching. However, personal profiles usually contain sensitive information, and the emerging requirement of profile matching in proximity-based mobile social networks may leak this sensitive information and violate users' privacy. A profile matching protocol for MSNs is proposed in which users employ a confusion matrix transformation algorithm and dot products to achieve secure and efficient matching; at the same time, users can customize the matching metrics to reflect their own matching preferences and make the results more precise. In addition, opportunistic computing is adopted to simulate real friend-making scenarios and guarantee effectiveness. Security analysis shows that the proposed scheme offers stronger privacy and serviceability with lower computation and communication cost. Evaluated on real social network data, the results demonstrate that the proposed scheme is superior to existing works.
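Why a shared matrix transformation can hide raw profiles while keeping dot-product matching exact is illustrated below; an orthogonal matrix is assumed here as a simplified stand-in for the paper's confusion matrix transformation.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                                          # profile attribute dimension
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # shared secret, orthogonal

alice = rng.integers(0, 2, d).astype(float)     # binary interest profiles
bob = rng.integers(0, 2, d).astype(float)

# only transformed profiles are exchanged; Q^T Q = I preserves dot products,
# so the matching score is exact without revealing the raw attribute vectors
score = (Q @ alice) @ (Q @ bob)
assert np.isclose(score, alice @ bob)
print("matching score:", score)
```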
ZHANG Shu-Guang , XIAN He-Qun , WANG Li-Ming , LIU Hong-Yan
2019, 30(12):3815-3828. DOI: 10.13328/j.cnki.jos.005610
Abstract: Deduplication ensures that only one copy of identical data is stored on the cloud server. To protect data privacy, users usually encrypt their data before uploading. When encrypted with different keys, the same data may yield different ciphertexts, making it difficult for the cloud server to identify and eliminate duplicate copies. Most current solutions rely heavily on online trusted third parties, resulting in unsatisfactory efficiency and security. A secure deduplication scheme for cloud encrypted data is proposed that supports offline key delivery. By constructing a duplicate check tag, it can be verified whether encrypted data originate from the same plaintext. Ciphertext-policy attribute-based encryption is used to ensure that the check tag is generated securely. The initial uploader of a given piece of data can deliver the encryption key to subsequent uploaders via the cloud server in an offline manner, so deduplication can be completed without the online participation of any trusted third party. Security analysis and proofs are presented, and the feasibility and efficiency of the scheme are verified by simulation experiments.
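To make the notion of a duplicate check tag concrete, here is the classic convergent-encryption tag idea, shown only as a simplified illustration of how a server can detect duplicates of encrypted data; the paper's scheme derives its tag under CP-ABE, which is not reproduced here.

```python
import hashlib

def dedup_tag(plaintext: bytes) -> str:
    key = hashlib.sha256(plaintext).digest()   # key derived from the data
    return hashlib.sha256(key).hexdigest()     # tag does not expose the key

# identical plaintexts always yield the same tag, so the server can keep a
# single copy, while the tag alone does not hand it the encryption key
assert dedup_tag(b"same file") == dedup_tag(b"same file")
assert dedup_tag(b"same file") != dedup_tag(b"other file")
```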
XU Ping-Hua , HU Wen-Bin , QIU Zhen-Yu , NIE Cong , TANG Chuan-Hui , GAO Kuang , LIU Zhong-Zhou
2019, 30(12):3829-3845. DOI: 10.13328/j.cnki.jos.005593
Abstract: Community detection is a popular and difficult problem in the field of social network analysis. Most current research focuses on optimizing the modularity index, evaluating the similarity of nodes, or designing models fitted to particular networks. These approaches usually suffer from two problems: (1) only a few of them can handle directed as well as undirected networks; and (2) because real-world networks are more complex than synthetic ones, many community detection strategies do not perform well on real-world networks. To solve these problems, this paper presents a random-walk-based algorithm, CDATP, for community detection in complex networks. Unlike existing random-walk-based methods, it designs asymmetric transition probabilities for the nodes according to network topology and other information, and it applies the law of event propagation to the evaluation of node importance. CDATP performs well on both real-world and synthetic networks.
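A random walk with asymmetric, topology-derived transition probabilities can be sketched as below; weighting by the target's out-degree is an illustrative assumption, since CDATP defines its own probabilities and node-importance rule.

```python
import random
from collections import Counter

def walk(adj, start, length, seed=0):
    """adj: {node: [out-neighbors]} (works for directed networks too)."""
    rng = random.Random(seed)
    u, visits = start, Counter()
    for _ in range(length):
        nbrs = adj[u]
        if not nbrs:                      # dead end in a directed network
            break
        # asymmetric: weight by the target's out-degree, so in general
        # p(u -> v) != p(v -> u)
        u = rng.choices(nbrs, weights=[len(adj[v]) + 1 for v in nbrs], k=1)[0]
        visits[u] += 1
    return visits  # co-visit statistics can then feed community assignment
```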
ZHENG Jian-Wei , LI Zhuo-Rong , WANG Wan-Liang , CHEN Wan-Jun
2019, 30(12):3846-3861. DOI: 10.13328/j.cnki.jos.005606
Abstract: The explosion of information has evoked a leading wave of big data research in recent years. Despite the many empirical successes of spectral clustering algorithms, clustering high-dimensional data remains challenging due to the curse of dimensionality. This study proposes a novel algorithm, joint Laplacian regularization and adaptive feature learning (LRAFL), which adaptively learns feature weights and fits feature selection and clustering into a unified framework, rather than adopting the two-phase strategy of typical approaches. With a new rank constraint imposed on the Laplacian matrix, the number of connected components in the resulting similarity matrix is exactly equal to the number of clusters. An effective approach is also proposed to solve the formulated optimization problem, and comprehensive analyses of convergence behavior, computational complexity, and parameter determination are presented. The proposed algorithm achieves surprisingly sound experimental results on synthetic data and benchmark datasets when compared with related state-of-the-art clustering approaches.
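The rank constraint rests on a standard spectral fact; a minimal sketch of the constraint in LaTeX notation follows, where the objective J(S, w) and the weighting are placeholders, not the paper's exact formulation.

```latex
% The multiplicity of the zero eigenvalue of a graph Laplacian equals the
% number of connected components of the graph, so forcing rank(L_S) = n - c
% makes the learned similarity matrix S have exactly c components (clusters):
\min_{S,\,w}\ \mathcal{J}(S, w)
\quad\text{s.t.}\quad
L_S = D_S - \frac{S + S^{\top}}{2},\qquad
\operatorname{rank}(L_S) = n - c,
```

where $D_S$ is the degree matrix of $(S+S^{\top})/2$, $n$ the number of samples, and $c$ the desired cluster number.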
JIN Yao , XIONG Yu-Long , ZHOU Yong-Quan , ZHANG Hua-Xiong , HE Li-Li
2019, 30(12):3862-3875. DOI: 10.13328/j.cnki.jos.005586
Abstract: The traditional geodesic-based Poisson merging method requires time-consuming computation of rotation and scale fields, which restricts its interactive applications. This study proposes an efficient mesh merging method with a reusable Laplacian operator. The method reduces the problems of geometry merging and of interpolating the rotation and scale fields to solving linear systems with the same Laplacian matrix. It obtains the eight scalar fields used in the merging step by performing one Cholesky decomposition and several back substitutions, which is two orders of magnitude faster than the traditional geodesic-based method. To optimize the mesh near the merging boundary, it uses a robust method based on constrained Delaunay triangulation and discrete minimal surfaces; meanwhile, it reuses the Laplacian operator again to merge texture coordinates along with the geometry. The proposed method can handle models with complex topology and multiple boundaries, and its results are comparable to those of the traditional Poisson method at much lower time cost. These advantages make it capable of meeting the requirements of interactive response.
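The "factor once, back-substitute many times" pattern at the heart of the speedup is sketched below; the toy matrix and right-hand sides are assumptions, and SciPy's `factorized` performs a sparse LU here, standing in for the Cholesky factorization that the paper reuses on a symmetric positive definite Laplacian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import factorized

n = 1000
# toy SPD Laplacian-like matrix of a path graph, shifted to be nonsingular
L = sp.diags([-1, 2.01, -1], [-1, 0, 1], shape=(n, n), format="csc")

solve = factorized(L)                  # one expensive factorization
# eight cheap back substitutions, one per scalar field used in merging
fields = [solve(np.random.rand(n)) for _ in range(8)]
```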
TANG Shu , WAN Sheng-Dao , YANG Shu-Li , XIE Xian-Zhong , XIA Ming , ZHANG Xu
2019, 30(12):3876-3891. DOI: 10.13328/j.cnki.jos.005587
Abstract: Accurate motion blur kernel (MBK) estimation is the key to successful single-image blind motion deblurring. Nevertheless, because of imperfect selection of useful edges and simplistic regularizer design, the MBKs estimated by existing methods are inaccurate and contain various flaws. Therefore, to estimate an accurate MBK, this study proposes a spatial-scale-information method for MBK estimation. First, in order to accurately extract useful image edges and remove pernicious image structures, an image smoothing model based on spatial scale information is proposed, so that useful image edges can be extracted accurately and quickly. Then, according to the characteristics of the MBK, a regularization constraint model that combines a spatial L0 norm with a gradient L2 norm is proposed to preserve both the continuity and the sparsity of the MBK. By combining the extracted useful image edges with the proposed regularization constraint model, an accurate MBK estimate can be achieved. Finally, a half-quadratic splitting alternating optimization strategy is employed to solve the proposed model. Extensive experimental results demonstrate that the proposed method estimates more accurate MBKs and obtains higher-quality deblurred images in terms of both quantitative metrics and subjective visual quality.
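One plausible way to write such a kernel-estimation objective is sketched below; the data term, weights α and β, and symbols are assumptions, with only the pairing of a spatial L0 term (sparsity) and a gradient L2 term (continuity) taken from the abstract.

```latex
% k: motion blur kernel; I_s: smoothed image with extracted useful edges;
% B: blurred input; \otimes: convolution. The L0 term enforces sparsity of
% the kernel and the L2 term on \nabla k enforces its continuity; the L0
% term is handled via half-quadratic splitting with an auxiliary variable.
\min_{k}\ \|\nabla I_s \otimes k - \nabla B\|_2^2
\;+\; \alpha\,\|k\|_0
\;+\; \beta\,\|\nabla k\|_2^2
```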
2019, 30(12):3892-3906. DOI: 10.13328/j.cnki.jos.005592
Abstract: Because the geometric active contour model is sensitive to the position of the initial contour, the distance regularized level set evolution (DRLSE) model must place the initial contour curve entirely inside or outside the target boundary. Adaptive level set evolution (ALSE) for contour extraction can reduce the influence of the initial contour's location; however, both models tend to get trapped at false boundaries, leak through weak edges, and resist noise poorly. This paper presents a novel active contour model that combines an adaptive sign function with an adaptive edge indicator function. This improvement makes the model robust to initial curves and addresses the problems of slow convergence and leakage through weak edges. In addition, a new distance regularization term is presented, which makes the evolution more stable. Experiments on real images show that the proposed model not only improves segmentation accuracy and reduces segmentation time, but also enhances robustness to initial contours.