CHU Bei , LI Zhan-Shan , ZHANG Meng-Lin , YU Hai-Hong
2018, 29(9):2547-2558. DOI: 10.13328/j.cnki.jos.005395
Abstract:In classification, feature selection has been an important but difficult problem. Recent research has shown that feature selection using the forest optimization algorithm (FSFOA) achieves good classification performance and dimensionality reduction ability. However, the randomness of its initialization phase, the limitations of its updating mechanism, and the inferior quality of the new trees generated in the local seeding stage severely limit the classification performance and dimensionality reduction ability of the algorithm. In this paper, a new initialization strategy and updating mechanism are adopted and a greedy strategy is added to the local seeding stage, forming a new feature selection algorithm (IFSFOA) that aims to maximize classification performance while minimizing the number of selected features. In the experiments, IFSFOA uses SVM, J48, and KNN classifiers to guide the learning process and is tested on datasets from the UCI machine learning repository. The results show that, compared with FSFOA, IFSFOA achieves a significant improvement in classification performance and dimensionality reduction. Compared with efficient feature selection methods proposed in recent years, IFSFOA remains highly competitive in both accuracy and dimensionality reduction.
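As background for the wrapper-style evaluation such classifier-guided feature selection relies on, the following is a minimal sketch: a candidate feature subset is scored by a classifier's cross-validated accuracy combined with a reward for smaller subsets. The weighting scheme, classifier settings, and data here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical fitness function for wrapper feature selection
# (illustrative only; not the authors' exact IFSFOA formulation).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.99):
    """Score a binary feature mask: trade off accuracy against subset size."""
    if mask.sum() == 0:                      # empty subsets are invalid
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    ratio = 1.0 - mask.sum() / mask.size     # reward smaller subsets
    return alpha * acc + (1 - alpha) * ratio

# Example: score a random mask over a 30-feature synthetic dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)
print(fitness(rng.integers(0, 2, size=30), X, y))
```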
LIU Yi , CAO Jian-Jun , DIAO Xing-Chun , ZHOU Xing
2018, 29(9):2559-2579. DOI: 10.13328/j.cnki.jos.005394
Abstract:With the development of big data and the wide application of machine learning, data from all walks of life is growing massively. High dimensionality is one of its most important characteristics, and applying feature selection to reduce dimensionality is one of the main preprocessing methods for high-dimensional data. The stability of feature selection is an important research direction; it refers to the robustness of the selection results with respect to small changes in the dataset composition. Improving the stability of feature selection helps to identify relevant features, increases experts' confidence in the results, and further reduces the complexity and cost of acquiring the original data. This paper reviews current methods for improving stability and presents a classification of those methods, analyzing and comparing the characteristics and range of application of each category. It then summarizes the evaluation measures for the stability of feature selection, analyzes the performance of stability measurement, and validates the effectiveness of four ensemble approaches through experiments. Finally, it discusses the limitations of current work and offers a perspective on future work in this research area.
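To make the notion of stability concrete, a commonly used family of measures compares the feature subsets selected on perturbed versions of the data. The sketch below uses average pairwise Jaccard similarity, which is one standard choice assumed here for illustration; the paper surveys several such measures.

```python
# Illustrative stability measure: average pairwise Jaccard similarity of
# the feature subsets selected on different perturbations of the data.
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def stability(subsets):
    """subsets: list of selected feature-index collections, one per perturbation."""
    pairs = list(combinations(subsets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Three selection runs on slightly different samples of the same dataset
print(stability([{0, 2, 5, 7}, {0, 2, 5, 9}, {0, 2, 7, 9}]))
```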
HE Yi-Chao , WANG Xi-Zhao , ZHAO Shu-Liang , ZHANG Xin-Lu
2018, 29(9):2580-2594. DOI: 10.13328/j.cnki.jos.005400
Abstract:In order to solve combinatorial optimization problems in discrete domains with evolutionary algorithms, this study draws on the design concepts of the genetic algorithm (GA), binary particle swarm optimization (BPSO), and binary differential evolution with hybrid encoding (HBDE) to propose a simple and practical method, based on the idea of mapping transformation, for designing discrete evolutionary algorithms (DisEA). This method is named the encoding transformation method (ETM). To illustrate the practicability of ETM, a discrete particle swarm optimization (DisPSO) algorithm based on ETM is presented. To show the feasibility and effectiveness of ETM, GA, BPSO, HBDE and DisPSO are used to solve the set-union knapsack problem (SUKP) and the discounted {0-1} knapsack problem (D{0-1}KP), respectively. The results show that, for SUKP and D{0-1}KP, the discrete evolutionary algorithms based on ETM, i.e., BPSO, HBDE and DisPSO, outperform GA. This indicates that designing a DisEA based on ETM is not only feasible but also very practical and efficient.
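The core idea of such mapping-based designs is to evolve real-valued vectors while evaluating them in the discrete domain through a transformation. The sketch below uses the sigmoid mapping familiar from BPSO and a plain 0-1 knapsack objective; both are illustrative assumptions rather than the paper's specific ETM or the SUKP/D{0-1}KP formulations.

```python
# Minimal sketch of the mapping-transformation idea: continuous positions
# are evolved, but fitness is computed after mapping them to {0,1}.
# The sigmoid threshold is the BPSO convention, assumed here for illustration.
import numpy as np

def to_binary(x, rng):
    """Map a real-valued position to a 0-1 vector via a stochastic sigmoid threshold."""
    return (1.0 / (1.0 + np.exp(-x)) > rng.random(x.size)).astype(int)

def knapsack_value(bits, values, weights, capacity):
    """Feasible 0-1 knapsack objective; infeasible selections score 0."""
    return float(values @ bits) if weights @ bits <= capacity else 0.0

rng = np.random.default_rng(1)
values, weights = rng.integers(1, 20, 15), rng.integers(1, 10, 15)
x = rng.normal(size=15)                      # one continuous "particle"
print(knapsack_value(to_binary(x, rng), values, weights, capacity=40))
```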
LIANG Jing , LIU Rui , YU Kun-Jie , QU Bo-Yang
2018, 29(9):2595-2605. DOI: 10.13328/j.cnki.jos.005398
Abstract:With the development of engineering technology and the refinement of mathematical models, a large number of optimization problems have evolved from low-dimensional optimization to large-scale complex optimization. Large-scale global optimization is an active research topic in real-parameter optimization. Based on an analysis of the characteristics of large-scale problems, a stochastic dynamic cooperative coevolution strategy is proposed in this article and incorporated into the dynamic multi-swarm particle swarm optimization algorithm, thereby realizing dual grouping of both the population and the decision variables. The performance of the proposed algorithm is then evaluated on the benchmark functions provided for the CEC2013 special session on large-scale global optimization, and its validity is verified by comparison with other algorithms.
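A common building block of cooperative coevolution for large-scale problems is the repeated regrouping of decision variables into smaller subcomponents that are optimized in turn. The sketch below shows only that generic regrouping loop, with a placeholder sub-optimizer; the grouping rule, group size, and objective are assumptions, not the paper's specific strategy.

```python
# Generic cooperative-coevolution skeleton with random dynamic grouping
# of decision variables (illustrative assumptions, not the paper's scheme).
import numpy as np

def random_groups(dim, group_size, rng):
    """Re-partition the variable indices into subcomponents each cycle."""
    idx = rng.permutation(dim)
    return [idx[i:i + group_size] for i in range(0, dim, group_size)]

def sphere(x):                       # stand-in large-scale objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # current best full solution
for cycle in range(3):               # each cycle: regroup, then optimize per group
    for group in random_groups(x.size, 100, rng):
        trial = x.copy()
        trial[group] *= 0.5          # placeholder for a real sub-optimizer step
        if sphere(trial) < sphere(x):
            x = trial
print(sphere(x))
```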
LI Xue-Qiang , HUANG Han , HAO Zhi-Feng
2018, 29(9):2606-2615. DOI: 10.13328/j.cnki.jos.005397
Abstract:Complex single-objective optimization is a hot topic in the field of evolutionary computation. Differential evolution and covariance-based evolution are considered two of the most effective algorithms for this problem, because difference information, which resembles a gradient, can effectively guide the algorithm toward the optimal solution, while covariance generates offspring populations on a statistical basis. In this paper, covariance information is introduced to improve the difference operator, and an evolutionary algorithm based on neighborhood difference and covariance information (DEA/NC) is proposed for complex single-objective optimization. Two commonly used difference operators are analyzed: one generated from randomly selected individuals and one combined with the current optimal solution. With the first, the difference information cannot serve as local gradient information to guide the search when the Euclidean distance between the randomly selected individuals is large; with the second, the population searches in the direction of the current optimal solution, which quickly traps it in local optima. This paper therefore proposes a neighborhood difference method that improves the effectiveness of the difference operator while avoiding the loss of population diversity. In addition, covariance is introduced to measure the correlation between decision variables, and this correlation is used to optimize the difference operator. Finally, the algorithm is tested on the CEC2014 single-objective optimization problems, and the results are compared with existing differential evolution algorithms. The experimental results show the effectiveness of the proposed algorithm.
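To illustrate the neighborhood-difference idea in the simplest terms, the sketch below forms a DE-style difference vector from individuals that are close to the base individual in decision space, so the difference behaves more like a local gradient. The neighborhood size and scale factor are illustrative assumptions, not the paper's DEA/NC settings.

```python
# Hedged sketch of a neighborhood-restricted difference operator
# (illustrative; not the exact DEA/NC operator).
import numpy as np

def neighborhood_mutation(pop, i, rng, k=5, F=0.5):
    """DE-style mutant for individual i using two of its k nearest neighbors."""
    d = np.linalg.norm(pop - pop[i], axis=1)
    neighbors = np.argsort(d)[1:k + 1]            # exclude i itself
    r2, r3 = rng.choice(neighbors, size=2, replace=False)
    return pop[i] + F * (pop[r2] - pop[r3])

rng = np.random.default_rng(2)
pop = rng.normal(size=(30, 10))                    # 30 individuals, 10 dimensions
print(neighborhood_mutation(pop, 0, rng)[:3])
```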
XU Xin , MU Nan , ZHANG Xiao-Long
2018, 29(9):2616-2631. DOI: 10.13328/j.cnki.jos.005396
Abstract:Saliency models based on human visual attention are an effective way to actively perceive important information in images, and they can play an important role in exploring the organization of large-scale perceptual information in the early stage of the visual cognitive process. However, nighttime images usually have low signal-to-noise ratio and low contrast, so conventional salient object detection models face great challenges in this scenario, such as noise and blurred weak textures. This paper proposes a salient object detection model for nighttime images based on region covariance and global search. The input image is first divided into superpixel regions and the covariance of each region is estimated. Then, covariance-feature-based local saliency and global-search-based image saliency are computed respectively. Finally, a graph-based diffusion process is performed to refine the saliency maps. Extensive experiments have been conducted to evaluate the performance of the proposed method against eleven state-of-the-art models on five benchmark datasets and a nighttime image dataset.
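For readers unfamiliar with region covariance descriptors, which the model builds on, the sketch below summarizes an image region by the covariance of simple per-pixel features. The particular feature set (intensity, gradients, pixel coordinates) is a common convention assumed here for illustration only.

```python
# Minimal region covariance descriptor sketch (generic technique,
# not the paper's exact feature design).
import numpy as np

def region_covariance(gray_region):
    """Summarize a grayscale region by the covariance of per-pixel features."""
    h, w = gray_region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(gray_region.astype(float))
    feats = np.stack([gray_region.ravel(), gx.ravel(), gy.ravel(),
                      xx.ravel(), yy.ravel()], axis=1)
    return np.cov(feats, rowvar=False)             # 5x5 descriptor

patch = np.random.default_rng(3).integers(0, 255, size=(16, 16))
print(region_covariance(patch).shape)               # (5, 5)
```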
2018, 29(9):2632-2648. DOI: 10.13328/j.cnki.jos.005399
Abstract:Software reliability is one of the key factors in system design, development, and operation. Unlike most current research on software reliability allocation, which is limited to series-parallel models, this paper applies an effective optimization algorithm to reliability allocation for large, complex software. The estimation of distribution algorithm (EDA) has a fast convergence rate and strong global search capability but is easily trapped in local optima, whereas differential evolution (DE) has good local search capability but slower convergence. To address this issue, a new penalty-guided hybrid estimation of distribution and self-adaptive crossover differential evolution algorithm (PHEDA-SCDE) is proposed in this paper. PHEDA-SCDE has a fast convergence rate and strong global search capability, and is not easily trapped in local optima. In addition, software reliability is estimated based on four specific architecture styles: sequential, parallel, loop, and fault-tolerant. To demonstrate the generality of the algorithm, experiments are carried out on three numerical examples covering single-input/single-output, single-input/multiple-output, and multiple-input/multiple-output systems. The experimental results show that PHEDA-SCDE is feasible and efficient in reliability allocation compared with similar algorithms.
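As background for the architecture-based estimation mentioned above, the two simplest styles have the standard textbook reliability expressions below; these are general formulas, not the paper's complete model of all four styles.

```latex
% Textbook reliability of the two simplest architecture styles,
% given component reliabilities R_1, ..., R_n:
R_{\mathrm{sequential}} = \prod_{i=1}^{n} R_i, \qquad
R_{\mathrm{parallel}} = 1 - \prod_{i=1}^{n} \left(1 - R_i\right)
```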
CHEN Huang-Ke , WU Guo-Hua , HUO Li-Su , QI Yu-Tao
2018, 29(9):2649-2663. DOI: 10.13328/j.cnki.jos.005278
Abstract:Multiobjective evolutionary algorithms have been applied widely in various fields and have become one of the most attractive topics in the optimization area. This paper analyzes the deficiency of traditional multiobjective evolutionary algorithms in maintaining population diversity and proposes an objective space division based adaptive multiobjective evolutionary algorithm (SDA-MOEA) for multiobjective optimization problems. The proposed algorithm divides the objective space of a multiobjective optimization problem into a series of subspaces. During the evolution process, each subspace in SDA-MOEA maintains a set of non-dominated solutions to guarantee population diversity. Besides, SDA-MOEA adaptively distributes evolutionary opportunities to each subspace according to its forward distance, which promotes population convergence. Finally, 14 multiobjective problems from three groups are selected to measure the performance of SDA-MOEA. Comparison with five existing multiobjective evolutionary algorithms demonstrates that SDA-MOEA shows obvious superiority over these algorithms on 10 of the problems.
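One simple way to realize the objective-space-division idea is to split the normalized objective space with a set of direction vectors and assign each solution to the subspace whose direction it is closest to. The two-objective directions and the cosine-based assignment below are illustrative assumptions, not SDA-MOEA's exact division or its forward-distance rule.

```python
# Hedged sketch of objective-space division for a 2-objective minimization
# problem (illustrative; not the paper's exact scheme).
import numpy as np

def assign_subspaces(objs, n_sub):
    """objs: (N, 2) objective values; returns a subspace index per solution."""
    angles = np.linspace(0, np.pi / 2, n_sub)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit directions
    normed = (objs - objs.min(0)) / (objs.max(0) - objs.min(0) + 1e-12)
    cos = (normed @ dirs.T) / (np.linalg.norm(normed, axis=1, keepdims=True) + 1e-12)
    return cos.argmax(axis=1)

objs = np.random.default_rng(4).random((20, 2))
print(np.bincount(assign_subspaces(objs, 5), minlength=5))      # solutions per subspace
```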
LIU Jie-Fang , JIANG Yi-Zhang , WANG Jun , DENG Zhao-Hong , WANG Shi-Tong
2018, 29(9):2664-2680. DOI: 10.13328/j.cnki.jos.005265
Abstract:Based on the maximum a posteriori (MAP) principle and the Bayesian framework, the recently proposed Bayesian fuzzy clustering (BFC) method exhibits promising characteristics in estimating the number of clusters and finding the globally optimal clustering solution, because it effectively combines the advantages of both probability theory and fuzzy theory. However, owing to its high computational burden, BFC is impractical for large-scale datasets. In this paper, to circumvent this drawback, a weighted Bayesian fuzzy clustering (WBFC) algorithm is first proposed by introducing a weighting mechanism into BFC. Then, a fast single-pass Bayesian fuzzy clustering (SPBFC) algorithm is developed by combining WBFC with a single-pass clustering framework. Theoretical analysis of convergence and time complexity is also given. The experimental results show that SPBFC not only inherits the promising characteristics of BFC but also converges quickly on large-scale datasets.
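The general single-pass framework referred to above processes data chunk by chunk, clustering each chunk together with the weighted cluster centers carried over from earlier chunks so that only centers and weights need to stay in memory. The sketch below shows that skeleton with a weighted k-means step standing in for the weighted Bayesian fuzzy step; it is an assumed structure, not the SPBFC algorithm itself.

```python
# Generic single-pass clustering skeleton with weighted carry-over centers
# (illustrative; a weighted k-means stands in for WBFC).
import numpy as np
from sklearn.cluster import KMeans

def single_pass_cluster(chunks, k):
    centers = np.empty((0, chunks[0].shape[1]))
    weights = np.empty(0)
    for chunk in chunks:
        data = np.vstack([chunk, centers])                      # chunk + old centers
        w = np.concatenate([np.ones(len(chunk)), weights])
        km = KMeans(n_clusters=k, n_init=10).fit(data, sample_weight=w)
        centers = km.cluster_centers_
        weights = np.array([w[km.labels_ == j].sum() for j in range(k)])
    return centers, weights

rng = np.random.default_rng(5)
chunks = [rng.normal(loc=m, size=(200, 2)) for m in (0, 3, 6)]
print(single_pass_cluster(chunks, k=3)[0])
```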
WU Bin , LOU Zheng-Zheng , YE Yang-Dong
2018, 29(9):2681-2696. DOI: 10.13328/j.cnki.jos.005274
Abstract:Recommender systems have been successfully adopted as an effective tool to alleviate information overload and assist users in making decisions. Recently, it has been demonstrated that incorporating social relationships into recommendation models can enhance recommendation performance. Despite this remarkable progress, the majority of social recommendation models have overlooked item relations, a key factor that can also significantly influence recommendation performance. In this paper, an approach is first proposed to acquire item relations by measuring correlations among items. Then, a co-regularized recommendation model is put forward to integrate the item relations with social relationships by introducing a co-regularization term into the matrix factorization model. It is further shown that the co-regularization term is a special case of the weighted atomic norm. Finally, a recommendation algorithm named CRMF is constructed based on the proposed model. CRMF is compared with existing state-of-the-art recommendation algorithms on four real-world datasets. The experimental results demonstrate that CRMF not only effectively alleviates the user cold-start problem but also yields more accurate rating predictions for various users.
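The general shape of such a model is a matrix factorization loss augmented with one regularizer over user social links and one over item relations. The squared-difference graph regularizers in the sketch below are a common choice assumed for illustration; they are not necessarily the paper's co-regularization term.

```python
# Hedged sketch of matrix factorization with social- and item-graph
# regularizers (illustrative objective, not CRMF's exact formulation).
import numpy as np

def loss(U, V, ratings, social, item_rel, lam=0.1, alpha=0.1, beta=0.1):
    """ratings: list of (user, item, rating); social/item_rel: index pairs."""
    err = sum((r - U[u] @ V[i]) ** 2 for u, i, r in ratings)    # fit observed ratings
    reg = lam * (np.sum(U ** 2) + np.sum(V ** 2))               # weight decay
    soc = alpha * sum(np.sum((U[a] - U[b]) ** 2) for a, b in social)    # social links
    rel = beta * sum(np.sum((V[a] - V[b]) ** 2) for a, b in item_rel)   # item relations
    return err + reg + soc + rel

rng = np.random.default_rng(6)
U, V = rng.normal(size=(5, 4)), rng.normal(size=(8, 4))
print(loss(U, V, [(0, 1, 4.0), (2, 3, 2.5)], [(0, 2)], [(1, 3)]))
```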
HU Qiang , REN Zhi-Kao , ZHAO Zhen , DU Jun-Wei , DU Yu-Yue
2018, 29(9):2697-2715. DOI: 10.13328/j.cnki.jos.005279
Abstract:Structure evolution is an efficient way for service processes to perform process reconstruction: it can make good use of existing service processes to build a new value-added service process. However, traditional research on service evolution focuses mostly on compatible substitution of individual component services or interface parameters in service processes, and the structure evolution operations in existing theoretical methods are too simple to cope with complex evolutionary requirements. To solve these problems, this paper proposes a formal method for the structure evolution of service processes based on logic Petri nets. A service process is modeled as a service net based on a logic Petri net, and several structure evolution operations are defined to deal with different evolutionary requirements. A structure normal form is introduced to evaluate the soundness of service processes, and the preservation of soundness under evolution is investigated using the structure analysis and validation methods of Petri nets. Finally, a framework for customization of service processes based on structure evolution is proposed, and a simulation platform on which the evolution operations can be performed is designed to illustrate the effectiveness of the proposed method.
2018, 29(9):2716-2732. DOI: 10.13328/j.cnki.jos.005595
Abstract:The emergence of digital currency is seen as another major revolution in the form of currency, and digital currency is expected to become the main currency and an important financial infrastructure in the era of the digital economy. It is imperative for central banks to promote the issuance of central bank digital currency (CBDC). Based on the initial achievements of the research on digital fiat currency (DFC) conducted by the People's Bank of China, this paper explores a closed loop encompassing DFC issuance, transfer, and return in the "central bank-commercial banks" binary model. Moreover, the paper designs the encrypted character string of the CBDC, the issuance/return mechanism based on a 1:1 exchange ratio between deposit reserves and DFC, and the conversion mechanism of the CBDC during transfer. The paper goes on to explore the overall architecture, system architecture, and technical architecture of the prototype and discusses possible ways to improve the usability of distributed ledger technology (DLT) through its partial application. Finally, this paper explores data analysis of CBDC under the prerequisite of consumer privacy protection, and concludes that the distributed ledger consensus and the SM2 algorithm are key factors influencing the performance of CBDC transfer. Some suggestions for further improvement are also offered.
HUANG Mei-Gen , HUANG Yi-Cai , YU Bin , ZHOU Wei-Wei
2018, 29(9):2733-2752. DOI: 10.13328/j.cnki.jos.005605
Abstract:This paper studies the problems of distributed wireless sensor networks in heterogeneous interconnection and resource management, and analyzes the necessity of combining software-defined networking with wireless sensor networks. After surveying a large number of software-defined wireless sensor network architectures, a general architecture is proposed in which the application plane, control plane, and data plane are described in detail. In addition, the existing challenges and key technologies are laid out from four aspects: heterogeneous interconnection, resource management, reliable control, and network security. Furthermore, the advantages and prospects of software-defined wireless sensor networks are illustrated through case comparisons, and future research directions are outlined.
WU Peng-Fei , SHEN Qing-Ni , QIN Jia , QIAN Wen-Jun , LI Cong , WU Zhong-Hai
2018, 29(9):2753-2777. DOI: 10.13328/j.cnki.jos.005591
Abstract:With the development of cloud computing and big data technology, privacy protection is drawing increasing attention. Data encryption is a common way to protect data privacy, but encryption alone cannot resist all types of attacks. An adversary can observe the access pattern, i.e., how users access the data, to infer private information including the importance of the data, the relevance between data items, and even the plaintext of the encrypted data. Oblivious RAM (ORAM) is an important method to protect the access pattern, including access operations and access locations, by obscuring each actual access so that the adversary cannot distinguish it from a random one. ORAM plays an important role in designing secure cloud storage systems and in secure computation: it reduces the possibility of the adversary inferring private information through the access pattern and shrinks the attack surface of the system, so as to provide a safer and more complete service. This paper surveys the research and application settings of ORAM, introducing the relevant concepts of the model as well as its design methods, with a focus on analyzing and summarizing common strategies for optimizing the model together with their advantages and disadvantages, including optimizations of the amortized and worst-case bandwidth between client and server, reduction of storage overhead, and reduction of round trips. Moreover, this paper discusses general issues in using ORAM to design secure storage systems, including data integrity and concurrent access by multiple clients, as well as issues in using ORAM for secure computation, including the design of secure computation protocols and oblivious data structures. Finally, it concludes with future research directions for ORAM.
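The textbook baseline that motivates all practical ORAM constructions is the trivial linear-scan ORAM: for every logical access the client touches and re-encrypts every block, so the server learns nothing about which block was really used. The sketch below shows only that baseline (it assumes the third-party cryptography package); real ORAM schemes surveyed in the paper achieve the same obliviousness with polylogarithmic overhead.

```python
# Trivial linear-scan ORAM sketch (textbook baseline, not a scheme from
# the survey). Requires the 'cryptography' package for authenticated encryption.
from cryptography.fernet import Fernet

class TrivialORAM:
    def __init__(self, n):
        self.f = Fernet(Fernet.generate_key())                  # client-side key
        self.server = [self.f.encrypt(b"\x00") for _ in range(n)]  # untrusted store

    def access(self, op, index, data=b""):
        result = b""
        for i in range(len(self.server)):                       # scan every block
            plain = self.f.decrypt(self.server[i])
            if i == index:
                result = plain
                if op == "write":
                    plain = data
            self.server[i] = self.f.encrypt(plain)              # fresh ciphertext everywhere
        return result

oram = TrivialORAM(8)
oram.access("write", 3, b"secret")
print(oram.access("read", 3))
```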
WANG Juan , FAN Cheng-Yang , CHENG Yue-Qiang , ZHAO Bo , WEI Tao , YAN Fei , ZHANG Huan-Guo , MA Jing
2018, 29(9):2778-2798. DOI: 10.13328/j.cnki.jos.005594
Abstract:Security is an essential requirement for cloud computing. However, how to protect critical applications and data in cloud computing and prevent platform administrators from violating user privacy is still an unsolved problem. In 2013, Intel proposed SGX, a new processor security technology that provides trusted zones (enclaves) on a computing platform to ensure the confidentiality and integrity of key user code and data. As a major research advance in the field of system security, SGX is of great significance for system security, especially for the security protection of cloud computing. This paper introduces the mechanisms and properties of SGX, analyzes its key principles and technologies, and presents side-channel attacks against SGX together with corresponding defenses. Meanwhile, the paper surveys the state of the art of SGX and compares it with other trusted computing technologies. Finally, the research challenges and future application requirements of SGX are discussed.
ZHOU Yu-Yang , CHENG Guang , GUO Chun-Sheng , DAI Mian
2018, 29(9):2799-2820. DOI: 10.13328/j.cnki.jos.005597
Abstract:As a dynamic and active defense technology, moving target defense can defeat attacks by constantly shifting the attack surface and reducing the static, homogeneous, and deterministic nature of the system. With the continuous development and change of network attacks, in-depth study of moving target defense is of great significance to China's cyberspace security. As a key problem in the moving target defense field, attack surface dynamic transfer technology has attracted wide attention from researchers. Dynamic transfer technology takes advantage of uncertainty, dynamicity, and randomness to realize dynamic defense of information systems and effectively overcome the determinism, staticity, and homogeneity of traditional defenses. In this paper, the basic concept of the attack surface is first laid out, and the formal definitions of the attack surface and attack surface transfer are explained. Then, attack surface dynamic transfer technologies are introduced from four aspects: the data attack surface, software attack surface, network attack surface, and platform attack surface. Furthermore, different dynamic transfer techniques are analyzed and compared, and their advantages and shortcomings are pointed out. Finally, possible future research directions of attack surface dynamic transfer technology are discussed, with emphasis on the integration of multi-level attack surface dynamic transfer technologies, comprehensive evaluation methods for attack surface dynamic transfer, perception-based dynamic transfer of the attack surface, and attack surface transfer decision-making based on a three-party game model.
ZHAO Yan-Min , LIU Yu , WANG Mei-Qin
2018, 29(9):2821-2828. DOI: 10.13328/j.cnki.jos.005271
Abstract:For years, many cryptanalysts have been devoted to analyzing the security of block ciphers against differential and linear attacks, so there are copious methods for cryptanalyzing a block cipher with differential and linear cryptanalysis. A method originally proposed by Achiya Bar-On et al. enables attackers to analyze more rounds of a partial SPN network in differential and linear cryptanalysis. The method involves two auxiliary matrices, which makes it possible to exploit more constraints on differences to sieve out inappropriate pairs. In this paper, the method is applied to SMS4 in the setting of multiple differential cryptanalysis. By utilizing the 214 existing 19-round differential characteristics, the paper carries out a 23-round key-recovery attack on SMS4 with lower data and memory complexities than previous multiple differential attack results on 23-round SMS4, namely, 2^113.5 chosen plaintexts and 2^17 bytes of memory, at a success probability of 0.9. The attack presented in the paper can recover the 128-bit key within 2^126.7 equivalent 23-round encryptions.
LI Feng , SI Ya-Li , CHEN Zhen , LU Ning , SHEN Li-Min
2018, 29(9):2829-2843. DOI: 10.13328/j.cnki.jos.005273
Abstract:This paper proposes a trust-mechanism-based secure opportunistic routing decision method (TOR). In this scheme, every node locally maintains a trust vector to record the trust degree of other nodes, which indicates their ability to carry and forward messages. Using a layered coin model and a digital signature mechanism, forwarding evidence signed by the relay nodes is dynamically bound to the message packet during the relay process, so the message carries an evidence chain to the destination node. Each node periodically broadcasts its signed and time-stamped trust vector to the network by flooding. Through multiple iterations, a read-only trust routing table (TRT) composed of multidimensional row vectors is built at every node, which becomes the key to selecting the next-hop relay node and dividing the number of message copies. The node with the greater trust degree is taken as the next-hop relay node, so the message is delivered to the destination along the direction of increasing trust gradient. Simulation results show that, compared with existing algorithms, the TOR algorithm can resist the network-destroying behavior of malicious and selfish nodes, achieving a higher delivery probability and lower average delivery delay while requiring only a small buffer and little computing power at each node.
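The trust-gradient forwarding rule can be illustrated with a very small sketch: a carrier hands the message to an encountered neighbor only if that neighbor's recorded trust toward the destination exceeds its own, and trust is raised when verified forwarding evidence arrives. The data layout and the fixed reward increment below are illustrative assumptions, not the paper's exact TOR update rules.

```python
# Hedged sketch of trust-gradient forwarding and trust updating
# (illustrative; not the exact TOR algorithm).
def should_forward(trust, carrier, neighbor, destination):
    """trust[x][d]: node x's recorded ability to deliver messages toward d."""
    return trust[neighbor][destination] > trust[carrier][destination]

def update_trust(trust, node, destination, reward=0.1):
    """Raise a node's trust toward a destination after verified forwarding evidence."""
    trust[node][destination] = min(1.0, trust[node][destination] + reward)

trust = {"A": {"D": 0.2}, "B": {"D": 0.6}}
print(should_forward(trust, carrier="A", neighbor="B", destination="D"))  # True
update_trust(trust, "B", "D")
print(trust["B"]["D"])
```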
HU Wen-Bin , QIU Zhen-Yu , NIE Cong , WANG Huan , YAN Li-Ping , DU Bo
2018, 29(9):2844-2860. DOI: 10.13328/j.cnki.jos.005277
Abstract:With the rapid development of mobile networks and the great increase in the computing ability of mobile devices, a huge number of people tend to obtain information through mobile networks, which poses new challenges for real-time on-demand data broadcasting: (1) the data types and sizes are diverse; (2) the real-time characteristics and demand diversity of user requests greatly increase the volume of hot-spot data (the most frequently accessed data) and the volume of broadcast data; (3) users' demands for high service quality become stronger. Current research has focused on fixed-channel models and algorithms, ignoring changes in the real-time data broadcast environment. The problems of fixed-channel models are as follows: (1) they are limited to specific networks with fixed channel models, which lack generality; (2) the size and number of channels cannot be adjusted automatically as the broadcast environment changes. This paper studies the possibility of an automatic channel split and allocation method that can adapt to the environment, and proposes an optimized channel split method (OCSM), which can adjust the bandwidth and number of broadcast channels according to the different characteristics of real-time requests. The method includes the following algorithms: (1) a weight average and size cluster algorithm (WASC) for mining data characteristics; (2) a weight evaluating algorithm (R×W/SL) for evaluating the priority of each data item; (3) a channel split algorithm (CSA) for channel splitting. The experiments undertaken in this study cover two aspects: (1) determining the strategies under different data size distributions and deadline distributions; (2) verifying the validity of OCSM in different situations through a series of experiments. The results reveal that significantly better performance can be obtained by using OCSM rather than other state-of-the-art scheduling algorithms.
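As one possible reading of an R×W/SL-style priority for on-demand broadcast, the sketch below combines the number of pending requests (R), the longest waiting time (W), and the slack before the earliest deadline (SL). This interpretation, the field names, and the data are all assumptions for illustration; the exact definition is given in the paper.

```python
# Hedged reading of an R*W/SL-style broadcast priority (illustrative only).
def priority(item, now):
    """Higher value = broadcast sooner. item['requests'] holds pending requests."""
    R = len(item["requests"])                                        # pending requests
    W = now - min(r["arrival"] for r in item["requests"])            # longest wait
    SL = max(min(r["deadline"] for r in item["requests"]) - now, 1e-6)  # slack
    return R * W / SL

now = 100.0
items = {
    "a": {"requests": [{"arrival": 90, "deadline": 105}]},
    "b": {"requests": [{"arrival": 95, "deadline": 101},
                       {"arrival": 97, "deadline": 110}]},
}
print(max(items, key=lambda k: priority(items[k], now)))   # item to broadcast next
```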
SHI Tai-Rong , GUAN Jie , LI Jun-Zhi , WANG Sen-Peng
2018, 29(9):2861-2873. DOI: 10.13328/j.cnki.jos.005282
Abstract:MORUS is an authenticated cipher designed by H. Wu et al. and a third-round CAESAR candidate. Using a fault model, this paper analyzes the diffusion property of MORUS. Under a bit-oriented random fault model, the search algorithm for differential chains of MORUS is improved by combining differential analysis with the meet-in-the-middle technique. Through this algorithm, a 5-step differential chain with probability 2^-85 is discovered. A differential distinguishing attack on the initialization of the 5-step reduced version of MORUS-640-128 is proposed, with a data complexity of 2^89 and a distinguishing advantage of 0.99965. Using differential fault analysis, a forgery attack on the authentication of 3-step MORUS-640-128 is constructed.
2018, 29(9):2874-2895. DOI: 10.13328/j.cnki.jos.005264
Abstract:In cloud computing, how to prove the trustworthiness of a virtual platform is a hot research problem. A virtual platform comprises the virtual machine manager running on the physical platform and the virtual machines, which are distinct logical entities with hierarchy and dynamics. Existing trusted computing remote attestation schemes, such as the privacy certification authority (PCA) scheme and the direct anonymous attestation (DAA) scheme, cannot be directly applied to trusted virtual platforms. Moreover, the remote attestation scheme for trusted virtual platforms in the TCG virtualized trusted platform architecture specification is only a framework without a concrete implementation plan. To address these issues, this paper proposes a top-down remote attestation scheme for trusted virtual platforms, called TVP-PCA. The scheme designs and implements an attestation agent in the top-level virtual machine and an attestation service in the underlying virtual machine manager. With this approach, a challenger first uses the top-level agent to prove that the virtual machine is trusted, and then uses the underlying service to prove that the virtual machine manager is trusted; together, the two attestations ensure the trustworthiness of the entire virtual platform. The paper also effectively solves the identity problem between the top-level and underlying attestations. Experiments show that the scheme can prove the trustworthiness not only of the virtual machine but also of the virtual machine manager and the physical platform, thus establishing that the virtual platform in cloud computing is trusted.