• Volume 29, Issue 9, 2018 Table of Contents
    • Special Issue's Articles
    • Research on Improvements of Feature Selection Using Forest Optimization Algorithm

      2018, 29(9):2547-2558. DOI: 10.13328/j.cnki.jos.005395

      Abstract:In classification, feature selection has long been an important but difficult problem. Recent research showed that feature selection using the forest optimization algorithm (FSFOA) achieves good classification performance and dimensionality reduction. However, the randomness of the initialization phase, the limitations of the updating mechanism, and the inferior quality of the new trees generated in the local seeding stage severely limit the classification performance and dimensionality reduction ability of the algorithm. In this paper, a new initialization strategy and updating mechanism are adopted, and a greedy strategy is added to the local seeding stage, forming a new feature selection algorithm (IFSFOA) that maximizes classification performance while minimizing the number of selected features. In the experiments, IFSFOA uses SVM, J48 and KNN classifiers to guide the learning process and is tested on datasets from the UCI machine learning repository. The results show that, compared with FSFOA, IFSFOA achieves significant improvements in classification performance and dimensionality reduction. Compared with efficient feature selection methods proposed in recent years, IFSFOA remains competitive in both accuracy and dimensionality reduction.
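To make the greedy flavor of the local seeding stage concrete, here is a minimal sketch (an illustration of the general idea only, not IFSFOA itself; the function names are assumptions): forward/backward selection that flips one feature bit at a time and keeps a flip only when a caller-supplied score improves.

```python
def greedy_improve(mask, score):
    """Flip each feature bit in turn; keep a flip only if score improves."""
    best = score(mask)
    for i in range(len(mask)):
        trial = mask[:]
        trial[i] ^= 1            # toggle inclusion of feature i
        s = score(trial)
        if s > best:
            mask, best = trial, s
    return mask, best

# toy score: agreement with a "true" relevant-feature mask
target = [1, 0, 1, 0]
score = lambda m: sum(a == b for a, b in zip(m, target))
mask, best = greedy_improve([0, 0, 0, 0], score)
```

In a real wrapper method the score would be cross-validated classifier accuracy, possibly penalized by subset size.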

    • Survey on Stability of Feature Selection

      2018, 29(9):2559-2579. DOI: 10.13328/j.cnki.jos.005394

      Abstract:With the development of big data and the wide application of machine learning, data from all walks of life is growing massively. High dimensionality is one of its most important characteristics, and applying feature selection to reduce dimensionality is one of the preprocessing steps for high-dimensional data. The stability of feature selection is an important research direction: it stands for the robustness of the selection results with respect to small changes in the dataset composition. Improving stability can help identify the truly relevant features, increase experts' confidence in the results, and further reduce the complexity and cost of acquiring the original data. This paper reviews current methods for improving stability and presents a classification of those methods, with analysis and comparison of the characteristics and range of application of each category. It then summarizes evaluation measures for the stability of feature selection, analyzes the performance of these stability measures, and validates the effectiveness of four ensemble approaches through experiments. Finally, it discusses the limitations of current work and offers a perspective on future work in this research area.
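One widely used stability measure of the kind the abstract refers to is the Kuncheva index, which corrects the overlap of two equal-size feature subsets for chance agreement. A minimal sketch (illustrative; the survey itself covers many more measures):

```python
from itertools import combinations

def kuncheva_index(subset_a, subset_b, n_features):
    """Chance-corrected consistency of two equal-size feature subsets."""
    k = len(subset_a)
    r = len(set(subset_a) & set(subset_b))   # observed overlap
    expected = k * k / n_features            # overlap expected by chance
    return (r - expected) / (k - expected)   # 1 = identical, <0 = worse than chance

def average_stability(subsets, n_features):
    """Mean pairwise Kuncheva index over subsets from perturbed datasets."""
    pairs = list(combinations(subsets, 2))
    return sum(kuncheva_index(a, b, n_features) for a, b in pairs) / len(pairs)
```

Identical selections across perturbed samples score 1.0; disjoint selections score below 0.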

    • Design and Applications of Discrete Evolutionary Algorithm Based on Encoding Transformation

      2018, 29(9):2580-2594. DOI: 10.13328/j.cnki.jos.005400

      Abstract:In order to solve combinatorial optimization problems in discrete domains with evolutionary algorithms, this study draws on the design concepts of the genetic algorithm (GA), binary particle swarm optimization (BPSO) and hybrid-encoding binary differential evolution (HBDE) to propose a simple and practical method for designing discrete evolutionary algorithms (DisEA) based on the idea of mapping transformation, named the encoding transformation method (ETM). To illustrate the practicability of ETM, a discrete particle swarm optimization algorithm (DisPSO) based on ETM is presented. To show the feasibility and effectiveness of ETM, GA, BPSO, HBDE and DisPSO are used to solve the set-union knapsack problem (SUKP) and the discounted {0-1} knapsack problem (D{0-1}KP), respectively. The results show that, for both SUKP and D{0-1}KP, the ETM-based discrete evolutionary algorithms, i.e. BPSO, HBDE and DisPSO, outperform GA. This indicates that designing a DisEA with ETM is not only feasible but also practical and efficient.
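The mapping-transformation idea can be sketched with the sigmoid transfer function that BPSO popularized (a hedged illustration of the general encoding-transformation concept, not the paper's exact ETM): a real-valued individual evolved in continuous space is mapped to a 0/1 solution for fitness evaluation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def to_binary(real_vector, rng):
    """Map each real component to a bit: P(bit = 1) = sigmoid(x_i)."""
    return [1 if rng.random() < sigmoid(x) else 0 for x in real_vector]

rng = random.Random(1)
bits = to_binary([6.0, -6.0, 0.0], rng)  # strongly biased toward [1, 0, ?]
```

Any continuous optimizer (PSO, DE) can then run unchanged, with only the evaluation step passing through such a mapping.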

    • Dynamic Multi-Swarm Particle Swarm Optimization with Cooperative Coevolution for Large Scale Global Optimization

      2018, 29(9):2595-2605. DOI: 10.13328/j.cnki.jos.005398

      Abstract:With the development of engineering technology and the refinement of mathematical models, a large number of optimization problems have evolved from low-dimensional optimization into large-scale complex optimization. Large-scale global optimization is an active research topic in real-parameter optimization. Based on an analysis of the characteristics of large-scale problems, a stochastic dynamic cooperative coevolution strategy is proposed in this article and incorporated into the dynamic multi-swarm particle swarm optimization algorithm, realizing a dual grouping of both the population and the decision variables. The performance of the novel algorithm is then reported on the benchmark function set provided for the CEC2013 special session on large-scale optimization, and its validity is verified by comparison with other algorithms.
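A standard ingredient of cooperative coevolution for large-scale problems, and a hedged sketch of the decision-variable grouping mentioned above (not the paper's exact strategy), is random regrouping: the variables are re-partitioned into subcomponents each cycle, so interacting variables occasionally land in the same group.

```python
import random

def random_grouping(n_vars, group_size, rng):
    """Partition variable indices into random groups of `group_size`."""
    idx = list(range(n_vars))
    rng.shuffle(idx)
    return [idx[i:i + group_size] for i in range(0, n_vars, group_size)]

rng = random.Random(0)
groups = random_grouping(1000, 100, rng)  # 1000-D problem, 10 subcomponents
```

Each subcomponent is then optimized by its own (sub)swarm while the other variables are held at the best-known values.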

    • Evolutionary Algorithm for Single-Objective Optimization Based on Neighborhood Difference and Covariance Information

      2018, 29(9):2606-2615. DOI: 10.13328/j.cnki.jos.005397

      Abstract:Complex single-objective optimization is a hot topic in the field of evolutionary computation. Differential evolution and covariance-based evolution are considered two of the most effective algorithms for this problem, as difference information, similar to a gradient, can effectively guide the algorithm toward the optimal solution, while covariance uses statistics to generate the offspring population. In this paper, covariance information is introduced to improve the difference operator, and an evolutionary algorithm based on neighborhood difference and covariance information (DEA/NC) is proposed for complex single-objective optimization problems. The two commonly used difference operators, one built from randomly selected individuals and one combined with the current optimal solution, are analyzed. With the first approach, when the Euclidean distance between the randomly selected individuals is large, the difference information cannot serve as local gradient information to guide the search. The second approach makes the population search in the direction of the current optimal solution, which quickly traps the population in a local optimum. This paper therefore proposes a neighborhood difference method that improves the effectiveness of the difference operator while avoiding the loss of population diversity. In addition, covariance is introduced to measure the correlation between decision variables, and the correlation is used to optimize the difference operator. Finally, the algorithm is tested on the CEC2014 single-objective optimization problems and compared with existing differential evolution algorithms. The experimental results show the effectiveness of the proposed algorithm.
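A minimal sketch of a neighborhood-restricted difference mutation in the spirit the abstract describes (names and details are assumptions, not DEA/NC's exact operator): the difference vector is built from the k nearest neighbors of the base individual, so it behaves like local gradient information rather than a random long-range difference.

```python
import random

def neighborhood_difference(pop, i, k, f, rng):
    """Mutant = x_i + F * (x_a - x_b), with a and b drawn from the k
    nearest neighbors of x_i by Euclidean distance."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    order = sorted((j for j in range(len(pop)) if j != i),
                   key=lambda j: dist2(pop[i], pop[j]))
    a, b = rng.sample(order[:k], 2)
    return [x + f * (pa - pb) for x, pa, pb in zip(pop[i], pop[a], pop[b])]

pop = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
mutant = neighborhood_difference(pop, 0, 2, 0.5, random.Random(0))
```

With the neighborhood restricted to nearby individuals, the far-away outlier at (5, 5) cannot dominate the difference vector.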

    • Robust Salient Object Detection Model for Nighttime Images

      2018, 29(9):2616-2631. DOI: 10.13328/j.cnki.jos.005396

      Abstract:Saliency models based on human visual attention are an effective way to actively perceive important information in an image, and they can play an important role in exploring large-scale perceptual information organization in the early stage of the visual cognitive process. However, nighttime images usually have low signal-to-noise ratio and low contrast, so conventional salient object detection models face great challenges in this scenario, such as noise and blurred weak textures. This paper proposes a salient object detection model for nighttime images based on region covariance and global search. The input image is first divided into superpixel regions, and their covariance is estimated. Then, covariance-feature-based local saliency and global-search-based image saliency are calculated respectively. Finally, a graph-based diffusion process is performed to refine the saliency maps. Extensive experiments have been conducted to evaluate the performance of the proposed method against eleven state-of-the-art models on five benchmark datasets and a nighttime image dataset.
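The region covariance descriptor at the heart of the local saliency term can be sketched as follows (an illustration under the usual definition, not the paper's implementation): each region is summarized by the sample covariance of its per-pixel feature vectors, e.g. intensity and gradients, and regions are then compared via these matrices.

```python
def region_covariance(features):
    """Sample covariance matrix of d-dimensional per-pixel feature vectors."""
    n, d = len(features), len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        c = [f[j] - mean[j] for j in range(d)]  # centered vector
        for a in range(d):
            for b in range(d):
                cov[a][b] += c[a] * c[b] / (n - 1)
    return cov

# two pixels with (intensity, gradient) features, perfectly correlated
cov = region_covariance([[0.0, 0.0], [2.0, 2.0]])
```

The covariance captures feature variability inside a region while being robust to illumination offsets, which is what makes it attractive for low-light imagery.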

    • Complex Software Reliability Allocation Based on Hybrid Intelligent Optimization Algorithm

      2018, 29(9):2632-2648. DOI: 10.13328/j.cnki.jos.005399

      Abstract:Software reliability is one of the key factors in system design, development and operation. Unlike most current research on software reliability allocation, which is limited to series-parallel models, this paper applies an effective optimization algorithm to reliability allocation for large, complex software. Estimation of distribution algorithms (EDA) have a fast convergence rate and strong global search capability but are easily trapped in local optima, while differential evolution (DE) has good local search capability but slower convergence. To address this, a new penalty-guided hybrid estimation of distribution and self-adaptive crossover differential evolution algorithm (PHEDA-SCDE) is proposed. PHEDA-SCDE converges quickly, has strong global search capability, and is not easily trapped in local optima. In addition, software reliability is estimated for four specific architecture styles: sequential, parallel, circulation and fault-tolerant. To demonstrate the generality of the algorithm, experiments are carried out on three numerical examples: a single-input/single-output system, a single-input/multiple-output system and a multiple-input/multiple-output system. The experimental results show that PHEDA-SCDE is feasible and efficient in reliability allocation compared with similar algorithms.
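As background for how reliability composes under two of the architecture styles mentioned, here is a textbook sketch (standard formulas, not taken from the paper): a sequential chain works only if every stage works, while parallel redundancy fails only if all replicas fail. An allocation search must evaluate such compositions repeatedly.

```python
from functools import reduce

def series_reliability(rs):
    """Sequential composition: product of component reliabilities."""
    return reduce(lambda acc, r: acc * r, rs, 1.0)

def parallel_reliability(rs):
    """Parallel redundancy: 1 minus the product of failure probabilities."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)
```

For two components of reliability 0.9, the series system achieves 0.81 while the parallel system achieves 0.99, which is why allocating redundancy is a powerful lever.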

    • Objective Space Division Based Adaptive Multiobjective Optimization Algorithm

      2018, 29(9):2649-2663. DOI: 10.13328/j.cnki.jos.005278

      Abstract:Multiobjective evolutionary algorithms have been applied widely in various fields and have become one of the most attractive topics in the optimization area. This paper analyzes the deficiency of traditional multiobjective evolutionary algorithms in maintaining population diversity and proposes an objective space division based adaptive multiobjective evolutionary algorithm (SDA-MOEA) to solve multiobjective optimization problems. The proposed algorithm divides the objective space of a multiobjective optimization problem into a series of subspaces. During the evolution process, each subspace in SDA-MOEA maintains a set of non-dominated solutions to guarantee population diversity. Besides, SDA-MOEA adaptively distributes evolutionary opportunities to each subspace according to its forward distance, which promotes population convergence. Finally, 14 multiobjective problems in three groups are selected to measure the performance of SDA-MOEA. Comparison with five existing multiobjective evolutionary algorithms demonstrates that SDA-MOEA shows obvious superiority on 10 of the problems.
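One simple way to realize an objective-space division, shown here as a hedged sketch only (the paper's actual division scheme and "forward distance" are not reproduced), is to split the normalized bi-objective space into equal angular sectors and assign each solution to a sector by the angle of its objective vector.

```python
import math

def subspace_index(f1, f2, n_subspaces):
    """Sector index for a solution with objectives (f1, f2), both >= 0."""
    angle = math.atan2(f2, f1)               # in [0, pi/2] for f1, f2 >= 0
    width = (math.pi / 2) / n_subspaces
    return min(int(angle / width), n_subspaces - 1)
```

Keeping a separate non-dominated archive per sector is then what preserves spread along the Pareto front.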

    • Single Pass Bayesian Fuzzy Clustering

      2018, 29(9):2664-2680. DOI: 10.13328/j.cnki.jos.005265

      Abstract:Based on the maximum a posteriori (MAP) principle and the Bayesian framework, the recently proposed Bayesian fuzzy clustering (BFC) method exhibits promising characteristics in estimating the number of clusters and finding the globally optimal clustering solution, since it effectively combines the advantages of both probability theory and fuzzy theory. However, its high computational burden makes BFC impractical for large-scale datasets. In this paper, to circumvent this drawback, a weighted Bayesian fuzzy clustering (WBFC) algorithm is first proposed by introducing a weighting mechanism into BFC. Then, a fast single pass Bayesian fuzzy clustering (SPBFC) algorithm is developed by combining WBFC with a single pass clustering framework. Theoretical analysis of convergence and time complexity is also given. The experimental results show that SPBFC not only inherits the promising characteristics of BFC but also converges quickly on large-scale datasets.
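The single-pass framework can be illustrated with a leader-style sketch in one dimension (a deliberately simplified stand-in; WBFC/SPBFC's Bayesian machinery is not reproduced): each streamed point either joins the nearest existing center within a radius, updating that center's weighted mean, or founds a new cluster, so the data is touched once with bounded memory.

```python
def single_pass_cluster(stream, radius):
    """Leader-style single-pass clustering of 1-D values."""
    centers, weights = [], []
    for x in stream:
        if centers:
            j = min(range(len(centers)), key=lambda k: abs(centers[k] - x))
            if abs(centers[j] - x) <= radius:
                weights[j] += 1
                centers[j] += (x - centers[j]) / weights[j]  # weighted mean
                continue
        centers.append(float(x))   # found a new cluster
        weights.append(1)
    return centers, weights

centers, weights = single_pass_cluster([0.0, 0.1, 10.0, 10.2], 1.0)
```

The carried weights are what let later chunks be clustered together with compressed summaries of earlier chunks.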

    • Co-Regularized Matrix Factorization Recommendation Algorithm

      2018, 29(9):2681-2696. DOI: 10.13328/j.cnki.jos.005274

      Abstract:Recommender systems have been successfully adopted as an effective tool to alleviate information overload and assist users in making decisions. Recently, it has been demonstrated that incorporating social relationships into recommendation models can enhance recommendation performance. Despite this remarkable progress, the majority of social recommendation models have overlooked item relations, a key factor that can also significantly influence recommendation performance. In this paper, an approach is first proposed to acquire item relations by measuring correlations among items. Then, a co-regularized recommendation model is put forward that integrates the item relations with social relationships by introducing a co-regularization term into the matrix factorization model. It is also shown that the co-regularization term is a special case of the weighted atomic norm. Finally, a recommendation algorithm named CRMF is constructed based on the proposed model. CRMF is compared with existing state-of-the-art recommendation algorithms on four real-world datasets. The experimental results demonstrate that CRMF not only effectively alleviates the user cold-start problem but also obtains more accurate rating predictions for various users.
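The co-regularization idea can be sketched as an SGD update for matrix factorization whose loss adds a term pulling the latent factors of related items toward each other (a hedged illustration; the update rule, names and hyperparameters here are assumptions, not CRMF's exact formulation).

```python
def sgd_step(P, Q, u, i, r, related, lr=0.05, lam=0.02, beta=0.05):
    """One SGD update on observed rating r for user u, item i.
    `related[i]` lists the items whose factors item i is pulled toward."""
    pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
    err = r - pred
    for f in range(len(P[u])):
        pull = sum(Q[j][f] - Q[i][f] for j in related.get(i, []))
        pu, qi = P[u][f], Q[i][f]
        P[u][f] = pu + lr * (err * qi - lam * pu)
        Q[i][f] = qi + lr * (err * pu - lam * qi + beta * pull)

P = [[0.1, 0.1]]                      # user latent factors
Q = [[0.1, 0.1], [0.1, 0.1]]          # item latent factors
related = {0: [1], 1: [0]}            # items 0 and 1 are correlated
for _ in range(500):
    sgd_step(P, Q, 0, 0, 1.0, related)  # fit observed rating R[0][0] = 1.0
pred = sum(a * b for a, b in zip(P[0], Q[0]))
```

The `beta * pull` term is what lets a cold item inherit signal from its correlated neighbors even with few ratings of its own.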

    • Study on Structure Evolution for Service Processes Based on Logic Petri Net

      2018, 29(9):2697-2715. DOI: 10.13328/j.cnki.jos.005279

      Abstract:Structure evolution is an efficient way for service processes to perform process reconstruction. It can make good use of the existing service processes to build a new value-added service process. However, the traditional research methods of service evolution focus more on compatible substitution of the partial component services or interface parameters in the service processes. Meanwhile, the operations of structure evolution are too simple in the existing theoretical methods and fail to cope with the complex evolutionary requirements. To solve these problems, a formal method is proposed to achieve structure evolution for service processes based on logic Petri net in this paper. The service process is modeled as a service net based on logic Petri net. Several structure evolution operations are defined to deal with different evolutionary requirements. Structure normal form is introduced to evaluate the soundness of service processes. Property preservation based on the soundness of service process is also investigated by using the structure analysis and validation methods of Petri net. Finally, a framework for customization of service processes based on structure evolution is proposed and the simulation platform where evolution operations can be performed is also designed to illustrate the effectiveness of the proposed method.

    • Experimental Study on Prototype System of Central Bank Digital Currency

      2018, 29(9):2716-2732. DOI: 10.13328/j.cnki.jos.005595

      Abstract:The emergence of digital currency is seen as another major revolution in the form of currency and is expected to become the main currency and important financial infrastructure in the era of digital economy. It is imperative for central banks to promote the issuance of central bank digital currency (CBDC). Based on initial achievements of research on digital fiat currency (DFC) conducted by the People's Bank of China, this paper explores a closed loop which encompasses DFC issuance, transfer and return in the "central bank-commercial banks" binary model. Moreover, the paper designs the encrypted character string of CBDC, the issuance/return mechanism based on a 1:1 exchange ratio between deposit reserve and DFC, and the conversion mechanism of CBDC during its transfer. The paper goes on to explore the overall architecture, system architecture and technical architecture of the prototype and discusses possible ways to improve the usability of distributed ledger technology (DLT) based on partial application of DLT. Finally, this paper explores data analysis of CBDC under the prerequisite of consumer privacy protection, and concludes that consensus of distributed ledger and SM2 algorithm are key factors influencing the performance of CBDC transfer. Some suggestions for further improvement are also offered.

    • Review Articles
    • Software-Defined Wireless Sensor Networks: A Research Survey

      2018, 29(9):2733-2752. DOI: 10.13328/j.cnki.jos.005605

      Abstract:In this paper, the problems of distributed wireless sensor networks in heterogeneous interconnection and resource management are studied, and the necessity of combining software-defined networking with wireless sensor networks is analyzed. After surveying a large number of software-defined wireless sensor network architectures, a general architecture is proposed in which the application plane, control plane and data plane are described in detail. In addition, the existing challenges and key technologies are laid out from four aspects: heterogeneous interconnection, resource management, reliable control and network security. Furthermore, the advantages and prospects of software-defined wireless sensor networks are illustrated via case comparison, and some future research directions are outlined.

    • Survey of Oblivious RAM

      2018, 29(9):2753-2777. DOI: 10.13328/j.cnki.jos.005591

      Abstract:With the development of cloud computing and big data technology, privacy protection draws more and more attention. Data encryption is a common way to protect data privacy, but encryption alone cannot resist all types of attacks. An adversary can observe how users access the data, i.e. the access pattern, to infer private information including the importance of the data, the relevance between data items, and even the plaintext of encrypted data. Oblivious RAM (ORAM) is an important method to protect the access pattern, including access operations and access locations, by obscuring each actual access so that the adversary cannot distinguish it from a random one. ORAM plays an important role in designing secure cloud storage systems and secure computation. It reduces the possibility of the adversary inferring private information from the access pattern and shrinks the attack surface of the system, so as to provide a safer and more complete service. This paper summarizes the research on ORAM and its application settings, introducing the relevant concepts of the model and its design methods, with a focus on analyzing and summarizing common strategies for optimizing the model along with their advantages and disadvantages, as well as optimizations of amortized and worst-case bandwidth between client and server, storage overhead reduction, and round-trip reduction. Moreover, this paper discusses general issues of ORAM in secure storage system design, including data integrity and concurrent accesses by multiple clients, and issues of ORAM in secure computation, including the design of secure computation protocols and oblivious data structures. Finally, future research directions for ORAM are summarized.
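The baseline that ORAM constructions improve on can be sketched as a linear-scan store (a standard illustration, not any scheme from the survey): every access reads and re-writes all n blocks, so the server-visible access pattern is identical regardless of which logical address is touched, at O(n) bandwidth per access, which real ORAM schemes reduce to polylogarithmic cost.

```python
class LinearScanORAM:
    """Trivially oblivious store: every access touches every block."""

    def __init__(self, n):
        self.blocks = [0] * n

    def access(self, op, addr, value=None):
        out = None
        for i in range(len(self.blocks)):
            b = self.blocks[i]                          # read every block
            if i == addr and op == "read":
                out = b
            new = value if (i == addr and op == "write") else b
            self.blocks[i] = new                        # write every block back
        return out

ram = LinearScanORAM(8)
ram.access("write", 3, 42)
```

An observer logging physical reads and writes sees the same full scan whether the client touched block 3 or block 7, which is exactly the indistinguishability property the abstract describes.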

    • Analysis and Research on SGX Technology

      2018, 29(9):2778-2798. DOI: 10.13328/j.cnki.jos.005594

      Abstract:Security is an essential requirement for cloud computing. However, how to protect critical applications and data in the cloud and prevent platform administrators from violating user privacy remains an unsolved problem. In 2013, Intel proposed SGX, a new processor security technology that provides trusted zones on a computing platform to ensure the confidentiality and integrity of key user code and data. As a major research advance in the field of system security, SGX is of great significance to system security, especially the security protection of cloud computing. In this paper, the mechanisms and properties of SGX are introduced, its key principles and technologies are analyzed, and side-channel attacks against SGX and the corresponding defenses are presented. Meanwhile, the paper surveys the state of the art of SGX and compares it with other trusted computing technologies. Finally, the research challenges and future application requirements of SGX are discussed.

    • Survey on Attack Surface Dynamic Transfer Technology Based on Moving Target Defense

      2018, 29(9):2799-2820. DOI: 10.13328/j.cnki.jos.005597

      Abstract:As a dynamic and active defense technology, moving target defense can thwart attacks by constantly shifting the attack surface, reducing the static, homogeneous and deterministic nature of the system. With the continuous development of network attacks, in-depth study of moving target defense is of great significance to China's cyberspace security. As a key problem in the moving target defense field, attack surface dynamic transfer technology has attracted wide attention from researchers. Dynamic transfer technology takes advantage of uncertainty, dynamicity and randomness to realize dynamic defense of information systems and effectively overcome the determinism, static nature and homogeneity of traditional defenses. In this paper, the basic concept of the attack surface is first laid out, and formal definitions of the attack surface and attack surface transfer are given. Then, attack surface dynamic transfer technologies are introduced from four aspects: the data attack surface, software attack surface, network attack surface and platform attack surface. Furthermore, the different dynamic transfer techniques are analyzed and compared, and their advantages and shortcomings are pointed out. Finally, possible future research directions are discussed, with emphasis on the integration of multi-level attack surface dynamic transfer technologies, comprehensive evaluation methods for attack surface dynamic transfer, perception-based dynamic transfer methods, and attack surface transfer decision-making based on a three-party game model.

    • Improved Differential Attack on 23-Round SMS4

      2018, 29(9):2821-2828. DOI: 10.13328/j.cnki.jos.005271

      Abstract:For years, many cryptanalysts have been devoted to analyzing the security of block ciphers against differential and linear attacks, so there are copious methods for cryptanalyzing a block cipher with differential and linear cryptanalysis. A method proposed by Achiya Bar-On et al. enables attackers to analyze more rounds of a partial SPN network in differential and linear cryptanalysis. The method involves two auxiliary matrices, making it possible to exploit more constraints on differences to sieve out inappropriate pairs. In this paper, the method is applied to SMS4 in the setting of multiple differential cryptanalysis. By utilizing the 2^14 existing 19-round differential characteristics, the paper carries out a 23-round key-recovery attack on SMS4 with lower data and memory complexities than previous multiple differential attacks on 23-round SMS4, namely 2^113.5 chosen plaintexts and 2^17 bytes of memory at a success probability of 0.9. The attack can recover the 128-bit key within 2^126.7 equivalent 23-round encryptions.

    • Trust-Based Security Routing Decision Method for Opportunistic Networks

      2018, 29(9):2829-2843. DOI: 10.13328/j.cnki.jos.005273

      Abstract:This paper proposes a trust-based security opportunistic routing decision method (TOR). In this scheme, every node locally maintains a trust vector recording the trust degree of other nodes, which indicates their ability to carry and forward messages. Using a layered coin model and a digital signature mechanism, forwarding evidence signed by each relay node is bound dynamically to the message packet during the relay process, so that the message carries an evidence chain to the destination node. Each node periodically floods its signed, time-stamped trust vector to the network. Through multiple iterations, a read-only trust routing table (TRT) of multidimensional row vectors is built on every node, which becomes the basis for selecting the next-hop relay node and dividing the number of message copies. The node with the greater trust degree is taken as the next-hop relay, so the message is delivered to the destination along the direction of increasing trust gradient. Simulation results show that, compared with existing algorithms, TOR can resist the network-destroying behavior of malicious and selfish nodes, achieving a higher delivery probability and lower average delivery delay while requiring only a very small buffer and little computing ability on each node.
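The two routing decisions described, picking the next hop and dividing copies, can be sketched from a trust table as follows (an illustrative simplification of the idea; the table layout and proportional split are assumptions, not TOR's full protocol).

```python
def select_relay(trust_table, neighbors, dest):
    """Pick the neighbor with the greatest recorded trust toward `dest`."""
    return max(neighbors, key=lambda n: trust_table.get((n, dest), 0.0))

def split_copies(trust_table, neighbors, dest, copies):
    """Divide message copies among neighbors in proportion to trust."""
    weights = [trust_table.get((n, dest), 0.0) for n in neighbors]
    total = sum(weights) or 1.0
    return {n: int(copies * w / total) for n, w in zip(neighbors, weights)}

trust = {("A", "D"): 0.9, ("B", "D"): 0.1}
relay = select_relay(trust, ["A", "B"], "D")
split = split_copies(trust, ["A", "B"], "D", 10)
```

Always forwarding along increasing trust is what starves low-trust (malicious or selfish) nodes of message copies.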

    • Channel Split and Allocation Method for Data Broadcast Scheduling

      2018, 29(9):2844-2860. DOI: 10.13328/j.cnki.jos.005277

      Abstract:With the rapid development of mobile networks and the great increase in the computing ability of mobile devices, a huge number of people obtain information through mobile networks, which poses new challenges for real-time on-demand data broadcasting: (1) data types and sizes are diverse; (2) the real-time characteristics and demand diversity of user requests greatly increase the volume of hot-spot data (the most frequently accessed data) and the volume of broadcast data; (3) users' demands for high service quality become stronger. Current research has focused on fixed-channel models and algorithms, ignoring changes in real-time data broadcast environments. The problems of fixed-channel models are as follows: (1) they are limited to specific networks with fixed channel models, lacking generality; (2) the size and number of channels cannot be adjusted automatically as the broadcast environment changes. This paper studies the possibility of an automatic channel split and allocation method that can adapt to the environment, and proposes an optimized channel split method (OCSM), which adjusts the bandwidth and number of broadcast channels to the different characteristics of real-time requests. The method includes the following algorithms: (1) a weight average and size cluster algorithm (WASC) for mining data characteristics; (2) a weight evaluating algorithm (R×W/SL) for evaluating the priority of data items; (3) a channel split algorithm (CSA) for channel splitting. The experiments cover two aspects: (1) determining the strategies under different data size distributions and deadline distributions; (2) verifying the validity of OCSM in different situations through a series of experiments. The results reveal that significantly better performance can be obtained with OCSM than with other state-of-the-art scheduling algorithms.

    • Research on Differential Diffusion Property of MORUS in Fault Model

      2018, 29(9):2861-2873. DOI: 10.13328/j.cnki.jos.005282

      Abstract:MORUS is a third-round CAESAR candidate authenticated cipher designed by H. Wu et al. This paper analyzes the diffusion property of MORUS in a fault model. Using a bit-oriented random fault model, the search algorithm for differential chains of MORUS is improved with differential analysis and the meet-in-the-middle technique. Through this algorithm, a 5-step differential chain with probability 2^-85 is discovered. A differential-distinguishing attack on the initialization of the 5-step reduced version of MORUS-640-128 is proposed, with data complexity 2^89 and distinguishing advantage 0.99965. Using differential fault analysis, a forgery attack on the 3-step authentication of MORUS-640-128 is constructed.

    • Research on Trusted Virtual Platform Remote Attestation Method in Cloud Computing

      2018, 29(9):2874-2895. DOI: 10.13328/j.cnki.jos.005264

      Abstract:In cloud computing, how to prove that a virtual platform is trusted is a hot problem. A virtual platform includes the virtual machine manager that runs on the physical platform and the virtual machines, which are distinct logical entities with hierarchy and dynamics. Existing trusted computing remote attestation schemes, such as the privacy certification authority (PCA) scheme and the direct anonymous attestation (DAA) scheme, cannot be directly used for a trusted virtual platform. Moreover, the remote attestation scheme in TCG's virtualized trusted platform architecture specification is only a framework without a concrete implementation plan. To address these issues, this paper proposes a top-down remote attestation scheme for trusted virtual platforms, called TVP-PCA. The scheme designs and implements an attestation agent in the top-level virtual machine and an attestation service in the underlying virtual machine manager. With this approach, a challenger first uses the top-level agent to prove that the virtual machine is trusted and then uses the underlying service to prove that the virtual machine manager is trusted; together the two attestations ensure the credibility of the entire virtual platform. The paper also effectively solves the identity problem of the top-level and underlying attestations. Experiments show that the scheme can prove the trust not only of the virtual machine but also of the virtual machine manager and the physical platform, thus establishing that the virtual platform in cloud computing is trusted.

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • E-mail: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: CNY 70
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063