• Volume 24, Issue 12, 2013 Table of Contents
    • Statically Detect and Run-Time Check Integer-Based Vulnerabilities with Information Flow

      2013, 24(12):2767-2781. DOI: 10.3724/SP.J.1001.2013.04385

      Abstract (4306) HTML (0) PDF 800.60 K (5632) Comment (0) Favorites

      Abstract:An approach to detecting integer-based vulnerabilities based on information-flow analysis is proposed in order to improve run-time performance. In this approach, only the unsafe integer operations on tainted information flow paths (those that can be controlled by users and are involved in sensitive operations) need to be instrumented with run-time check code, so that both the density of static instrumentation and the performance overhead are reduced. Based on this approach, a prototype system called DRIVER (detect and run-time check integer-based vulnerabilities with information flow) is implemented as an extension to the GCC compiler and tested on a number of real-world applications. The experimental results show that this approach is effective, scalable, lightweight and capable of locating the root cause.
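The core idea, checking integer operations only on tainted paths, can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual DRIVER/GCC instrumentation:

```python
# Taint-guided integer overflow checking (illustrative sketch).
# Only operations whose operands are user-controlled (tainted) and flow
# into a sensitive sink get a run-time check, which is what keeps the
# instrumentation density and overhead low.

INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def checked_add(a, b, tainted):
    """Add two 32-bit ints; insert an overflow check only on tainted paths."""
    if tainted and not (INT32_MIN <= a + b <= INT32_MAX):
        raise OverflowError(f"integer overflow: {a} + {b}")
    # emulate 32-bit wraparound for the unchecked (untainted) case
    return ((a + b + 2**31) % 2**32) - 2**31

# user-controlled value on a path to a sensitive operation (e.g. a buffer size)
print(checked_add(100, 200, tainted=True))       # passes the check: 300
print(checked_add(INT32_MAX, 1, tainted=False))  # untainted: silently wraps
```

The untainted case deliberately keeps the original (wrapping) semantics, mirroring the paper's goal of instrumenting only where user input can reach a sensitive operation.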

    • Support for Multi-Level Parallelism on Heterogeneous Multi-Core and Performance Optimization

      2013, 24(12):2782-2796. DOI: 10.3724/SP.J.1001.2013.04386

      Abstract (3722) HTML (0) PDF 833.59 K (6363) Comment (0) Favorites

      Abstract:Due to its lower power consumption and cost, heterogeneous multi-core makes up a major computing resource in current supercomputers. However, because heterogeneous multi-core processors feature high bandwidth and loose memory consistency, programmers must attend to hardware details to obtain ideal memory and computation performance. This paper introduces CellMLP, a multi-level parallelism model for the Cell BE heterogeneous multi-core processor. By extending C-based compiler directives, CellMLP supports the data-parallel, task-parallel and pipeline-parallel programming models and improves programming productivity. In addition, run-time optimizations are used to improve performance. Parallel SPE data transfers and a double-buffer mechanism are used to improve memory bandwidth. A novel hybrid task queue is used in task parallelism to support asynchronous work stealing, reduce contention between SPE threads and increase the scalability of task parallelism. For pipeline parallelism, low-overhead synchronization operations are implemented for the first time utilizing signal channels in Cell BE. Experiments are conducted on Stream, NAS Benchmark, BOTS and other typical irregular applications. Results show that CellMLP can support different typical parallel applications efficiently. Compared with the similar programming models SARC and CellSs, CellMLP has obvious advantages in practical data transfer bandwidth as well as in support for irregular applications.
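The asynchronous work-stealing idea behind the hybrid task queue can be sketched in a few lines. This is a simplified illustration (the actual CellMLP runtime targets SPE threads on Cell BE, not Python threads): the owner pushes and pops at one end, while thieves steal from the opposite end, which keeps the two kinds of access from contending over the same tasks:

```python
from collections import deque
from threading import Lock

class WorkStealingDeque:
    """Toy work-stealing queue: owner works LIFO, thieves steal FIFO."""

    def __init__(self):
        self._tasks = deque()
        self._lock = Lock()  # single lock for simplicity; real runtimes
                             # use lock-free ends to cut contention further

    def push(self, task):
        # owner adds new work at the bottom
        with self._lock:
            self._tasks.append(task)

    def pop(self):
        # owner takes the newest task (cache-warm end)
        with self._lock:
            return self._tasks.pop() if self._tasks else None

    def steal(self):
        # a thief takes the oldest task from the opposite end
        with self._lock:
            return self._tasks.popleft() if self._tasks else None

q = WorkStealingDeque()
for i in range(4):
    q.push(i)
print(q.pop())    # owner gets 3, the most recently pushed task
print(q.steal())  # a thief gets 0, the oldest task
```

Stealing from the opposite end is the standard way such queues reduce owner/thief contention, which matches the scalability goal described above.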

    • Evolution Analysis of Data Flow Oriented Internetware Service

      2013, 24(12):2797-2813. DOI: 10.3724/SP.J.1001.2013.04396

      Abstract (3722) HTML (0) PDF 1004.20 K (4849) Comment (0) Favorites

      Abstract:Internetware needs to integrate various heterogeneous services and adapt to the dynamically changing network environment in order to achieve uninterrupted service and online dynamic evolution. To explicitly bring the data flow into dynamic evolution, a data flow and control flow oriented internetware service model based on colored Petri nets (CPN) is put forward in this paper. After analyzing the data flow errors caused by five kinds of dynamic evolution operations, and in order to avoid these errors effectively, two data flow oriented service instance migratability criteria are given first. Then, a service instance migratability criterion covering the cross dependencies between data flow and control flow is proposed to comprehensively describe the constraint attributes of dynamic service instance migration. The experimental results show that the methods provided in this paper are feasible and applicable to internetware services.

    • Software Networks Nodes Impact Analysis of Complex Software Systems

      2013, 24(12):2814-2829. DOI: 10.3724/SP.J.1001.2013.04397

      Abstract (4091) HTML (0) PDF 799.25 K (4948) Comment (0) Favorites

      Abstract:Complex network theory has been used to reveal some typical features of software networks, providing a new way to understand software structure from a system view. However, gaps exist between the theoretical results and the practical performance of software systems. This paper aims to reveal some essential causes of this difference by analyzing the characteristics of software network nodes. It proposes a novel weighted network model to describe the dependencies among software network nodes more accurately. Based on this model, it analyzes the actual dependencies of the software network nodes and several of their statistical characteristics, and further analyzes the relationships between these statistical characteristics and node impact. Furthermore, it introduces the concept of the key node together with four fundamental hypotheses, and verifies the effectiveness of these hypotheses through experiments on two software systems. This study provides a guide for research in defect propagation, software reliability and software integration testing.

    • Approach for GUI Testing Based on Event Handler Function

      2013, 24(12):2830-2842. DOI: 10.3724/SP.J.1001.2013.04399

      Abstract (3653) HTML (0) PDF 667.95 K (5530) Comment (0) Favorites

      Abstract:An EHF (event handler function) implements the functionality of software and responds to users' operations on GUI (graphical user interface) elements. GUI testing focuses on the conformance between the specification and EHFs, as well as on the relations among EHFs. To address the problems of the large scale of test cases and the generation of invalid test cases, this paper proposes a new GUI test model named EHG based on event handler functions. Using the model and the features of event handler functions, two test coverage criteria are constructed. Based on these criteria, feedback-directed GUI test case generation is implemented. Experimental results show that the new approach not only effectively controls the scale of test cases while eliminating invalid test cases, but also improves coverage of the code structure.
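The notion of handler-level coverage can be illustrated with a toy measurement. The handler names here are invented, and this single "every handler fired" criterion is far simpler than the EHG model's two criteria:

```python
# Toy event-handler coverage: run test event sequences against a known
# set of handlers and report the fraction of handlers exercised.

handlers = {"on_open", "on_save", "on_close", "on_help"}  # hypothetical EHFs

def handler_coverage(test_sequences):
    fired = set()
    for seq in test_sequences:
        for event in seq:
            if event in handlers:
                fired.add(event)
    return len(fired) / len(handlers)

tests = [["on_open", "on_save"], ["on_open", "on_close"]]
print(handler_coverage(tests))  # 0.75: on_help is never exercised
```

Feedback-directed generation, as described above, would use exactly this kind of measurement to steer the next test toward uncovered handlers instead of enumerating all event sequences.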

    • Data Decomposition Algorithm Based on Array Life Cycle

      2013, 24(12):2843-2858. DOI: 10.3724/SP.J.1001.2013.04405

      Abstract (3208) HTML (0) PDF 826.09 K (4711) Comment (0) Favorites

      Abstract:Partitioning is a compiler technique that maps computation and data onto different processors, and it is the key issue in automatic parallelization on distributed memory architectures. An array's life cycle has been given little consideration in previous research on data decomposition, despite the fact that inconsistent decompositions across different array life cycles often result in communication redundancy. This paper proposes a new data decomposition algorithm which represents the data flow information of arrays with a def-use graph and creates a separate decomposition for each life cycle of an array. Experimental results on Matrix-Inversion and eight other applications show that, compared with automatic data decomposition methods that do not distinguish array life cycles, the proposed algorithm not only assesses parallel benefits more accurately, but also reduces communication redundancy and raises the speedup.
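The life-cycle notion can be sketched by slicing an array's access trace at its definitions: each definition starts a new life cycle that collects the uses up to the next definition, and each life cycle can then receive its own decomposition. The access trace below is invented for illustration:

```python
# Split an array's def/use trace into life cycles: a new cycle starts at
# each definition of the array and absorbs the uses that follow it.

def life_cycles(trace):
    cycles, current = [], None
    for kind, stmt in trace:
        if kind == "def":
            if current:
                cycles.append(current)
            current = [stmt]        # a definition opens a new life cycle
        elif current:
            current.append(stmt)    # a use extends the current life cycle
    if current:
        cycles.append(current)
    return cycles

trace = [("def", "A = init()"), ("use", "B = A * 2"),
         ("def", "A = read()"), ("use", "C = A + B"), ("use", "sum(A)")]
print(life_cycles(trace))  # two life cycles of array A
```

With the trace split this way, the two lives of A are free to take different layouts, which is the source of the reduced communication the abstract reports.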

    • Cooperative and Forwarding Hybrid Routing Algorithm for Network Lifetime Maximization in Wireless Sensor Network

      2013, 24(12):2859-2870. DOI: 10.3724/SP.J.1001.2013.04380

      Abstract (3430) HTML (0) PDF 706.95 K (4385) Comment (0) Favorites

      Abstract:Periodic monitoring with many-to-one data transmission is one of the representative scenarios in wireless sensor networks, where an inherent uneven energy consumption problem exists: nodes at different distances from the sink have different energy consumption, which seriously reduces network lifetime. A routing algorithm is proposed based on hybrid cooperative and forwarding transmission modes, which exploits the complementary energy consumption characteristics of cooperative and forwarding transmission over long and short distances, and balances nodes' energy consumption by tuning the ratio of traffic transmitted in non-cooperative mode (referred to as the relay ratio). Network lifetime maximization (NLM) is modeled as an optimization of the relay ratio vector, which is a high-order non-linear optimization problem in multiple variables. To solve this problem, a theoretical analysis is carried out on node energy consumption when network lifetime is maximized, and an important conclusion is reached: if the bit energy consumption of forwarding mode is lower than that of cooperative mode for the sink's one-hop neighbors, all nodes have equal energy consumption when network lifetime is maximized; otherwise, only the nodes whose bit energy consumption in cooperative mode is higher than that in forwarding mode have equal energy consumption. As a result, NLM, a high-order non-linear optimization problem, is simplified into an optimization over a single variable. A distributed optimal-relay-ratio-based routing algorithm (DORRCR) is designed based on the theoretical analysis. Simulation shows that DORRCR greatly prolongs network lifetime compared with pure cooperative and non-cooperative energy-balancing routing protocols, and evidently balances energy consumption over the whole network.

    • Self-Organizing Semantic Integration Framework for Dynamic and Distributed Contents in the Network Environment

      2013, 24(12):2871-2882. DOI: 10.3724/SP.J.1001.2013.04431

      Abstract (3332) HTML (0) PDF 793.77 K (4637) Comment (0) Favorites

      Abstract:With the development of network and multimedia technologies, many new networks are emerging and converging through the Internet, making the network more ubiquitous, open and dynamic. Meanwhile, the mismatch between the explosive growth of dynamic, distributed content and the personalized needs of users in the network keeps increasing. Given the large-scale content in the network environment, how to provide personalized, intelligent content services has therefore become a common issue in the research community and industry. The purpose of this study is to combine semantic technology with network communication mechanisms to support personalized, semantic integration services in an open, dynamic network environment. This paper proposes a self-organizing semantic integration framework for dynamic, distributed content, together with the key techniques of its realization mechanism, including the relational routing model and the technology framework and implementation process of self-organizing semantic integration. The framework supports building personalized, intelligent integration applications suited to dynamic and distributed content, and provides engineering design and technical realization methods for semantically integrating dynamic, distributed content in network environments.

    • Data Anonymization Approach for Incomplete Microdata

      2013, 24(12):2883-2896. DOI: 10.3724/SP.J.1001.2013.04411

      Abstract (3622) HTML (0) PDF 763.15 K (4487) Comment (0) Favorites

      Abstract:To protect privacy against linking attacks, the quasi-identifier attributes of microdata should be anonymized in privacy-preserving data publishing. Although many algorithms have been proposed in this area, few of them can handle incomplete microdata. Most existing algorithms simply delete records with missing values, causing large information loss. This paper proposes a novel data anonymization approach for incomplete microdata, called KAIM (k-anonymity for incomplete microdata), based on the k-member algorithm and an information-entropy distance. Instead of deleting any records, KAIM effectively clusters records with similar characteristics together to minimize information loss, and then generalizes all records with a local recoding scheme. Results of extensive experiments based on a real dataset show that KAIM causes only 43.8% of the information loss of previous algorithms for incomplete microdata, validating that KAIM performs much better than existing algorithms in terms of the utility of the anonymized dataset.
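The k-member clustering step can be illustrated with a heavily simplified sketch: greedily grow clusters of at least k records, always pulling in the record closest to the cluster so far. Plain numeric distance on a single attribute stands in for KAIM's information-entropy distance, and missing-value handling is omitted:

```python
# Greedy k-member-style clustering for k-anonymity (toy version).
# Every record ends up in a cluster of size >= k; no record is deleted.

def k_member_cluster(records, k):
    remaining = list(records)
    clusters = []
    while len(remaining) >= k:
        seed = remaining.pop(0)         # start a new cluster from a seed
        cluster = [seed]
        while len(cluster) < k:
            center = sum(cluster) / len(cluster)
            nearest = min(remaining, key=lambda r: abs(r - center))
            remaining.remove(nearest)   # absorb the closest record
            cluster.append(nearest)
        clusters.append(cluster)
    if clusters and remaining:          # leftovers join the last cluster
        clusters[-1].extend(remaining)
    return clusters

ages = [23, 25, 24, 40, 41, 43, 70]
print(k_member_cluster(ages, 3))  # [[23, 24, 25], [40, 41, 43, 70]]
```

Each cluster would then be generalized to a common value or range (local recoding), so tight clusters translate directly into low information loss.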

    • Virtual Machine Anonymous Attestation in Cloud Computing

      2013, 24(12):2897-2908. DOI: 10.3724/SP.J.1001.2013.04389

      Abstract (4260) HTML (0) PDF 696.72 K (5755) Comment (0) Favorites

      Abstract:As a vital constituent of the cloud environment, the identity management and authentication of system virtual machines is of great importance to cloud computing security. Multiple identity authorities exist in the large-scale, widely distributed cloud environment, and information about these authorities is publicly exposed in current identity management and authentication schemes. In a cloud environment, this deficiency poses the danger of revealing the organization and location of users' platforms, violating key properties of the cloud environment, namely organization transparency and location independence. This paper presents an issuer-anonymous attestation scheme which, without lowering the level of authentication and credibility, protects the anonymity of the platforms and the privacy of the issuers in the process of identity certification, effectively preventing information revelation in accordance with the transparency and independence nature of the cloud environment. Furthermore, the proposed scheme realizes attestation of platform attributes independently, requiring no participation of the identity authorities, and therefore excludes the possibility of collusion attacks between the inspector and the identity authorities and enhances the practicability and security of the scheme.

    • Universal Steganalysis Based on Differential Zero Coefficients and Index Co-Occurrence Matrix

      2013, 24(12):2909-2920. DOI: 10.3724/SP.J.1001.2013.04402

      Abstract (3112) HTML (0) PDF 610.18 K (4128) Comment (0) Favorites

      Abstract:To improve the security and reliability of Internet communications, a steganalysis algorithm for graphics interchange format (GIF) images is proposed in this paper. 36-dimensional statistical features of a GIF image, which are sensitive to the color correlation between adjacent pixels and to the breaking of image texture, are extracted based on differential zero coefficients (DZC) and the index co-occurrence matrix (ICM). A support vector machine (SVM) takes the 36-dimensional statistical features to detect hidden messages in GIF images effectively. Experimental results indicate that the proposed algorithm has better detection performance and higher time efficiency compared with other similar steganalysis algorithms, for typical steganographic algorithms including optimum parity assignment (OPA), sum of components (SoC) and multibit assignment steganography (MBA), as well as for steganographic tools popular on the Internet, such as EzStego, S-Tools4 and Gif-it-up. Furthermore, the proposed algorithm is capable of universal steganalysis.
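An index co-occurrence matrix of the kind the features build on can be computed in a few lines: count how often each pair of palette indices (i, j) appears at horizontally adjacent pixels. Embedding that perturbs palette indices flattens this matrix, which is the signal such features pick up. The 3x3 index map below is made up, and this sketch covers only the counting step, not the 36 features or the SVM:

```python
# Build a horizontal index co-occurrence matrix over GIF palette indices.

def index_cooccurrence(img, n_colors):
    icm = [[0] * n_colors for _ in range(n_colors)]
    for row in img:
        for a, b in zip(row, row[1:]):  # horizontally adjacent index pairs
            icm[a][b] += 1
    return icm

img = [[0, 0, 1],   # toy 3x3 image of palette indices, 3-color palette
       [1, 1, 1],
       [2, 1, 0]]
icm = index_cooccurrence(img, 3)
print(icm)  # icm[1][1] == 2: index 1 follows itself twice
```

A full feature extractor would typically normalize this matrix and combine it with vertical/diagonal directions before feeding statistics to the classifier.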

    • Video Clip Identification Algorithm Based on Spatio-Temporal Ordinal Measures

      2013, 24(12):2921-2936. DOI: 10.3724/SP.J.1001.2013.04415

      Abstract (3460) HTML (0) PDF 1004.37 K (4960) Comment (0) Favorites

      Abstract:Many state-of-the-art video clip identification algorithms are based on ordinal measures. However, they still have two problems: the weak uniqueness of the video signature makes precision drop quickly once recall grows high enough, and quadratic-time complexity makes the response time too long and sensitive to the length of the query video. To address these two problems, this paper proposes a video clip identification algorithm based on spatio-temporal ordinal measures. The key steps are: (1) before accurate identification starts, a linear-time-complexity real-time filtration method based on the spatio-temporal binary pattern histogram (STBPH) and a fast filtration method based on the binary temporal ordinal measure (BTOM) filter out most candidate video clips in the target video; (2) during accurate identification, the joint spatio-temporal ordinal measure (JSTOM), which is more unique and robust, is utilized to improve precision. Experimental results show that the approach improves precision significantly and is very efficient and insensitive to the length of the query video.
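The basic spatial ordinal measure underlying such signatures can be sketched as follows: partition a frame into blocks, average each block, and keep only the rank order of the block averages. Ranks survive global brightness and contrast changes, which is why ordinal signatures suit copy identification. The 4x4 "frame" is made up, and this is only the per-frame building block, not the STBPH/BTOM/JSTOM measures themselves:

```python
# Spatial ordinal measure of one frame: rank order of block-mean intensities.

def ordinal_measure(frame, block=2):
    h, w = len(frame), len(frame[0])
    means = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [frame[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            means.append(sum(vals) / len(vals))
    # rank of each block mean (0 = darkest block)
    order = sorted(range(len(means)), key=lambda i: means[i])
    ranks = [0] * len(means)
    for rank, i in enumerate(order):
        ranks[i] = rank
    return ranks

frame = [[10, 12, 200, 210],
         [11, 13, 205, 215],
         [50, 52, 90, 95],
         [51, 53, 92, 96]]
print(ordinal_measure(frame))  # [0, 3, 1, 2]
```

Because a uniform brightness shift leaves the rank order unchanged, the same signature is produced for brightened or dimmed copies of the frame.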

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic Price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn