• Volume 24, Issue 8, 2013: Table of Contents
    • Review Articles
    • Survey of Test Case Prioritization Techniques for Regression Testing

      2013, 24(8):1695-1712. DOI: 10.3724/SP.J.1001.2013.04420 CSTR:

      Abstract (9358) HTML (0) PDF 1.13 M (12386) Comment (0) Favorites

      Abstract: Test case prioritization (TCP) is a hot research topic in regression testing. TCP techniques try to optimize the execution schedule of a test suite according to a specific prioritization criterion, with the goal of maximizing a specific objective, such as the early fault detection rate of the original test suite. These techniques are especially useful in testing scenarios where resources are too limited to execute all test cases. This paper first describes the TCP problem and classifies existing TCP techniques into three categories: source-code-based, requirement-based, and model-based. Second, it formulates a specific TCP problem (i.e., the resource-aware TCP problem) and summarizes related research. Third, it summarizes commonly used evaluation metrics and subjects in experimental studies, and how different fault injection types affect empirical results. Fourth, it reviews applications of TCP in specific testing domains, such as combinatorial testing, event-driven application testing, fault localization, and Web service testing, and finally discusses future work on the TCP problem.
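As background for the evaluation metrics this survey summarizes, the standard APFD measure (average percentage of faults detected) can be sketched as follows. This is an illustrative sketch of the textbook formula; the function name and data layout are ours, not from the paper:

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for one test ordering.

    order: list of test indices in execution order.
    fault_matrix: dict mapping fault id -> set of tests detecting it.
    APFD = 1 - (sum of TF_i) / (n * m) + 1 / (2n), where TF_i is the
    1-based position of the first test that reveals fault i.
    """
    n, m = len(order), len(fault_matrix)
    position = {t: i + 1 for i, t in enumerate(order)}  # 1-based positions
    tf_sum = sum(min(position[t] for t in detecting)
                 for detecting in fault_matrix.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)
```

A prioritization that moves fault-revealing tests earlier in `order` raises the APFD value toward 1.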

    • Approach to Supporting On-Demand Remote Execution of the Computations in a Java Application

      2013, 24(8):1713-1730. DOI: 10.3724/SP.J.1001.2013.04344 CSTR:

      Abstract (4045) HTML (0) PDF 1.22 M (5914) Comment (0) Favorites

      Abstract: On-demand remote execution is an important way to let an application occupy resources on demand, guaranteeing performance while improving resource utilization. This paper proposes an automatic program transformation approach for on-demand remote execution of the computations in a Java application. The core of the approach is a design pattern supporting on-demand remote execution of computations. The paper presents the technical challenges of and solutions for the transformation, and describes the DPartner transformation system. Compared with previous work, DPartner has two major characteristics: first, the transformation is carried out automatically; second, the transformed application can execute remotely on demand, so its performance can be improved and resource utilization increased. Additionally, DPartner is designed as a practical tool, as it can transform legacy applications given only their Java bytecode.

    • Modeling and Maintaining Runtime Software Architectures

      2013, 24(8):1731-1745. DOI: 10.3724/SP.J.1001.2013.04360 CSTR:

      Abstract (4050) HTML (0) PDF 1003.73 K (5850) Comment (0) Favorites

      Abstract: A runtime software architecture is a dynamic, structural abstraction of a running system, which describes the elements of the current system, the states of these elements, and the relations between them. A runtime architecture has a causal connection with the running system, so that system administrators can monitor and control the system by reading and editing the architecture. The key to constructing a runtime architecture is to develop the infrastructure between the target architecture and system that maintains the causal connection between them. However, because of the diversity of target systems and architectures, and the complexity of the logic maintaining the causal connection, developing such infrastructures is tedious, error-prone, and hard to reuse or evolve. This paper presents a model-driven approach to constructing runtime architectures. Developers describe the target system, the architecture, and the relation between them as declarative models, and the supporting framework automatically generates the runtime architecture infrastructure. The research designs a runtime architecture modeling language by extending the standard MOF and QVT languages, and implements the supporting framework on top of a set of general synchronization techniques between the system and the architecture. A set of case studies illustrates that this approach applies to a wide range of systems and architectures and improves the efficiency and reusability of runtime model construction.

    • Power Consumption Prediction Model of General-Purpose Computing GPU with Static Program Slicing

      2013, 24(8):1746-1760. DOI: 10.3724/SP.J.1001.2013.04361 CSTR:

      Abstract (4023) HTML (0) PDF 1.77 M (7197) Comment (0) Favorites

      Abstract: With the development of general-purpose computing on GPUs (graphics processing units), power consumption measurement and optimization have become essential issues in green computing. The power consumption of GPUs is currently measured mainly in hardware, so programmers have difficulty understanding the power profiles of their applications well enough to optimize and refactor them before the compile phase. To solve this issue, power consumption models are proposed for GPU applications, for sparse-branch and dense-branch programs respectively, based on program slicing. Program slicing is a granularity level that lies between the function and instruction levels, and offers good feasibility and accuracy in power consumption estimation. Power consumption prediction models for program slices are built using nonlinear regression and wavelet neural networks. For a specific GPU, the nonlinear-regression model is more precise than the wavelet-neural-network model; however, the wavelet-neural-network model generalizes better across various kinds of GPUs. After analyzing application structure, a weighted power model for sparse-branch programs is provided to achieve better effectiveness, and a probabilistic slicing power model for dense-branch programs, based on the probabilities of execution paths, is proposed to improve accuracy. The results indicate that the two models can effectively predict power consumption, with an average relative error between predicted and measured values of less than 6%.

    • Profile-Guided Optimization of System Energy Consumption for High-Performance Operational Applications

      2013, 24(8):1761-1774. DOI: 10.3724/SP.J.1001.2013.04363 CSTR:

      Abstract (3623) HTML (0) PDF 852.54 K (4804) Comment (0) Favorites

      Abstract: Currently, many high-performance computers are used to run operational numerical computations cyclically. The main maintenance cost comes from the large amount of electric energy consumed, so reducing energy consumption can significantly reduce maintenance cost. The core units of operational systems are microprocessors, and current microprocessors widely support the low-power technique of dynamic voltage and frequency scaling (DVFS). DVFS reduces energy consumption by decreasing the supply voltage and execution frequency, which generally degrades performance. This paper models the energy consumption of operational applications under time constraints, and presents DVFS-based energy optimization techniques for operational systems. Depending on how the program execution information is obtained, two energy optimization models, SEOM and CEOM, are derived: the execution information for SEOM is obtained directly from testing, while that for CEOM is obtained from compiler-directed program profiling. The models have been evaluated on representative computer platforms, and the results show energy savings of up to 12%.
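The basic DVFS trade-off described above (dynamic energy scales with the square of the supply voltage, while execution time scales inversely with frequency) can be sketched as a deadline-constrained level selection. This is a generic textbook model with the capacitance constant folded into the units, not the paper's SEOM or CEOM:

```python
def choose_dvfs_level(levels, cycles, deadline):
    """Pick the lowest-energy (voltage, frequency) level that still
    meets the time constraint.  levels: list of (voltage_V, freq_Hz);
    cycles: workload in CPU cycles; deadline: seconds.
    Dynamic energy is modelled as E ~ V^2 * cycles, since
    E = P * t = C * V^2 * f * (cycles / f) = C * V^2 * cycles.
    """
    feasible = [(v, f) for v, f in levels if cycles / f <= deadline]
    if not feasible:
        raise ValueError("no DVFS level meets the deadline")
    # lower voltage -> lower energy, so pick the minimal V^2 among feasible
    return min(feasible, key=lambda vf: vf[0] ** 2 * cycles)
```

Because energy no longer depends on frequency in this model, the optimizer simply chooses the lowest feasible voltage, which is why slack time before the deadline translates directly into energy savings.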

    • Detecting and Treatment Algorithm of Implicit Synchronization Based on Dependence Analysis in SPMD Program

      2013, 24(8):1775-1785. DOI: 10.3724/SP.J.1001.2013.04343 CSTR:

      Abstract (3533) HTML (0) PDF 716.87 K (4913) Comment (0) Favorites

      Abstract: SPMD translation compiles programs written in an SPMD-threaded programming model for multiple devices. Current research assumes that different threads are independent except where they communicate through explicit synchronizations. However, data dependences between threads, i.e., implicit synchronizations, lead to correctness pitfalls in SPMD translation. To deal with implicit synchronizations, this paper systematically analyzes the implicit synchronizations in the fine-grained SPMD programming model CUDA, reveals the correctness pitfalls in existing SPMD translation from CUDA to multi-core, and proposes a method for detecting implicit synchronizations based on dependence analysis. On the basis of this detection, an optimized treatment algorithm is designed to handle explicit and implicit synchronizations together through loop reordering. Experimental results show that, compared with existing SPMD translation, the detection and treatment algorithm handles various kinds of implicit synchronizations in fine-grained SPMD translation correctly and quickly at small cost, helping the compiler produce correct and efficient results.

    • Review Articles
    • Survey on NoSQL for Management of Big Data

      2013, 24(8):1786-1803. DOI: 10.3724/SP.J.1001.2013.04416 CSTR:

      Abstract (14123) HTML (0) PDF 1.04 M (19293) Comment (0) Favorites

      Abstract: Many application-oriented NoSQL database systems have been developed to satisfy the new requirements of big data management. This paper surveys research on typical NoSQL databases based on the key-value data model. First, the characteristics of big data and the key technical issues in supporting big data management are introduced. Then, frontier efforts and research challenges are discussed, including system architecture, data model, access mode, indexing, transactions, system elasticity, load balancing, replica strategy, data consistency, flash caching, MapReduce-based data processing, and new-generation data management systems. Finally, research prospects are given.

    • Mining Sequential Patterns with Wildcards and the One-Off Condition

      2013, 24(8):1804-1815. DOI: 10.3724/SP.J.1001.2013.04422 CSTR:

      Abstract (6329) HTML (0) PDF 1.05 M (7559) Comment (0) Favorites

      Abstract: A huge wealth of sequence data is available in real-world applications, and sequential pattern mining serves to mine important patterns from it. Given a sequence S, a support threshold, and gap constraints, this paper aims to discover frequent patterns whose supports in S are no less than the given threshold. A pattern P contains flexible wildcards, and the number of wildcards between any two successive elements of P must satisfy the user-specified gap constraints. The study designs an efficient mining algorithm, One-Off Mining, whose mining process satisfies the One-Off condition: each character in the given sequence can be used at most once across all occurrences of a pattern. Experiments on DNA sequences show that this method outperforms related sequential pattern mining algorithms in both time and completeness.
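The One-Off condition can be made concrete with a small support-counting sketch: wildcards become a gap allowance between matched letters, and matched positions are consumed so no occurrence reuses them. This greedy left-to-right version is a simplification for illustration (it yields a lower bound on support), not the paper's One-Off Mining algorithm:

```python
def one_off_support(seq, pattern, max_gap):
    """Count occurrences of `pattern` in `seq` where at most `max_gap`
    wildcard characters may separate consecutive pattern letters, and
    each sequence position is used in at most one occurrence (One-Off).
    """
    used = [False] * len(seq)

    def match_from(start):
        # try to complete one occurrence whose first letter is at `start`
        occ, pos = [start], start
        for ch in pattern[1:]:
            nxt = None
            # candidate positions leave a gap of 0..max_gap after `pos`
            for p in range(pos + 1, min(pos + 2 + max_gap, len(seq))):
                if not used[p] and seq[p] == ch:
                    nxt = p
                    break
            if nxt is None:
                return None
            occ.append(nxt)
            pos = nxt
        return occ

    count = 0
    for start in range(len(seq)):
        if used[start] or seq[start] != pattern[0]:
            continue
        occ = match_from(start)
        if occ:
            for p in occ:
                used[p] = True   # consume positions: the One-Off condition
            count += 1
    return count
```

For example, in "abcab" the pattern "ab" with one allowed wildcard occurs twice under One-Off, because the two occurrences use disjoint positions.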

    • Discovery of Hot Region in Trajectory Databases

      2013, 24(8):1816-1835. DOI: 10.3724/SP.J.1001.2013.04340 CSTR:

      Abstract (4085) HTML (0) PDF 1.85 M (7742) Comment (0) Favorites

      Abstract: Mining the enclosed regions that are visited frequently by moving objects (i.e., hot regions) is a critical premise for discovering movement patterns from trajectory databases, and restricting their coverage is key to the precision and efficiency of trajectory pattern representation. Given a trajectory database, this paper studies how to discover these hot regions and how to constrain their size. A definition of the hot region query with coverage constraints is presented, together with a filter-refinement framework to construct them. In the filter step, the study introduces a grid-based approximation scheme to construct the dense regions efficiently; in the refinement step, it proposes two measures, trend-based and dissimilarity-based, and designs corresponding algorithms and a heuristic parameter selection method to rationally reconstruct the regions under the coverage constraints. Experiments on real datasets validate the effectiveness of this work.
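The filter step described above can be sketched generically: bucket trajectory points into a uniform grid and keep only cells whose visit count passes a threshold, leaving the refinement step to reshape those cells. A minimal sketch under assumed parameters (cell size, count threshold), not the paper's full scheme:

```python
from collections import Counter

def dense_cells(points, cell_size, min_count):
    """Grid-based density filter: map each (x, y) trajectory point to
    a grid cell and return the cells hit at least `min_count` times.
    These dense cells approximate candidate hot regions.
    """
    grid = Counter((int(x // cell_size), int(y // cell_size))
                   for x, y in points)
    return {cell for cell, c in grid.items() if c >= min_count}
```

Choosing a smaller `cell_size` tightens the coverage of each candidate region at the cost of fragmenting it, which is exactly the precision/coverage tension the refinement step resolves.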

    • Algorithm for Processing k-Nearest Join Based on R-Tree in MapReduce

      2013, 24(8):1836-1851. DOI: 10.3724/SP.J.1001.2013.04377 CSTR:

      Abstract (4786) HTML (0) PDF 1.02 M (8012) Comment (0) Favorites

      Abstract: To accelerate the k-nearest neighbor join (knnJ) query for large-scale spatial data, the study presents a knnJ algorithm based on R-trees in MapReduce. First, the research uses a formalization of independent parallelism and sequential synchronization (IPSS) computation to abstract the MapReduce parallel programming model. Next, based on this abstraction, the paper proposes efficient algorithms for bulk-building an R-tree and for performing the knnJ query on the constructed R-tree. In the bulk-building process, a sampling algorithm is provided to determine the spatial partition function rapidly, which makes R-tree construction conform to the IPSS model and easy to express in MapReduce. In the knnJ query process, a knn expanded bounding box is introduced to limit the knn query range and to partition the data, and the generated R-tree is then used to execute the knnJ query in parallel, achieving high performance. The paper analyzes the communication and computation cost in theory. Experimental results on large real spatial datasets demonstrate that the algorithm efficiently resolves large-scale knnJ spatial queries in a MapReduce environment and has good practical applicability.

    • Fast Clustering-Based Anonymization Algorithm for Data Streams

      2013, 24(8):1852-1867. DOI: 10.3724/SP.J.1001.2013.04330 CSTR:

      Abstract (3662) HTML (0) PDF 1.00 M (5157) Comment (0) Favorites

      Abstract: To prevent the disclosure of sensitive information and protect users' privacy, generalization and suppression techniques are often used to anonymize the quasi-identifiers of data before sharing. Data streams are inherently infinite and highly dynamic, which makes them very different from static datasets, so anonymizing data streams involves solving more complicated problems, and methods for anonymizing static datasets cannot be applied to data streams directly. In this paper, an anonymization approach for data streams is proposed after analyzing the published anonymization methods for data streams. The approach scans the data only once, and recognizes and reuses the clusters that satisfy the anonymization requirements to speed up the anonymization process. Experimental results on a real dataset show that the proposed method reduces the information loss caused by generalization and suppression while satisfying the anonymization requirements, and has low time and space complexity.
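The information loss that the abstract refers to can be made concrete with the standard interval-generalization measure for a numeric quasi-identifier: a cluster is published as its [min, max] interval, and its loss is the interval width over the attribute's domain width. A textbook metric sketch; the stream algorithm above decides which tuples share a cluster, which this helper does not model:

```python
def generalize(cluster, domain):
    """Publish a numeric quasi-identifier cluster as an interval and
    report its normalised information loss: generalised range divided
    by the full domain range.  cluster: values grouped together;
    domain: (domain_min, domain_max) for the attribute.
    """
    lo, hi = min(cluster), max(cluster)
    loss = (hi - lo) / (domain[1] - domain[0])
    return (lo, hi), loss
```

Reusing an existing cluster that already satisfies the anonymity requirement, as the proposed approach does, avoids widening intervals unnecessarily and so keeps this loss small.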

    • Collaborative Filtering Model Fusing Singularity and Diffusion Process

      2013, 24(8):1868-1884. DOI: 10.3724/SP.J.1001.2013.04350 CSTR:

      Abstract (5692) HTML (0) PDF 1.09 M (6573) Comment (0) Favorites

      Abstract: As a key solution to the problem of information overload, a recommender system filters a large amount of information according to users' preferences and provides personalized recommendations. However, traditional collaborative filtering models, despite their excellent performance, have not made full use of contextual information in the recommendation process, which creates a performance bottleneck. To further improve system performance, this paper starts from the contextual information carried by ratings and proposes a collaborative filtering model fusing singularity and diffusion process (CFSDP). The model exploits rating singularities obtained from classified rating statistics, and draws on a multi-channel diffusion similarity model that regards the recommender system as a user-item bipartite network. To demonstrate the superiority of the proposed model, the study provides comparative experimental results on the MovieLens, NetFlix, and Jester datasets. The results show that the model not only extends well, but also noticeably improves the prediction and recommendation quality of the system at a reasonable time cost.
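The diffusion process on a user-item bipartite network can be sketched with the classic two-step mass diffusion: each item spreads a unit of resource evenly to the users who rated it, and each user spreads what was received evenly back to their items. This is a generic sketch of the diffusion step such models build on, not the full CFSDP model:

```python
import numpy as np

def diffusion_item_scores(R):
    """Two-step mass diffusion on a user-item bipartite network.
    R: 0/1 user-item matrix (rows = users, columns = items).
    Returns an item-to-item matrix W with W[i, j] =
    (1/k_j) * sum_u R[u, i] * R[u, j] / k_u, whose columns each
    redistribute one item's unit resource (columns sum to 1).
    """
    R = np.asarray(R, dtype=float)
    ku = R.sum(axis=1, keepdims=True)   # user degrees
    ki = R.sum(axis=0, keepdims=True)   # item degrees
    # step 1: item -> users (divide by item degree);
    # step 2: users -> items (divide by user degree)
    return (R / np.where(ku == 0, 1, ku)).T @ (R / np.where(ki == 0, 1, ki))
```

Column `j` of the result ranks all items by how much of item `j`'s resource diffuses to them, which serves as a similarity signal for recommendation.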

    • Optimizing Lighting Design for Adaptive Enhancement Rendering

      2013, 24(8):1885-1897. DOI: 10.3724/SP.J.1001.2013.04348 CSTR:

      Abstract (3338) HTML (0) PDF 3.05 M (4358) Comment (0) Favorites

      Abstract: Enhancement rendering highlights geometric features of models so that the viewer can understand them easily. Existing methods usually enhance contrast on the features by modifying surface normals or adjusting colors according to curvature. However, such treatment may distort the appearance of the features or introduce noise into the rendered images, and thereby impede the viewer's understanding of the model. This paper presents a method that optimizes the lighting design for enhancement rendering. It avoids adjusting any geometric information of the model, and can adaptively highlight features according to the observation requirements through a hierarchical construction of the model, which also efficiently eliminates rendering noise from detailed features. As a result, the rendered images clearly and exactly highlight features for efficient model understanding.

    • Recognizing On-Body Positions of Mobile Devices

      2013, 24(8):1898-1908. DOI: 10.3724/SP.J.1001.2013.04409 CSTR:

      Abstract (3820) HTML (0) PDF 731.20 K (5511) Comment (0) Favorites

      Abstract: The growing computing power and sensing ability of mobile devices allow them to provide various context-adapted services to users. The on-body position of a mobile device, an important kind of context information, affects the recognition of human activities and the adaptability of many mobile applications. The study provides a method to recognize the on-body positions of mobile devices, inspired by the observation that different positions on the body have distinguishable rotation patterns. The research fuses data sensed by the accelerometer and the gyroscope to calculate the rotation radius, the magnitude of the angular velocity, and the gravity acceleration, and then extracts a set of features. A classifier based on random forests is used for classification and compared with a solution based on support vector machines. To evaluate the method, the paper conducts an experiment on a public dataset with 3 types of positions and 13 types of activities. The results show that the method achieves an average accuracy of 95.39% in cross validation and indicate that, when rotation is the main component of the movement and the direction of the gravity acceleration is stable, the information about rotation variation and the ensemble classifier help improve classification accuracy. Compared with previous work, the method classifies positions more precisely and generalizes better to new users and new activities.

    • LBP Texture Feature Based on Haar Characteristics

      2013, 24(8):1909-1926. DOI: 10.3724/SP.J.1001.2013.04277 CSTR:

      Abstract (4263) HTML (0) PDF 3.03 M (8156) Comment (0) Favorites

      Abstract: Image texture features reflect characteristics of the image's gray-level distribution, contrast, spatial distribution, and changes in its intrinsic properties. Effectively extracting deep-level image texture features while keeping computational complexity low is a difficult problem. To solve it, this paper analyzes the statistical characteristics of adjacent regions and proposes an image texture feature extraction method based on the Haar local binary pattern (HLBP). Because Haar-like features are simple and fast to compute, and effective and reliable for local feature statistics, Haar features are incorporated into LBP. The method first defines eight groups of Haar feature encoding models, which compute the local texture features of the image in accordance with the local binary pattern (LBP); this effectively reduces the impact of noise. Then, to further enhance the representation of image texture features, the method is combined with Gabor wavelet filters for gray-level image feature extraction at different orientations and scales. Finally, four comparative experiments show the method to be a feasible tool for analyzing image texture features.
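For background, the basic LBP operator that HLBP builds on thresholds each pixel's 8 neighbours against the centre and packs the results into a byte. This is the standard 3x3 LBP, shown as context; the paper's HLBP replaces the single-pixel comparisons with Haar-like region comparisons:

```python
def lbp_code(patch):
    """Basic 8-neighbour local binary pattern for a 3x3 patch: each
    neighbour >= centre contributes one bit, packed clockwise from
    the top-left corner.  Returns an integer in 0..255.
    """
    c = patch[1][1]
    # neighbour coordinates, clockwise starting at top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(coords):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code
```

Histogramming these codes over an image region gives the texture descriptor; because the code depends only on sign comparisons, it is invariant to monotonic gray-level changes.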

    • Cerebrovascular Segmentation Based on Region Growing and Local Adaptive C-V Model

      2013, 24(8):1927-1936. DOI: 10.3724/SP.J.1001.2013.04394 CSTR:

      Abstract (3872) HTML (0) PDF 1.17 M (5939) Comment (0) Favorites

      Abstract: This paper presents an effective approach to extracting the cerebrovascular tree from time-of-flight (TOF) magnetic resonance angiography (MRA) images. The approach consists of two segmentation stages. In the first stage, Gaussian filtering is applied to the 3D volumetric field. Using the maximum intensity projection (MIP) image segmented by the two-dimensional OTSU algorithm, 3D vessel seeds are obtained. The region growing rule is defined by combining global and local information, and a rough segmentation is then produced by the region growing algorithm. In the second stage, the original volume data is filtered by anisotropic filtering based on Catté diffusion. A locally adaptive C-V model is proposed, with its initial contour set from the vessels segmented in the first stage, and the accurate segmentation is obtained by contour evolution. Experimental results show that the proposed algorithm not only effectively segments thick vessels, but also accurately extracts thinner vessels with weak boundaries.

    • Virtual Resource Evaluation Model Based on Entropy Optimized and Dynamic Weighted in Cloud Computing

      2013, 24(8):1937-1946. DOI: 10.3724/SP.J.1001.2013.04364 CSTR:

      Abstract (3987) HTML (0) PDF 702.58 K (6711) Comment (0) Favorites

      Abstract: The dynamism and uncertainty of cloud resources make resource allocation and task scheduling more difficult. To retrieve accurate resource information about dynamic loads and available capacity, this study proposes a resource evaluation model based on entropy optimization and dynamic weighting. The entropy optimization filters the resources that satisfy user QoS and system-maximization goals via a goal function and the constraints of maximum entropy and the entropy increase principle, achieving optimal scheduling while satisfying user QoS. The evaluation model then evaluates the load of the filtered resources with a dynamic weighting algorithm. To reduce energy consumption, achieve load balancing, and improve system utilization, the study migrates or releases resources that are overloaded or unavailable for a long time. Experimental results show the effect of entropy optimization on user QoS and system maximization, and that the dynamic weighting algorithm benefits load balancing and system utilization. The results prove that the evaluation model achieves multi-objective optimization: satisfying user QoS, reducing energy consumption, balancing load, improving system utilization, and so on.
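The entropy-based weighting idea can be illustrated with the textbook entropy weight method: attributes whose values vary more across resources carry more information and therefore receive larger weights. A generic sketch of this weighting step under our own data layout, not the paper's full evaluation model:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method.  X: (resources x attributes) matrix of
    non-negative metrics (e.g. CPU load, memory use).  Each column is
    normalised to a distribution, its Shannon entropy is computed, and
    the weight is proportional to 1 - entropy (the diversification).
    """
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                      # per-attribute distribution
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    E = -(P * logs).sum(axis=0) / np.log(n)    # entropy in [0, 1]
    d = 1 - E                                  # degree of diversification
    return d / d.sum()
```

An attribute that is identical on every resource gets entropy 1 and weight 0, so it cannot distinguish candidates, while a highly variable attribute dominates the weighted load score.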

    • Scheme for Cooperative Caching in ICN

      2013, 24(8):1947-1962. DOI: 10.3724/SP.J.1001.2013.04378 CSTR:

      Abstract (4402) HTML (0) PDF 1.03 M (7175) Comment (0) Favorites

      Abstract: With the emergence of information-centric networking (ICN), whose in-network caching distinguishes it from other Internet architectures, efficient caching becomes increasingly attractive, but remains as challenging as it is for current Web caching. This paper proposes a distributed scheme embedded with a locally centralized model, called APDR (content-aware placement, discovery and replacement). In APDR, according to the information carried on an Interest message, the destination of the Interest makes caching decisions for the nodes along the path, including how long the requested content will be cached at each node. The study evaluates the proposed scheme through extensive simulation experiments over a wide range of performance metrics, such as cache hit ratio, average access cost, number of replacements, forwarding efficiency, and cache robustness. The results show that the scheme yields significant performance improvements in diverse operating environments, while the additional overhead of APDR remains small.

    • Benefit-Aware On-Demand Provisioning Approach for Virtual Resources

      2013, 24(8):1963-1980. DOI: 10.3724/SP.J.1001.2013.04388 CSTR:

      Abstract (4012) HTML (0) PDF 1.19 M (5899) Comment (0) Favorites

      Abstract: Providing QoS (quality of service) in a virtualization-based cloud computing environment is a challenging problem. Existing efforts address this challenge with either cost-oblivious or cost-aware approaches. However, both may suffer frequent QoS violations under typical flash-crowd workloads; for instance, both ignore the benefit gained after configuration changes. In this paper, a benefit-aware approach based on the profit-rate maximization principle is introduced to address this problem. Here, the benefit means the perceived satisfaction over the duration for which the application continuously guarantees its QoS in the new configuration. Experimental results based on the TPC-W benchmark show that this benefit-aware approach can save VM resource costs by as much as 25% and effectively reduce QoS violations compared with a cost-aware approach.


Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825, CN 11-2560/TP
  • Domestic Price: CNY 70
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063