Volume 26, Issue 8, 2015 Table of Contents
    • Review Articles
    • Model Abstraction for Stochastic Model Checking

      2015, 26(8):1853-1870. DOI: 10.13328/j.cnki.jos.004838

      Abstract:Stochastic model checking is a recent extension and generalization of classical model checking. It combines classical model checking algorithms with linear equation solving or linear programming algorithms, and it processes probability vectors instead of bit vectors. Consequently, the state explosion problem is more severe in stochastic model checking than in classical model checking. Abstraction is an important means of tackling the state explosion problem, and some progress has been made in applying it to stochastic model checking. This study focuses on model abstraction for stochastic model checking. First, the problem of model abstraction is formally presented. Then, advances in the research area are classified and summarized according to the construction technology of the abstraction model. Finally, the various abstraction technologies are compared with regard to their effectiveness in solving the model abstraction problem, and future research topics for improving solutions to the model abstraction problem are pointed out.
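
      As a toy illustration of why stochastic model checking manipulates probability vectors rather than bit vectors, the following sketch computes reachability probabilities in a small discrete-time Markov chain by fixed-point iteration. The chain and the goal state are invented for the example; this is not code from the survey.

        import numpy as np

        # Invented example chain: state 1 is the goal (absorbing),
        # state 3 is a failure sink (absorbing); rows sum to 1.
        P = np.array([[0.5, 0.3, 0.1, 0.1],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.2, 0.3, 0.3, 0.2],
                      [0.0, 0.0, 0.0, 1.0]])
        x = np.array([0.0, 1.0, 0.0, 0.0])   # Pr(reach goal), pinned to 1 on goal

        # Value iteration on the probability vector: x = P @ x with the goal
        # held at probability 1 and the failure sink held at 0.
        for _ in range(10_000):
            x_new = P @ x
            x_new[1] = 1.0
            x_new[3] = 0.0
            if np.max(np.abs(x_new - x)) < 1e-12:   # convergence check
                break
            x = x_new
        print(x)   # x[0] ~ 0.727: probability of reaching state 1 from state 0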

    • GUI Test Suite Reduction Techniques Based on Event Handler Functions

      2015, 26(8):1871-1885. DOI: 10.13328/j.cnki.jos.004711

      Abstract:GUI test suite reduction is an effective approach to reducing test cost. Due to the mechanics of the message loop and the event-driven characteristics of GUI software, it is difficult to directly apply traditional test suite reduction techniques, such as control-flow based and data-flow based techniques, to GUI test suite reduction. How to eliminate more redundant test cases without losing fault-detection ability remains a great challenge. Combining control-flow and data-flow techniques, this paper proposes three test reduction techniques based on the source code structure of event handler functions and the data dependencies among them. Experimental results show that the two techniques based on data dependencies among event handler functions achieve good results.
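
      For readers unfamiliar with test suite reduction, the sketch below shows the classic greedy, coverage-preserving heuristic that techniques like the above refine. The mapping from test cases to covered event handlers is invented, and this is not the paper's GUI-specific algorithm.

        # Invented coverage data: test case -> covered event handlers.
        suite = {
            "t1": {"h1", "h2"},
            "t2": {"h2", "h3"},
            "t3": {"h1", "h2", "h3"},
            "t4": {"h4"},
        }
        required = set().union(*suite.values())

        reduced, covered = [], set()
        while covered != required:
            # pick the test covering the most still-uncovered handlers
            best = max(suite, key=lambda t: len(suite[t] - covered))
            reduced.append(best)
            covered |= suite.pop(best)
        print(reduced)   # ['t3', 't4'] -- t1 and t2 are redundant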

    • Efficient Service Selection Approach Based on Functionality Folding

      2015, 26(8):1886-1906. DOI: 10.13328/j.cnki.jos.004598

      Abstract:Service is becoming the main carrier of software development because of its flexible application mechanism. With the emergence of a large number of services with the same or similar functionalities but different QoS (quality of service), how to quickly and accurately locate the services users need remains a very challenging task. In this paper, a service selection approach based on the folding of service functionality is proposed. Abstract services with associated functions are folded into a coarse-grained abstract service, on which service discovery and combinatorial optimization are then performed. The efficiency of the algorithm improves dramatically because of the reduced number of both abstract services and candidate services. Experimental results show that the presented approach is more efficient than traditional heuristic algorithms and exhibits better scalability with respect to the number of abstract services and candidate services.
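
      A minimal sketch of the intuition behind folding (all services, QoS utilities and the folded candidates are invented): merging associated abstract services shrinks the number of units, and hence candidate combinations, that the optimizer must evaluate.

        # Two abstract services, two candidates each: 2 x 2 = 4 combinations.
        search = [("s1", 0.9), ("s2", 0.7)]   # (candidate, QoS utility) -- invented
        book   = [("s3", 0.8), ("s4", 0.6)]
        brute_force = [(a, b, ua + ub) for a, ua in search for b, ub in book]
        print(len(brute_force), max(brute_force, key=lambda c: c[2]))

        # After folding "search" and "book" into one coarse-grained abstract
        # service, providers offering the combined functionality are evaluated
        # directly: only 2 candidates instead of 4 combinations.
        folded = [("p1", 1.6), ("p2", 1.5)]
        print(max(folded, key=lambda c: c[1]))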

    • Multi-User Server Program Self-Recovery System

      2015, 26(8):1907-1924. DOI: 10.13328/j.cnki.jos.004685

      Abstract:A long-running multi-user server system may encounter frequent errors resulting in running disruptions, due to the complexity of its program, operating environments and user operations. This creates a need for system self-recovery. The rollback-and-checkpoint scheme is a popular self-recovery strategy in current research and applications, but it has no obvious effect in multi-user systems. In this paper, a VMM-based self-recovery system named VMSRS (virtual machine monitor-self recovery of service program) is designed according to the characteristics of multi-user server programs. The main idea of VMSRS is to treat the VMM as the major component of recovery, taking advantage of the VM as an independent underlying system and hardware resource monitor, and strictly maintaining the consistency and security of user data and the atomicity of data operations. As an improved SRS (self recovery of service program), VMSRS contains errors to avoid affecting normal users in case of a system crash instead of committing a rollback, allowing users and servers to proceed as if no crash had happened. Rollback is avoided by taking advantage of the self-cleansing mechanism of the system and VMSRS. The issues addressed by the VMSRS design include a crash suppression module, a demand-driven restoration module, a monitor module, and a storage management module. Experimental results from analyzing basic function, basic performance and integral function validate that VMSRS can provide favorable security and consistency of user data while guaranteeing performance and committing no rollback. It recovers multi-threaded programs excellently, with no limit on the number of threads. Meanwhile, this exploratory study also contributes to current research on self-recovery systems utilizing virtualization technology.

    • Dominant Statement(s) Selection in Testing Programs with Flag Variables

      2015, 26(8):1925-1936. DOI: 10.13328/j.cnki.jos.004671

      Abstract:Testability transformation based on the dominance relationship, which transforms the problem of covering the target statement into the problem of covering the dominant statement(s) preceding the target statement, can test programs with flag variables. When more than one dominant statement exists, however, there have been no effective methods for selecting the statement subset with the best coverage as the new target(s), which limits the scope of applying the testability transformation method. The problem of selecting dominant statement(s) is investigated in this paper, and a method of choosing dominant statement(s) based on coverage difficulty is presented. First, four indicators for evaluating the coverage difficulty of a statement are presented, along with approaches to calculating them. Then, the dominant statement(s) with the best coverage is (are) chosen using TOPSIS sorting based on the above indicators. Finally, the proposed method is applied to test several benchmarks and industrial programs, and the experimental results show that covering the dominant statement(s) selected by the proposed method can greatly improve the efficiency of generating test data.
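
      The abstract names TOPSIS sorting over four coverage-difficulty indicators. The sketch below is a generic TOPSIS ranking with invented indicator values and equal weights; the paper's actual indicators and weighting are not reproduced here.

        import numpy as np

        # One row per candidate dominant statement, one column per indicator
        # (all values invented for illustration).
        X = np.array([[0.8, 3.0, 0.2, 5.0],
                      [0.6, 2.0, 0.4, 3.0],
                      [0.9, 4.0, 0.1, 6.0]], dtype=float)
        benefit = np.array([True, False, True, False])  # higher-is-better flags
        w = np.array([0.25, 0.25, 0.25, 0.25])          # equal weights (assumption)

        V = w * X / np.linalg.norm(X, axis=0)           # weighted normalized matrix
        ideal = np.where(benefit, V.max(0), V.min(0))   # ideal solution
        worst = np.where(benefit, V.min(0), V.max(0))   # negative-ideal solution
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - worst, axis=1)
        closeness = d_neg / (d_pos + d_neg)             # relative closeness
        print(np.argsort(-closeness))                   # best candidate first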

    • Method of Solving Abductive Reasoning Problem via Hitting Set

      2015, 26(8):1937-1945. DOI: 10.13328/j.cnki.jos.004694

      Abstract:As an important type of reasoning besides induction and deduction, abduction has been widely used in many areas, such as AI. Generally speaking, abductive reasoning is the process of inferring causes from observations. Unlike past research, which uses prime implicates and prime implicants, this study proves that seeking a minimal interpretation of an abductive problem in propositional logic and the propositional modal logic system S5 can be reduced to seeking a minimal hitting set of the corresponding set family, and presents a new method for solving abductive problems.
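
      To make the reduction concrete, the brute-force sketch below enumerates the minimal hitting sets of an invented set family; real abductive problems would of course call for the paper's method rather than exhaustive search.

        from itertools import combinations

        family = [{1, 2}, {2, 3}, {3, 4}]            # invented set family
        universe = sorted(set().union(*family))

        def hits(candidate):
            # a hitting set intersects every member of the family
            return all(candidate & s for s in family)

        hitting = [set(c)
                   for r in range(len(universe) + 1)
                   for c in combinations(universe, r)
                   if hits(set(c))]
        # minimal = no proper subset is itself a hitting set
        minimal = [h for h in hitting if not any(g < h for g in hitting)]
        print(minimal)   # [{1, 3}, {2, 3}, {2, 4}]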

    • C2C E-Commerce Trust Algorithm Based on Trust Network

      2015, 26(8):1946-1959. DOI: 10.13328/j.cnki.jos.004690

      Abstract:In C2C e-commerce systems, transactions are anonymous, random and dynamic. Since transaction information is exchanged between the partners over a virtual network, the partners lack basic trust foundations and there are high risks in the transaction process. One efficient way to reduce transaction risk is to evaluate the seller's trustworthiness and help the buyer make scientific decisions through trust models. From the buyer's perspective, this paper presents a C2C dynamic trust algorithm (CDTA) for the e-commerce environment. The algorithm takes into account attributes of trust and the trust network, such as time sensitivity, the asymmetry of trust, and the transitivity and selectivity of trust propagation paths. First, the direct trustworthiness of the buyer toward the seller is computed from the transaction experience between them. Second, the reference trustworthiness is computed from the buyer's friends in the trust network according to recommendation confidence. Finally, the buyer's trust in the seller is obtained by integrating the direct trustworthiness and the reference trustworthiness with a trust adjusting factor. Experiments show that the granularity of the trust evaluation is finer and the evaluation result more objective than in existing work. Moreover, the similarity review can help the buyer sift out reference nodes matching the buyer's preference, make the reference trustworthiness more credible, and resist attacks from malicious nodes.
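
      A minimal numerical sketch of the three steps described above. The data, the exponential decay and the weighting choices are invented for illustration; this is not the CDTA implementation.

        import math

        def direct_trust(transactions, decay=0.1):
            # transactions: (age_in_days, rating in [0,1]); newer ones weigh more
            num = sum(math.exp(-decay * age) * r for age, r in transactions)
            den = sum(math.exp(-decay * age) for age, _ in transactions)
            return num / den if den else 0.0

        def reference_trust(friends):
            # friends: (recommendation_confidence, friend's trust in the seller)
            den = sum(c for c, _ in friends)
            return sum(c * t for c, t in friends) / den if den else 0.0

        def overall_trust(direct, reference, alpha=0.7):
            # alpha plays the role of the trust adjusting factor
            return alpha * direct + (1 - alpha) * reference

        d = direct_trust([(1, 0.9), (10, 0.4)])        # invented history
        r = reference_trust([(0.8, 0.85), (0.3, 0.2)]) # invented friends
        print(round(overall_trust(d, r), 3))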

    • Fast Complex Valued Dictionary Learning Method for Sparse Representation

      2015, 26(8):1960-1967. DOI: 10.13328/j.cnki.jos.004677

      Abstract:Sparse representation is widely used in signal processing. The best representation is based on an adaptive dictionary trained from the data being processed. This paper proposes a new complex-valued dictionary learning method that casts dictionary learning as an optimization problem and alternates the optimization between the dictionary atoms and the coding. An online training method with memory is used in the optimization of the dictionary atoms, and an instance of the alternating direction method of multipliers (ADMM) is solved in the optimization of the coding. The proposed algorithm is shown to be highly efficient, minimizing the training time while converging to the optimal value. The presented method is also robust to noise in the training set.
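
      The sketch below illustrates the general shape of such alternating schemes on complex-valued data: iterative soft-thresholding for the sparse codes and a least-squares update for the atoms. It is a generic baseline with synthetic data and invented sizes, not the authors' online/ADMM formulation.

        import numpy as np

        rng = np.random.default_rng(0)                 # synthetic training data
        Y = rng.standard_normal((16, 100)) + 1j * rng.standard_normal((16, 100))
        D = rng.standard_normal((16, 32)) + 1j * rng.standard_normal((16, 32))
        D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
        X = np.zeros((32, 100), dtype=complex)         # sparse codes
        lam = 0.1

        for _ in range(30):
            # coding step: iterative soft-thresholding (shrinks complex magnitudes)
            L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of grad
            for _ in range(10):
                G = X - (D.conj().T @ (D @ X - Y)) / L
                X = G * np.maximum(1 - lam / (L * np.maximum(np.abs(G), 1e-12)), 0)
            # atom step: least-squares fit of D to Y given X, then renormalize
            D = Y @ np.linalg.pinv(X)
            D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
        print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))   # relative error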

    • Symbolic Model Checker for Propositional Projection Temporal Logic

      2015, 26(8):1968-1982. DOI: 10.13328/j.cnki.jos.004689

      Abstract:The formal specification languages of existing model checking tools, such as computation tree logic (CTL) and linear temporal logic (LTL), are not powerful enough to describe ω-regular properties, in that those properties cannot be verified with them. In this study, a propositional projection temporal logic (PPTL) symbolic model checker (PLSMC) is designed and implemented by integrating the symbolic model checking algorithm for PPTL from the authors' previous work into the acclaimed symbolic model checking system NuSMV. As PPTL has the expressive power of full regular expressions, both qualitative and quantitative properties can be verified with PLSMC. Moreover, PLSMC is an effective model checking tool for tackling the state space explosion problem. Finally, safety and iterative properties of a railway and highway crossing guardrail control system are checked with PLSMC. Experimental results show that the presented symbolic model checker for PPTL extends the verification functionality of the NuSMV system such that state-sensitive, concurrent and periodic properties can be specified and verified with PPTL.

    • Energy Balance of Mixed Data Transmission in 1D Sensor Networks

      2015, 26(8):1983-2006. DOI: 10.13328/j.cnki.jos.004710

      Abstract:The energy hole is an inherent problem in multi-hop sensor networks. It may cause the early death of some nodes and result in a short network lifetime. Mixed data transmission, which randomly propagates data one hop or two hops away in each step, has been developed for energy balancing. The performance of this scheme depends heavily on the setting of the transmission probabilities. However, no general rules have been proposed to guide the calculation of these probabilities, and little study has been done on whether the energy consumption of all nodes can be balanced by this scheme, especially under the constraint of limited communication ranges. This paper formulates energy balancing as an optimal transmission probability allocation problem. It reveals that the transmission probability is mainly determined by the location of each node; however, the probability values become invalid if the network size exceeds a threshold. This work theoretically investigates the energy balance conditions and presents guidelines for allocating the transmission probabilities. It proves that global energy balance can be achieved if and only if the network size is not greater than a threshold n0. It further reveals that n0 depends only on the communication profile of the network, indicated by a newly discovered parameter defined as the premium power ratio of the system. Finally, it extends the two-hop based mixed data transmission scheme to a general model and investigates the impact of combinations of transmission ranges on energy balancing. Comprehensive simulations are conducted to validate the energy balance conditions. Both the numerical results and the theoretical analysis confirm that global energy balance can be achieved if the transmission probabilities are allocated according to the proposed rules.
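
      A Monte-Carlo sketch of the scheme on a line of equally spaced nodes (the network size, energy exponent and probabilities are all invented): transmitting over distance d costs d**alpha, so two-hop jumps cost more per send but relieve downstream relays, and the per-node totals show how the probabilities shift the load.

        import random

        n, alpha, rounds = 10, 2, 2_000
        p_two = [0.3] * (n + 1)          # per-node probability of a 2-hop jump
        energy = [0.0] * (n + 1)         # index 0 is the sink

        for _ in range(rounds):
            for src in range(1, n + 1):  # every node generates one packet
                pos = src
                while pos > 0:
                    two = pos >= 2 and random.random() < p_two[pos]
                    hop = 2 if two else 1
                    energy[pos] += hop ** alpha   # cost of one transmission
                    pos -= hop
        print([round(e / rounds, 2) for e in energy[1:]])   # average per round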

    • Formalized Descriptions of Dynamic Reorganizations of Multi-Subnet Composited Complex Network Based on Vector Space

      2015, 26(8):2007-2019. DOI: 10.13328/j.cnki.jos.004697

      Abstract:Classical complex networks mainly describe a single type of entity and one type of interrelation between entities. The multi-subnet composited network is a model that describes different types of entities and multiple types of interrelations between them. Dynamic reorganization of this model provides two operations: compounding (combining two subnets into a ‘bigger’ one) and reducing (obtaining a ‘smaller’ network from a ‘big’ one). In this paper, a vector-composited network is defined by introducing a multi-dimensional space, which converts the interrelations between entities into multi-dimensional vectors. Dynamic reorganization of networks is thereby converted into basis transformations in the multi-dimensional space, and formalized descriptions of compounding and reducing are presented. Further, a vector-composited network of passenger transport over high-speed and low-speed railways in mainland China is established from empirical data. Topological analysis of the networks obtained by dynamic reorganizations illustrates the development of the railway system in mainland China.

    • Model and Algorithm of Local and On-Demand Maintenance of Clusters in Sensing Layer of the Internet of Things

      2015, 26(8):2020-2040. DOI: 10.13328/j.cnki.jos.004675

      Abstract:The sensing layer of the Internet of things, based on wireless sensor networks, requires intensive maintenance of the involved nodes, which is beyond the conventional whole-network, periodic model of cluster maintenance. That model results in shortcomings such as high maintenance cost, heavy waste of energy, full service interruption, and delayed response to incidents. This paper proposes a new method, local and on-demand maintenance of clusters (LDMC), which carries out maintenance operations only within a restricted range of time and space determined by the damaged clusters. LDMC resolves issues such as when to start maintenance and which nodes or cluster elements to maintain, through triggers, pre-processing and differentiated maintenance operations. The presented scheme helps not only to make critical decisions on the timing of cluster updates, but also to respond in time to changes in network topology or routing caused by disabled or new nodes. As a result, it reduces the impact of incidents on network function, improves network stability, and cuts maintenance cost. Comparison tests on energy consumption, data transmission, load balance and incident response are performed on the NS2 simulation platform, and the results suggest that the proposed method can significantly reduce the energy consumed in cluster maintenance, prolong the network lifetime, and increase the total amount of transferred data packets.

    • Transmission Delay Prediction Based Data Allocation Scheme for Concurrent Multipath Transfer

      2015, 26(8):2041-2055. DOI: 10.13328/j.cnki.jos.004691

      Abstract:The performance of a concurrent multipath transfer (CMT) association degrades remarkably when the transmission capabilities of its paths diverge. Based on an analysis of different network configurations, a transmission delay prediction based data allocation scheme is proposed to distribute data to different paths using a feasible delay measurement mechanism. To reduce the impact of out-of-order packets, the proposed scheme improves the data distribution process within and across transmission rounds by assessing and predicting the arrival time of each packet on each path. Analysis and simulation results reveal that the presented scheme achieves much better performance than the original round-robin scheme.
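
      The sketch below shows one plausible reading of delay-prediction-based allocation (the path parameters are invented and the prediction model is deliberately simple): each packet is assigned to the path with the earliest predicted arrival, so slower paths naturally receive fewer packets and receiver-side reordering shrinks.

        # Per-path state derived from invented RTT/bandwidth figures.
        paths = [
            {"rtt": 0.040, "rate": 100.0, "busy_until": 0.0},   # packets/second
            {"rtt": 0.120, "rate": 50.0,  "busy_until": 0.0},
        ]

        def predicted_arrival(path, now):
            start = max(now, path["busy_until"])    # queue behind in-flight data
            return start + 1.0 / path["rate"] + path["rtt"] / 2

        schedule, now = [], 0.0
        for seq in range(10):
            best = min(paths, key=lambda p: predicted_arrival(p, now))
            arrival = predicted_arrival(best, now)
            best["busy_until"] = max(now, best["busy_until"]) + 1.0 / best["rate"]
            schedule.append((seq, paths.index(best), round(arrival, 3)))
        print(schedule)   # (sequence number, chosen path, predicted arrival)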

    • Data Placement Strategy for MapReduce Cluster Environment

      2015, 26(8):2056-2073. DOI: 10.13328/j.cnki.jos.004807

      Abstract:As an effective programming model for large-scale data-intensive applications, MapReduce has been widely and successfully applied in the field of parallel and distributed computing; it offers good fault tolerance and is easy to implement and extend. Because MapReduce spreads computation across the nodes of a large-scale cluster system, reasonable placement of the data to be processed has become one of the key factors affecting the performance of a MapReduce cluster system, including energy efficiency, resource utilization, communication and I/O throughput, response time, and reliability. This study first analyzes the characteristics of the default data placement strategy of Hadoop, a typical implementation of the MapReduce programming model. Next, it surveys popular data placement strategies for MapReduce cluster computing environments. Finally, it presents future research directions in the area of data placement strategies for MapReduce-based cluster computing systems.
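
      As background for the survey, the sketch below mimics the widely documented default rack-aware replica placement of HDFS (the cluster layout is invented): the first replica on the writer's node, the second on a node in a different rack, and the third on another node of that remote rack.

        import random

        # Invented cluster layout: rack -> nodes.
        racks = {"r1": ["n1", "n2", "n3"], "r2": ["n4", "n5", "n6"]}
        node_rack = {n: r for r, nodes in racks.items() for n in nodes}

        def place_replicas(writer):
            first = writer                                   # replica 1: local node
            remote = random.choice([r for r in racks if r != node_rack[first]])
            second = random.choice(racks[remote])            # replica 2: other rack
            third = random.choice([n for n in racks[remote]  # replica 3: same
                                   if n != second])          #   remote rack
            return [first, second, third]

        print(place_replicas("n1"))   # e.g. ['n1', 'n5', 'n4']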

    • Approach of Quantifying Virtual Machine Performance Interference Based on Hardware Performance Counter

      2015, 26(8):2074-2090. DOI: 10.13328/j.cnki.jos.004709

      Abstract:In IaaS platforms, hardware infrastructures are sliced into multiple virtual machines (VMs) to provide computing capabilities for users. Virtualization greatly improves resource utilization; however, it introduces a potential risk of variation in VM performance. Co-located VMs have a high probability of performance degradation when one of them behaves as a noisy neighbor competing for hardware resources with the other, victim VMs. How to efficiently monitor and quantify this type of performance interference thus becomes a key challenge for IaaS providers. To address this challenge, this study presents an approach that transparently monitors and quantifies VM interference through low-level metrics collected with hardware performance counters (HPCs). The approach explores the information within HPC readings and LLC (last-level cache) miss rates, builds a performance prediction model, and quantifies the performance interference of different (CPU-bound and network-bound) VMs. Experimental results show that the proposed approach can predict performance degradation effectively with an acceptable overhead of less than 10%.
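
      A toy version of the quantification idea (all samples are invented, and the real approach uses more HPC metrics than a single LLC miss rate): fit a model from a low-level metric to observed slowdown, then predict the degradation for a new co-location.

        import numpy as np

        # Invented training samples: LLC miss rate vs. measured slowdown ratio.
        llc_miss_rate = np.array([0.02, 0.05, 0.10, 0.20, 0.30])
        degradation   = np.array([0.01, 0.04, 0.09, 0.22, 0.33])

        # Least-squares fit of: degradation ~ a * miss_rate + b
        A = np.vstack([llc_miss_rate, np.ones_like(llc_miss_rate)]).T
        (a, b), *_ = np.linalg.lstsq(A, degradation, rcond=None)

        new_miss_rate = 0.15
        print(f"predicted degradation: {a * new_miss_rate + b:.2%}")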

    • Energy Consumption Optimization Data Placement Algorithm for MapReduce System

      2015, 26(8):2091-2110. DOI: 10.13328/j.cnki.jos.004802

      Abstract:Driven by big data and cloud computing techniques, the scale of IT expenditure grows continuously and the energy consumption problem has become increasingly urgent. Studies show that low resource usage and long idle times of network nodes are responsible for this problem in large-scale distributed systems. This paper studies energy consumption optimization for MapReduce systems. Traditional optimization approaches employ workload concentration, task live-migration or dynamic power on-off methods. In a MapReduce system, however, a node not only executes tasks but also provides data, so it cannot simply be shut down for energy saving while the tasks running on it are migrated. This paper presents the idea that a good data placement can optimize the energy consumption of a MapReduce system. Based on this idea, the target of energy-optimizing data placement is defined, and a data placement algorithm achieving the target is proved efficient in theory. Finally, three MapReduce systems with different data placement algorithms are deployed on a heterogeneous cluster. Comparing the energy consumption of the three systems under three typical job types (CPU-intensive, I/O-intensive and interactive), the proposed data placement algorithm is shown to optimize the energy consumption of a MapReduce system. The optimization efficiency of the proposed approach is demonstrated both in theory and by experiment, indicating its ability to facilitate energy-aware computing and big data analysis.

    • Energy-Efficient Scheduling for Real-Time Tasks by Rolling-Horizon Optimization in Virtualized Clouds

      2015, 26(8):2111-2123. DOI: 10.13328/j.cnki.jos.004670

      Abstract:Nowadays, energy saving has become a focus in deploying clouds. Developing energy-aware cloud data centers can not only reduce electricity costs but also improve system reliability. Existing scheduling algorithms developed for energy-aware cloud data centers commonly lack consideration of task-level scheduling. To address this issue, this paper proposes a novel rolling-horizon scheduling architecture for real-time task scheduling, together with a detailed task energy consumption model. Based on this architecture, the work develops an energy-aware scheduling algorithm, EARH, for real-time aperiodic tasks. EARH employs a rolling-horizon optimization policy and can be extended to integrate other energy-aware scheduling algorithms. Strategies for scaling resources up and down are also presented to strike a good trade-off between task schedulability and energy saving. Extensive experiments are conducted to validate the superiority of EARH by comparing it with three baselines. The results show that EARH significantly improves scheduling quality over the others and is suitable for real-time task scheduling in virtualized cloud data centers.
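
      A much-simplified sketch of the rolling-horizon idea with scale-up (the workload is invented and this is not the EARH algorithm itself): tasks in the horizon are taken in deadline order, and a new host is activated only when no active host can meet a task's deadline.

        # Invented workload: (arrival, execution_time, deadline).
        tasks = [
            (0.0, 4.0, 6.0), (0.5, 3.0, 5.0), (1.0, 2.0, 9.0), (1.5, 5.0, 12.0),
        ]
        hosts = [0.0]                      # per-host time when it becomes free

        for arrival, exe, deadline in sorted(tasks, key=lambda t: t[2]):  # EDF
            # earliest finish over currently active hosts
            finish, idx = min((max(h, arrival) + exe, i)
                              for i, h in enumerate(hosts))
            if finish > deadline:          # infeasible on active hosts: scale up
                hosts.append(0.0)
                idx, finish = len(hosts) - 1, arrival + exe
            hosts[idx] = finish
            print(f"task(dl={deadline}) -> host {idx}, finishes at {finish}")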

    • Transparent Privacy Protection Based on Virtual Machine Monitor

      2015, 26(8):2124-2137. DOI: 10.13328/j.cnki.jos.004684

      Abstract:OS kernel vulnerabilities are usually exploited by attackers to execute arbitrary code with kernel privilege (i.e., return-to-user attacks, ret2user) and to steal other processes' private data. In this paper, a transparent OS kernel memory access mediator based on a VMM (virtual machine monitor) is proposed, and a non-bypassable, low-overhead memory page tracker is provided to obtain memory usage information in real time. Combined with a safe loader, the new method guarantees the code integrity of dynamic shared objects at run time. It also ensures that, even when the OS kernel is compromised, the application's private data in memory remains safe. A prototype is implemented on Linux, and evaluation experiments show that it incurs only about 6%~10% performance overhead for most SPEC benchmark tests.

    • Approach to Performance Optimization of Mashup Operation

      2015, 26(8):2138-2154. DOI: 10.13328/j.cnki.jos.004733

      Abstract:In recent years, a new type of application called “Mashup” has developed on the Internet. These applications help end-users collect and operate on distributed data sources on the Internet. However, research on the performance optimization of Mashup operations is lacking. To address this shortfall, this paper proposes a new approach, POMO, for the performance optimization of Mashup operations. POMO makes progress in three areas. First, it implements dynamic cache point selection through a tradeoff between operation cost and caching benefit. Second, it implements cache point reuse via a B+ tree index. Third, it implements cache point updates through incremental transmission. Experimental results show that POMO reduces the cost of Mashup operations and improves their performance in dynamic environments.
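
      A toy rendering of the cache point tradeoff in the first step (operation costs and reuse counts are invented): an operation's result is worth caching when the cost saved across expected reuses outweighs the cost of keeping the cached result up to date.

        # Invented operations: (name, cost_per_run, expected_reuses, update_cost).
        operations = [
            ("fetch_feed",  50.0, 8, 60.0),
            ("join_tables", 30.0, 2, 90.0),
            ("geocode",     80.0, 5, 100.0),
        ]

        cache_points = [name for name, cost, reuses, upkeep in operations
                        if cost * reuses > upkeep]     # positive net benefit
        print(cache_points)   # ['fetch_feed', 'geocode']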

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825 / CN 11-2560/TP
  • Domestic price: CNY 70