• Volume 24, Issue 7, 2013 Table of Contents
    • Rich Client Middleware for Runtime Self-Adaptation

      2013, 24(7):1419-1435. DOI: 10.3724/SP.J.1001.2013.04319

      Abstract: As the Internet has grown rapidly, the rich Internet application (RIA) has become the mainstream application style, since it uses local storage and computing resources more effectively and therefore delivers a better user experience. The client side of an RIA, i.e., the rich client, can be thought of as a sub-application that adopts the model-view-controller (MVC) architectural style and runs on the client-side runtime. With the development of mobile devices and web browsers, there are large differences among client-side runtimes. However, a rich client cannot predict the characteristics of its runtime due to the open nature of the Internet, and therefore suffers from heterogeneous runtimes. This paper proposes a rich client middleware for runtime self-adaptation. The middleware provides an MVC component model and decomposes the problem of heterogeneous runtimes into three sub-problems: heterogeneous storage runtimes, heterogeneous computation runtimes, and heterogeneous display runtimes. The middleware encapsulates a series of common self-adaptation services that solve these problems correspondingly.
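
      The storage self-adaptation service can be pictured as a capability-probing adapter that hides whichever persistence mechanism the client runtime actually provides. The Python sketch below illustrates only that adapter idea; the middleware itself targets browser runtimes, and the class and method names here (AdaptiveStorage, available, etc.) are hypothetical.

```python
class MemoryStore:
    """Fallback store used when no persistent runtime storage is available."""
    def __init__(self):
        self._data = {}
    def available(self):
        return True
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value


class AdaptiveStorage:
    """Picks the first storage backend whose capability probe succeeds."""
    def __init__(self, backends):
        self._backend = next(b for b in backends if b.available())
    def get(self, key):
        return self._backend.get(key)
    def put(self, key, value):
        self._backend.put(key, value)


# Usage: a rich client would register runtime-specific backends
# (e.g. Web Storage, cookies); here only the in-memory fallback exists.
store = AdaptiveStorage([MemoryStore()])
store.put("draft", "hello")
print(store.get("draft"))  # -> hello
```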

    • Semantics-Configurable Model Transformation

      2013, 24(7):1436-1454. DOI: 10.3724/SP.J.1001.2013.04333

      Abstract: Model transformation is a vital technique in MDA. In a complex model-driven development process, it is often necessary to employ multiple transformation languages, along with their corresponding tools, to develop a set of model transformations. This increases the learning cost and also leads to compatibility problems. This paper proposes a technique of semantics-configurable model transformation, which enables developers to solve different problems with one transformation language by redefining the semantics of that language. First, a set of common primitive actions is proposed. Then, TSS, an OCL-based scripting language, is employed to specify the logic of a primitive action. Finally, the paper discusses the completeness, expressiveness, and complexity of this approach, and evaluates the approach with case studies.

    • Automatic Breakpoint Generating Approach Based on Minimum Debugging Frontier Set

      2013, 24(7):1455-1468. DOI: 10.3724/SP.J.1001.2013.04310

      Abstract: Even today, debugging still takes almost 70% of the time in software engineering. The conventional debugging process, based on setting breakpoints and inspecting the program states at them, remains the most common and useful way to locate faults. The efficiency of debugging varies greatly, since both the selection and the inspection of breakpoints are left to programmers. This paper presents a novel breakpoint-generating approach based on a new concept named the minimum debugging frontier set (MDFS). A debugging frontier set is a set of trace points with the ability to isolate the bug, and an MDFS is such a set of minimum size. Owing to this bug-isolation ability, the suspicious domain of the error is always narrowed down to one side of the MDFS, whether or not the state at the MDFS is proven correct. Breakpoints generated on the basis of an MDFS also minimize the number of statement instances to be inspected at each breakpoint. The paper also establishes a set of criteria for judging the quality of breakpoints. Empirical results indicate that breakpoints generated by this approach not only require low inspection cost but also accelerate debugging, and that the MDFS-based debugging prototype outperforms state-of-the-art fault-localization techniques on the Siemens suite.
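
      The bug-isolation property of a debugging frontier can be read as a bisection over the execution trace: if the state at the frontier is correct, the fault lies after it; otherwise it lies at or before it. The sketch below illustrates only that narrowing idea, with a single frontier point and a hypothetical state_correct_at oracle; it is not the MDFS computation of the paper.

```python
def bisect_trace(trace, state_correct_at):
    """Narrow the suspicious region of an execution trace by repeatedly
    checking the state at a frontier point (a single-point stand-in for
    a minimum debugging frontier set)."""
    lo, hi = 0, len(trace) - 1          # fault assumed somewhere in trace[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if state_correct_at(trace[mid]):
            lo = mid + 1                # state still correct: fault is later
        else:
            hi = mid                    # state already wrong: fault is here or earlier
    return trace[lo]

# Toy usage: the "fault" first corrupts state at step 6.
trace = list(range(10))
print(bisect_trace(trace, lambda step: step < 6))   # -> 6
```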

    • Optimization of Configurable Greedy Algorithm for Covering Arrays Generation

      2013, 24(7):1469-1483. DOI: 10.3724/SP.J.1001.2013.04326

      Abstract: Covering array generation is one of the key issues in combinatorial testing, and greedy algorithms are popular for this task due to their ability to deliver smaller covering arrays in shorter time. Many greedy algorithms based on different strategies have been proposed, and most of them can be integrated into a single framework, which forms a configurable greedy algorithm. Many new algorithms can be developed within this framework; however, deploying and optimizing the framework, which is affected by multiple factors, to construct covering arrays more efficiently is a new challenge. This paper designs three different experiments under the framework with six decisions, systematically explores the influence of each decision and the interactions among them, finds the best configuration for generating smaller covering arrays, and provides theoretical and practical guidelines for the design and optimization of greedy algorithms.
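
      A typical instantiation of the configurable greedy framework is an AETG-style "one test at a time" strategy for pairwise coverage, where each new row is the sampled candidate covering the most still-uncovered value pairs. The sketch below is a deliberately simplified version of that strategy, not one of the tuned configurations studied in the paper.

```python
from itertools import combinations
import random

def greedy_pairwise(factors, candidates_per_row=20, seed=0):
    """factors: list of value lists, e.g. [[0, 1], [0, 1, 2], [0, 1]]."""
    rng = random.Random(seed)
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(factors)), 2)
                 for a in factors[i] for b in factors[j]}
    rows = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(candidates_per_row):          # sample candidate rows
            row = [rng.choice(levels) for levels in factors]
            gain = len({((i, row[i]), (j, row[j]))
                        for i, j in combinations(range(len(row)), 2)} & uncovered)
            if gain > best_gain:
                best, best_gain = row, gain
        if best_gain == 0:                            # guarantee progress
            (i, a), (j, b) = next(iter(uncovered))
            best[i], best[j] = a, b
        rows.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return rows

print(greedy_pairwise([[0, 1], [0, 1, 2], [0, 1]]))
```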

    • Fault Localization Method Using Information Quantity

      2013, 24(7):1484-1494. DOI: 10.3724/SP.J.1001.2013.04294

      Abstract: Fault localization is an important step in debugging. It makes use of information from the source code and from testing to locate bugs in software. This paper presents a method based on the information quantity of testing events, called SIQ (suspiciousness based on information quantity). SIQ adjusts the fault localization procedure dynamically according to the testing events and the execution information of statements. In a series of experiments and analyses, SIQ shows excellent stability on multiple data sets and achieves better localization accuracy than several existing methods.
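
      SIQ's exact scoring function is not reproduced here; the sketch below only illustrates the general flavour of combining coverage spectra with an information-quantity term, using a made-up score in which rarely executed statements that appear in failing runs rank highest.

```python
import math

def spectrum_scores(coverage, outcomes):
    """coverage: list of sets of executed statement ids, one per test.
    outcomes: list of booleans, True = test passed.
    Returns a hypothetical information-flavoured suspiciousness per statement."""
    statements = set().union(*coverage)
    n = len(coverage)
    scores = {}
    for s in statements:
        runs = [i for i in range(n) if s in coverage[i]]
        fails = sum(1 for i in runs if not outcomes[i])
        p_exec = len(runs) / n                      # how often s is executed at all
        info = -math.log2(p_exec) if p_exec else 0  # rare statements carry more information
        scores[s] = (fails / max(len(runs), 1)) * (1 + info)
    return scores

cov = [{1, 2, 3}, {1, 3}, {1, 2, 3, 4}]
ok = [True, True, False]
print(sorted(spectrum_scores(cov, ok).items(), key=lambda kv: -kv[1]))  # statement 4 first
```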

    • Calculus Using Fitness Testing for Method Redirection

      2013, 24(7):1495-1511. DOI: 10.3724/SP.J.1001.2013.04269

      Abstract: Some context-oriented programming languages use block-structured constructs to redirect method invocations. However, these constructs do not support dynamic layer addition and activation, which increases the binary size of the program. To address this problem, this paper proposes a new approach that uses fitness testing to support method redirection, and presents a runtime fitness testing calculus, an extension of the Featherweight Java calculus with context-based method resolution and object transformation, to describe the approach. Based on the calculus, the paper analyzes the influence of the approach on the type safety of the program, imposes constraints on it, and formally proves that it preserves type safety when these constraints are satisfied, which validates the approach. The paper also describes an implementation of the approach and assesses it, showing that the approach is feasible. The calculus and the implementation illustrate how to extend Java-like languages to support context-based, type-safe runtime behavior adjustment.

    • Performance Prediction Method for UML Software Architecture and Its Automation

      2013, 24(7):1512-1528. DOI: 10.3724/SP.J.1001.2013.04268

      Abstract: The software performance requirement, as an important part of software quality requirements, is of great concern. Traditional development methods that address software performance issues late in the development process bring high risks and high costs. If the performance of a software architecture can be predicted at the early phases of the development cycle, performance bottlenecks can be found in advance and possible optimizations can be worked out. This paper introduces a model-based performance prediction method for UML software architectures. The method selects use case diagrams, activity diagrams and component diagrams, and extends them into UML SPT (schedulability, performance and time) models by introducing stereotypes and tagged values. It then transforms these UML SPT models into a queueing network model through an algorithm that can handle activity diagrams containing both branch and merge nodes. Finally, it solves the queueing network model with frequency-domain analysis to derive the performance parameters and performance bottlenecks. The design of an automatic performance analysis tool for UML software architectures is also introduced, and an instance of performance prediction is given.
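
      Once the annotated UML models have been mapped to a queueing network, standard formulas yield per-station utilization and response time. The sketch below applies plain M/M/1 formulas to a tandem of stations as a minimal illustration; the paper's frequency-domain solution and its treatment of branch/merge nodes are not reproduced.

```python
def open_chain_metrics(arrival_rate, service_rates):
    """M/M/1 metrics for a tandem of stations each visited once per request."""
    metrics = []
    for mu in service_rates:
        rho = arrival_rate / mu                 # utilization, must stay below 1
        if rho >= 1:
            raise ValueError("station saturated: utilization >= 1")
        resp = 1 / (mu - arrival_rate)          # mean response time at the station
        metrics.append({"utilization": rho, "response_time": resp})
    total = sum(m["response_time"] for m in metrics)
    return metrics, total

per_station, end_to_end = open_chain_metrics(arrival_rate=8.0,
                                              service_rates=[20.0, 12.0, 15.0])
print(per_station)
print("end-to-end response time:", end_to_end)
# The most utilized station is the predicted bottleneck.
```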

    • Memetic Immune Algorithm for Multiobjective Optimization

      2013, 24(7):1529-1544. DOI: 10.3724/SP.J.1001.2013.04282

      Abstract: A memetic immune algorithm for multiobjective optimization (MIAMO) is proposed by introducing two types of local search operators: a Pareto-dominance-based descent operator and a differential-evolution-based operator. In MIAMO, the positions of and spatial relations between antibodies in the decision space are used to design the two heuristic local search strategies, with whose assistance the efficiency of the immune multiobjective optimization algorithm can be improved. Experimental results indicate that, compared with four other efficient multiobjective optimization algorithms, MIAMO performs better in approximation, uniformity, and coverage, and converges significantly faster than the immune multiobjective optimization algorithm.
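
      The two local search operators rest on two standard ingredients: a Pareto dominance test and a differential-evolution-style mutation. The snippet below shows only those ingredients in isolation, assuming minimization of all objectives; it is not the full MIAMO procedure.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def de_mutation(population, i, F=0.5, rng=random.Random(0)):
    """DE/rand/1 mutation: perturb antibody i using three other antibodies."""
    r1, r2, r3 = rng.sample([j for j in range(len(population)) if j != i], 3)
    base, a, b = population[r1], population[r2], population[r3]
    return [x + F * (y - z) for x, y, z in zip(base, a, b)]

pop = [[random.random(), random.random()] for _ in range(6)]
print(dominates([0.2, 0.3], [0.4, 0.3]))   # -> True
print(de_mutation(pop, 0))
```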

    • Optimizations of Multi-Aspect Rating Inference Model

      2013, 24(7):1545-1556. DOI: 10.3724/SP.J.1001.2013.04278

      Abstract: This paper addresses the training optimization of multi-aspect rating inference. First, to address the issue of inconsistent rating annotation by review authors, the paper proposes two simple approaches to improving standard rating inference models by optimizing sample selection for training: tolerance-based selection and ranking-loss-based selection. Second, to exploit correlations between ratings across a set of aspects, the paper presents an aspect-oriented collaborative filtering technique to improve rating inference models. Experiments on two publicly available English and Chinese restaurant review data sets demonstrate significant improvements over standard algorithms.
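
      Tolerance-based selection can be pictured as keeping only those training reviews whose annotated rating agrees with a preliminary prediction within some tolerance, so inconsistently annotated samples are dropped. The sketch below shows that filtering step in the abstract; the dummy predictor and the tolerance value are placeholders, not the paper's configuration.

```python
def tolerance_select(samples, predict, tolerance=1.0):
    """Keep samples whose annotated rating is within `tolerance`
    of a preliminary model's prediction (inconsistent annotations dropped)."""
    return [(text, rating) for text, rating in samples
            if abs(predict(text) - rating) <= tolerance]

# Toy usage with a dummy predictor that always guesses 3 stars.
reviews = [("great food", 5), ("ok-ish", 3), ("awful wait", 4)]
print(tolerance_select(reviews, predict=lambda t: 3.0, tolerance=1.0))
# -> keeps ("ok-ish", 3) and ("awful wait", 4)
```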

    • Conformant Planning Based on Reducing Belief States

      2013, 24(7):1557-1570. DOI: 10.3724/SP.J.1001.2013.04289

      Abstract: Conformant planning is usually transformed into a search problem in the space of belief states. This paper proposes a method that improves planning efficiency by reducing the nondeterministic degree of belief states. An enforced hill-climbing algorithm for reducing belief states is presented first, and a method of conformant planning based on reducing belief states is then built on it. A planner named CFF-Lite is designed to implement this idea. The planner includes two enforced hill-climbing phases, used to reduce belief states and to search for the goal, respectively. Before the search phase, the initial belief state is reduced as far as possible to an intermediate state that is much more deterministic, which improves the precision of the heuristic information; the heuristic search phase is then performed. Experimental results show that the CFF-Lite planner decreases the difficulty of conformant planning problems by reducing belief states, and on most of the test problems it outperforms Conformant-FF in both planning efficiency and plan quality.
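
      Enforced hill-climbing escapes plateaus by breadth-first search for any strictly better successor. The skeleton below shows that control structure in generic form; in CFF-Lite the same loop would be run first with a heuristic favouring less nondeterministic belief states and then with a goal-distance heuristic, an arrangement only assumed here for illustration.

```python
from collections import deque

def enforced_hill_climbing(start, successors, h, is_goal, max_expansions=100000):
    """Greedy descent on heuristic h; on a plateau, BFS for a strictly better state."""
    current = start
    while not is_goal(current):
        frontier, seen = deque([current]), {current}
        improved, expansions = None, 0
        while frontier and improved is None:
            state = frontier.popleft()
            for nxt in successors(state):
                expansions += 1
                if expansions > max_expansions:
                    return None                    # give up
                if nxt in seen:
                    continue
                seen.add(nxt)
                if h(nxt) < h(current):
                    improved = nxt                 # first strictly better state wins
                    break
                frontier.append(nxt)
        if improved is None:
            return None                            # search space exhausted
        current = improved
    return current

# Toy usage: walk integers down to 0.
print(enforced_hill_climbing(7, lambda s: [s - 1, s + 1], abs, lambda s: s == 0))
```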

    • Memory Enhanced Dynamic Multi-Objective Evolutionary Algorithm Based on Decomposition

      2013, 24(7):1571-1588. DOI: 10.3724/SP.J.1001.2013.04311

      Abstract: In addition to satisfying several objectives, many real-world problems are dynamic and require the optimization algorithm to continuously track the time-varying Pareto optimal set. This paper proposes a memory-enhanced dynamic multi-objective evolutionary algorithm based on decomposition (denoted dMOEAD-M). Specifically, dMOEAD-M decomposes a dynamic multi-objective optimization problem into a number of dynamic scalar optimization subproblems and optimizes them simultaneously. An improved environment detection operator is presented, and a subproblem-based bunchy memory scheme, which allows the evolutionary algorithm to store good solutions from old environments and reuse them as necessary, is designed to respond to environment changes. Simulation results on eight benchmark problems show that, compared with three other memory-enhanced dynamic multi-objective evolutionary algorithms, the proposed dMOEAD-M not only runs faster, has stronger memory capability, and is more robust, but also finds a much better spread of solutions and converges better near the changing Pareto optimal front.
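
      Decomposition turns a multi-objective problem into scalar subproblems, each defined by a weight vector, commonly via the Tchebycheff scalarization. The snippet below shows only that static decomposition step for two objectives; the environment detection and bunchy memory of dMOEAD-M are not modeled.

```python
def tchebycheff(objs, weights, ideal):
    """Tchebycheff scalarization of an objective vector for one subproblem."""
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

def weight_vectors(n):
    """n evenly spread weight vectors for a bi-objective problem."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

ideal = (0.0, 0.0)
objs = (0.3, 0.8)                       # one candidate solution's objectives
for w in weight_vectors(5):
    print(w, round(tchebycheff(objs, w, ideal), 3))
```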

    • Hybrid Heuristic Online Planning for POMDPs

      2013, 24(7):1589-1600. DOI: 10.3724/SP.J.1001.2013.04318

      Abstract: Many planning tasks of autonomous robots in uncertain environments can be modeled as partially observable Markov decision processes (POMDPs). Although researchers have made impressive progress in designing approximation techniques, developing an efficient planning algorithm for POMDPs is still considered a challenging problem. Previous research has indicated that online planning approaches are promising approximate methods for handling large-scale POMDP domains efficiently, as they make decisions "on demand" instead of proactively for the entire state space. This paper aims to further speed up POMDP online planning by designing a novel hybrid heuristic function, which provides a feasible way to take full advantage of heuristics ignored by current algorithms. On this basis, a new method called hybrid heuristic online planning (HHOP) is implemented. HHOP substantially outperforms state-of-the-art online heuristic search approaches on a suite of POMDP benchmark problems.
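
      Online POMDP planning searches from the current belief state, which is updated after each action and observation by a Bayes filter. The snippet below shows that belief update on a tiny two-state example; the heuristic search itself and the hybrid heuristic of HHOP are beyond this illustration.

```python
def belief_update(belief, action, obs, T, O):
    """Bayes filter: b'(s2) is proportional to O[s2][a][o] * sum_s T[s][a][s2] * b(s)."""
    states = list(belief)
    new_b = {}
    for s2 in states:
        pred = sum(T[s][action][s2] * belief[s] for s in states)
        new_b[s2] = O[s2][action][obs] * pred
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

# Tiny example: two hidden states, one "listen" action, a noisy observation.
T = {"L": {"listen": {"L": 1.0, "R": 0.0}},
     "R": {"listen": {"L": 0.0, "R": 1.0}}}
O = {"L": {"listen": {"hearL": 0.85, "hearR": 0.15}},
     "R": {"listen": {"hearL": 0.15, "hearR": 0.85}}}
print(belief_update({"L": 0.5, "R": 0.5}, "listen", "hearL", T, O))
# -> belief shifts toward state "L"
```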

    • Analysis for Incremental and Decremental Standard Support Vector Machine

      2013, 24(7):1601-1613. DOI: 10.3724/SP.J.1001.2013.04327

      Abstract: Batch implementations of the standard support vector machine (SVM) are inefficient in an online setting because they must be retrained from scratch every time the training set is modified (i.e., when data samples are added or removed). To solve this problem, Cauwenberghs and Poggio proposed an incremental and decremental support vector classification algorithm (the C&P algorithm). This paper proves the feasibility and finite convergence of the C&P algorithm through theoretical analysis. The feasibility ensures that each adjustment step of the C&P algorithm is reliable, and the finite convergence ensures that the algorithm converges to the optimal solution within finitely many steps. Finally, the conclusions of the theoretical analysis are verified by experimental results.

    • Extending Action Languages for Intelligent Service Robot Task Planning

      2013, 24(7):1614-1625. DOI: 10.3724/SP.J.1001.2013.04331

      Abstract: To improve the task planning module of intelligent service robots, an extension of the action language C+ is proposed and implemented by introducing composite actions as sequential executions of other actions. The soundness and completeness of the extension are proved by relating action descriptions in the extended C+ to their corresponding transition systems. In the domain of robotic task planning, a composite action can be treated as a "high-level" abstraction of the robot's physical functions. Such an extension leads to a more intuitive and flexible representation of the robot's task planning system, and knowledge of composite actions can be added to the domain incrementally. Experimental results show that for large domains, the extension greatly improves solving efficiency.

    • Faithful and Full Translations Between Logics

      2013, 24(7):1626-1637. DOI: 10.3724/SP.J.1001.2013.04285

      Abstract: In computer science, an important application of translations between logics is to compare their expressive power and to carry out the reasoning tasks of one logic in another. General properties of translations found in the literature do not give a comprehensive study of semantic translations or of the preservation of unsatisfiability. To preserve both the satisfiability and the unsatisfiability of formulas, this paper gives the definition of faithful and full translations and discusses the connections between this definition and other definitions of translations in the literature. Properties of logics, such as soundness, completeness, decidability, compactness, logical equivalence of formulas and elementary equivalence of models, that are characterized by the existence of faithful and full translations between logics are also studied. Based on the definition of faithful and full translations, the concept of synonymous logics is introduced and a proof that the synonymy relation is an equivalence relation is given.

    • Spectrum Allocation Algorithm Based on User Allocation and Load

      2013, 24(7):1638-1649. DOI: 10.3724/SP.J.1001.2013.04312

      Abstract: A spectrum allocation algorithm based on user allocation and load is proposed; it consists of two parts: a user allocation sub-algorithm and a spectrum allocation sub-algorithm. Based on the theory of cluster partitioning, the user allocation sub-algorithm is designed so that users associated with the same access point have similar signal-to-noise ratios, thereby alleviating the well-known near-far problem of wireless access networks. Then, based on the user allocation results, the spectrum allocation sub-algorithm optimizes the spectrum allocation according to the load of each access point and the mean signal-to-noise ratio of its associated users. The proposed algorithm is practical, has polynomial computational complexity, and achieves proportional fairness. Trace-driven simulations show that the proposed algorithm efficiently improves the system throughput and markedly decreases the length of the packet buffer.
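
      The user allocation sub-algorithm groups users so that those attached to the same access point have similar signal-to-noise ratios. A minimal way to picture this is one-dimensional grouping of SNR values; the sketch below uses a simple sort-and-split and only illustrates the idea, not the paper's cluster-partitioning algorithm.

```python
def split_by_snr(users, n_groups):
    """Partition (user_id, snr) pairs into n_groups of similar SNR,
    so each group can be associated with one access point."""
    ordered = sorted(users, key=lambda u: u[1])
    size = -(-len(ordered) // n_groups)           # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

users = [("u1", 22.0), ("u2", 5.5), ("u3", 19.0), ("u4", 7.0), ("u5", 21.0), ("u6", 6.1)]
for group in split_by_snr(users, 2):
    print(group)
# -> low-SNR users in one group, high-SNR users in the other,
#    easing the near-far problem at each access point.
```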

    • Memory Efficient Algorithm and Architecture for Multi-Pattern Matching

      2013, 24(7):1650-1665. DOI: 10.3724/SP.J.1001.2013.04314

      Abstract: Pattern matching is the core of content-inspection-based network security systems and is widely used in many other applications. In practice, pattern matching methods that handle large-scale pattern sets with stable performance are in great demand, especially architectures for online real-time processing. This paper presents a memory-efficient pattern matching algorithm and architecture for large-scale pattern sets. It first proposes the cached deterministic finite automaton (CDFA) as the underlying model. By classifying the transitions in the pattern DFA, a new algorithm based on CDFA, named ACC, is presented. This algorithm generates cross transitions dynamically and thus saves most of the memory, so that it can support large-scale pattern sets. Furthermore, an architecture based on this method is proposed. Experiments on real pattern sets show that the number of transition rules can be reduced by 80%~95% compared with the most optimized existing algorithms. Meanwhile, it saves 40.7% of the memory space, nearly doubling memory efficiency. The corresponding single-chip architecture achieves a matching throughput of about 11 Gbps.
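
      The starting point for a CDFA is the classical multi-pattern automaton, in which most transitions are "cross transitions" that merely restart matching and can be reconstructed on demand from failure links. The sketch below builds a plain Aho-Corasick goto/failure structure so the explicit transitions can be counted; the caching mechanism and the ACC algorithm itself are not reproduced.

```python
from collections import deque

def build_aho_corasick(patterns):
    """Return (goto, fail, out): goto holds the explicit edges; every other
    transition of the full DFA would be a cross transition derivable from fail."""
    goto, fail, out = [{}], [0], [set()]
    for pattern in patterns:
        state = 0
        for ch in pattern:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pattern)
    queue = deque(goto[0].values())           # breadth-first failure-link construction
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

goto, fail, out = build_aho_corasick(["he", "she", "his", "hers"])
print(len(goto), "states,", sum(len(g) for g in goto), "explicit transitions")
```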

    • MapReduce-Based Large-Scale Online Social Network Worm Simulation

      2013, 24(7):1666-1682. DOI: 10.3724/SP.J.1001.2013.04295

      Abstract: This paper provides an approach for simulating the propagation of online social network (OSN) worms based on MapReduce, a key component of cloud computing. In order to improve the simulation accuracy, the approach describes the phases of the worms' propagation with an OSN directed graph in which each node has its own tunable attributes. The phases are then simulated by different map and reduce functions, which finally run in the cloud environment. Experimental results on real large-scale network datasets show that the simulation approach is scalable and helpful for research on online social network worms.
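
      One propagation round can be expressed as a map over infected nodes that emits infection attempts along outgoing edges, followed by a reduce per target node that decides whether it becomes infected. The sketch below runs such a round in-process with plain Python functions; the activation probability is a stand-in for the tunable node attributes, and no actual MapReduce or cloud runtime is involved.

```python
import random

def map_phase(graph, infected):
    """Map: every infected node emits (target, 1) along each outgoing edge."""
    for node in infected:
        for target in graph.get(node, []):
            yield target, 1

def reduce_phase(emissions, activation_prob, rng):
    """Reduce: a node that received any attempt gets infected with some
    probability (a stand-in for tunable per-node attributes)."""
    attempts = {}
    for target, count in emissions:
        attempts[target] = attempts.get(target, 0) + count
    return {t for t in attempts if rng.random() < activation_prob}

rng = random.Random(42)
graph = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
infected = {"a"}
for step in range(3):
    newly = reduce_phase(map_phase(graph, infected), 0.7, rng) - infected
    infected |= newly
    print(f"round {step + 1}: infected = {sorted(infected)}")
```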

    • Research on Trace-Driven Instruction Cache Timing Attack on RSA

      2013, 24(7):1683-1694. DOI: 10.3724/SP.J.1001.2013.04329

      Abstract: The I-cache timing attack, which exploits the instruction path of a cipher, is one type of side-channel attack. First, after analyzing why previous I-cache timing attacks on the RSA algorithm are hard to put into practice and why the number of inferred bits is insufficient, this paper builds a new trace-driven I-cache timing attack model that spies on the whole I-cache instead of only the cache sets to which a specific function is mapped. Next, an improved analysis algorithm for the exponent, based on the characteristics of the window boundaries in the sliding window exponentiation (SWE) algorithm, is proposed. Finally, an I-cache timing attack on the RSA implementation of OpenSSL v0.9.8f is carried out in a practical environment, using a simultaneous multithreading processor to ensure that the spy process and the cipher process run in parallel. Experimental results show that the proposed attack model has strong applicability in real environments, and that the improved exponent analysis algorithm further reduces the search space of the key bits and improves the effectiveness of the trace-driven I-cache timing attack: for a 512-bit exponent, it recovers about 50 more exponent bits than previous attacks.
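
      The leak exploited by trace-driven I-cache attacks is that the sequence of squarings and multiplications executed during modular exponentiation depends on the secret exponent. The toy square-and-multiply routine below records that operation sequence to make the dependence visible; the attack in the paper targets OpenSSL's sliding-window exponentiation, which is not reproduced here.

```python
def square_and_multiply(base, exponent, modulus):
    """Left-to-right binary exponentiation that also records its operation trace."""
    result, trace = 1, []
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus        # always square
        trace.append("S")
        if bit == "1":
            result = (result * base) % modulus      # multiply only for 1-bits
            trace.append("M")
    return result, trace

value, ops = square_and_multiply(7, 0b101101, 1000003)
print(value == pow(7, 0b101101, 1000003))   # -> True
print("".join(ops))  # positions of "M" reveal which exponent bits are 1
```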

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Telephone: 010-62562563
  • E-mail: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: 70 RMB