ZHAO Qi , LIU Xuan-Zhe , WANG Xu-Dong , HUANG Gang , MEI Hong
2013, 24(7):1419-1435. DOI: 10.3724/SP.J.1001.2013.04319 CSTR:
Abstract:As the Internet has rapidly grown, the rich Internet application (RIA) has become the mainstream type of application, since it uses local storage and computing resources more effectively and therefore offers a better user experience. The client side of an RIA, i.e. the rich client, can be thought of as a sub-application that adopts the "model-view-controller" (MVC) architectural style and runs on a client-side runtime. With the development of mobile devices and Web browsers, there are large differences among client-side runtimes. However, a rich client cannot predict the characteristics of its runtime due to the open nature of the Internet, and therefore suffers from heterogeneous runtimes. This paper proposes a rich client middleware for runtime self-adaptation. The middleware provides an MVC component model and decomposes the problem of heterogeneous runtimes into three sub-problems: heterogeneous storage runtimes, heterogeneous computation runtimes, and heterogeneous display runtimes. The middleware encapsulates a series of common self-adaptation services that solve these problems correspondingly.
HE Xiao , MA Zhi-Yi , WANG Rui-Chao , SHAO Wei-Zhong
2013, 24(7):1436-1454. DOI: 10.3724/SP.J.1001.2013.04333 CSTR:
Abstract:Model transformation is a vital technique of MDA. In a complex model-driven development process, it is often necessary to employ multiple transformation languages, along with their corresponding tools, to develop a set of model transformations. This increases learning costs and also leads to compatibility problems. The paper proposes a technique of semantics-configurable model transformation, which enables developers to solve different problems using one transformation language by redefining the semantics of the language. First, a set of common primitive actions is proposed. Then, TSS, an OCL-based scripting language, is employed to specify the logic of a primitive action. Finally, the paper discusses the completeness, expressiveness, and complexity of this approach, and evaluates the approach with some case studies.
LI Feng , HUO Wei , CHEN Cong-Ming , LI Long , ZHONG Lu-Jie , FENG Xiao-Bing
2013, 24(7):1455-1468. DOI: 10.3724/SP.J.1001.2013.04310 CSTR:
Abstract:Debugging still takes almost 70% of the time in software development. The conventional debugging process, based on setting breakpoints and inspecting the program states at them, remains the most common and useful way to detect faults. The efficiency of debugging differs a lot, as both the selection and the inspection of breakpoints are up to programmers. This paper presents a novel breakpoint-generating approach based on a new concept named minimum debugging frontier sets (MDFS). A debugging frontier set describes a set of trace points that have the ability to isolate bugs, and an MDFS is the one with minimum size. Benefiting from this bug-isolation ability, the error-suspicious domain can always be narrowed down to one side of the MDFS, no matter whether the state of the MDFS is proven correct or not. Breakpoints generated on the basis of an MDFS also minimize the number of statement instances to be inspected at each breakpoint. The paper also establishes a set of criteria to judge the quality of breakpoints. Empirical results indicate that breakpoints generated through this approach not only require low inspection cost, but also improve the efficiency of debugging. The results also show that this MDFS-based debugging prototype performs better than state-of-the-art fault-localization techniques on the Siemens suite.
2013, 24(7):1469-1483. DOI: 10.3724/SP.J.1001.2013.04326 CSTR:
Abstract:Covering array generation is one of the key issues in combinatorial testing, and greedy algorithms are popular due to their ability to deliver smaller covering arrays in shorter time. Many greedy algorithms based on different strategies have been proposed, and most of them can be integrated into a framework, which forms a configurable greedy algorithm. Many new algorithms can be developed within this framework; however, deploying and optimizing the framework, which is affected by multiple factors, to construct more efficient covering arrays is a new challenge. The paper designs three different experiments under the framework with six decisions, systematically explores the influence of each decision and the interactions among them to find the best configuration for generating smaller covering arrays, and provides theoretical and practical guidelines for the design and optimization of greedy algorithms.
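The abstract does not spell out the framework's six decisions; as a hedged point of reference, the sketch below shows the basic one-row-at-a-time greedy strategy that such configurable frameworks build on (the function name and the exhaustive candidate scan, which is only practical for small inputs, are illustrative choices, not the paper's algorithm):

```python
from itertools import combinations, product

def pairwise_covering_array(values):
    """Greedy one-row-at-a-time pairwise covering array construction.

    values: number of levels of each parameter, e.g. [2, 2, 2].
    Returns a list of test rows covering every pair of parameter values.
    """
    k = len(values)
    # All value pairs ((i, a), (j, b)) with i < j that must be covered.
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(k), 2)
                 for a in range(values[i]) for b in range(values[j])}
    rows = []
    while uncovered:
        best_row, best_gain = None, -1
        # Greedy choice: the candidate row covering the most uncovered pairs.
        # (Real frameworks restrict this scan; exhaustive is fine for a demo.)
        for row in product(*(range(v) for v in values)):
            gain = sum(1 for i, j in combinations(range(k), 2)
                       if ((i, row[i]), (j, row[j])) in uncovered)
            if gain > best_gain:
                best_row, best_gain = row, gain
        rows.append(best_row)
        for i, j in combinations(range(k), 2):
            uncovered.discard(((i, best_row[i]), (j, best_row[j])))
    return rows
```

Decisions such as how candidate rows are generated, how ties are broken, and in what order parameters are fixed are exactly the kind of configuration points the paper's experiments vary.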
DING Hui , CHEN Lin , QIAN Ju , XU Lei , XU Bao-Wen
2013, 24(7):1484-1494. DOI: 10.3724/SP.J.1001.2013.04294 CSTR:
Abstract:Fault localization is an important step in debugging. It makes use of information from the source code and testing to locate bugs in software. This paper presents a method using the information quantity of testing events, called SIQ (suspiciousness based on information quantity). SIQ adjusts the fault-localization procedure dynamically according to the testing events and the execution information of statements. In a series of experiments and analyses, SIQ exhibits excellent stability on multiple data sets and has better localization accuracy than several existing methods.
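The abstract does not give the SIQ formula itself; as a baseline for the kind of spectrum-based ranking SIQ improves on, here is a minimal sketch using the classic Ochiai metric (a well-known existing technique, shown here only for orientation):

```python
import math

def ochiai(ef, ep, nf):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep)),
    where ef/ep = failing/passing tests executing the statement,
    and nf = failing tests not executing it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def rank_statements(coverage, failed):
    """coverage[t][s] = 1 if test t executes statement s;
    failed[t] = True if test t fails.
    Returns (score, statement) pairs, most suspicious first."""
    scores = []
    for s in range(len(coverage[0])):
        ef = sum(1 for t, row in enumerate(coverage) if row[s] and failed[t])
        ep = sum(1 for t, row in enumerate(coverage) if row[s] and not failed[t])
        nf = sum(1 for t, row in enumerate(coverage) if not row[s] and failed[t])
        scores.append((ochiai(ef, ep, nf), s))
    return sorted(scores, reverse=True)
```

A statement executed by all failing tests and no passing test gets the maximum score of 1.0; SIQ differs in weighting these execution events by their information quantity.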
ZHAO Yin-Liang , ZHU Chang-Peng , HAN Bo , ZENG Qing-Hua
2013, 24(7):1495-1511. DOI: 10.3724/SP.J.1001.2013.04269 CSTR:
Abstract:Some context-oriented programming languages use block-structured constructs to redirect method invocation. However, these constructs do not support dynamic layer addition and activation, which increases the binary size of the program. To address this problem, this paper proposes a new approach that uses fitness testing to support method redirection, and presents a runtime fitness testing calculus, an extension of the Featherweight Java calculus with context-based method resolution and object transformation, to describe the approach. Based on the calculus, the paper analyzes the influence of the approach on the type safety of the program, imposes constraints on it, and formally proves that the approach preserves type safety when these constraints are satisfied, which validates the approach. The paper also describes an implementation of the approach and assesses it, indicating that the approach is feasible. The calculus and the implementation illustrate how to extend Java-like languages to support context-based, type-safe runtime behavior adjustment.
LI Chuan-Huang , WANG Wei-Ming , SHI Yin-Yan
2013, 24(7):1512-1528. DOI: 10.3724/SP.J.1001.2013.04268 CSTR:
Abstract:Software performance, as an important part of software quality requirements, is of great concern. Traditional software development methods that address performance issues late in the development process bring high risks and high costs. If the performance of a software architecture can be predicted in the early phases of the development cycle, performance bottlenecks can be found in advance and possible optimizations can be worked out. In this paper, a model-based performance prediction method for UML software architectures is introduced. This method selects use case diagrams, activity diagrams, and component diagrams, and extends them to the UML SPT (schedulability, performance and time) model by introducing stereotypes and tagged values. It then transforms these UML SPT models into a queueing network model through an algorithm that can handle activity diagrams with both branch nodes and confluence nodes. Finally, it solves the queueing network model with frequency-domain analysis theory to derive the performance parameters and performance bottlenecks. The design of an automatic performance analysis tool for UML software architectures is also introduced, and an instance of performance prediction is given.
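The abstract's frequency-domain solution technique is not detailed; to illustrate the kind of performance parameters a queueing network model yields, here is a minimal sketch using the standard M/M/1 formulas for a single open station (generic textbook formulas, not the paper's algorithm):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 open-queue metrics.

    Returns (utilization rho, mean number in system L, mean response time W).
    Little's law holds: L = arrival_rate * W.
    """
    rho = arrival_rate / service_rate
    assert rho < 1, "queue is unstable when utilization >= 1"
    L = rho / (1 - rho)
    W = 1 / (service_rate - arrival_rate)
    return rho, L, W

def bottleneck(stations, arrival_rate):
    """stations: {name: service_rate}. The bottleneck is the station
    with the highest utilization at the given arrival rate."""
    return max(stations, key=lambda s: arrival_rate / stations[s])
```

In a full queueing network, per-station visit counts derived from the activity diagram's branch probabilities would scale each station's arrival rate before these formulas apply.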
QI Yu-Tao , LIU Fang , CHANG Wei-Yuan , MA Xiao-Liang , JIAO Li-Cheng
2013, 24(7):1529-1544. DOI: 10.3724/SP.J.1001.2013.04282 CSTR:
Abstract:A memetic immune algorithm for multiobjective optimization (MIAMO) is proposed by introducing two types of local search operators: a Pareto-dominance-based descent operator and a differential-evolution-based operator. In MIAMO, the positions of and spatial relations between antibodies in the decision space are used to design the two heuristic local search strategies, with the assistance of which the efficiency of the immune multiobjective optimization algorithm can be improved. Experimental results indicate that, compared with four other efficient multiobjective optimization algorithms, MIAMO performs better in approximation, uniformity, and coverage. It also converges significantly faster than the immune multiobjective optimization algorithm.
2013, 24(7):1545-1556. DOI: 10.3724/SP.J.1001.2013.04278 CSTR:
Abstract:This paper addresses the training optimization of multi-aspect rating inference. First, to address the issue of authors' inconsistent rating annotations, this paper proposes two simple approaches to improving the standard rating inference models by optimizing sample selection for training: a tolerance-based selection method and a ranking-loss-based selection method. Second, to exploit correlations between ratings across a set of aspects, this paper presents an aspect-oriented collaborative filtering technique to improve rating inference models. Experiments on two publicly available English and Chinese restaurant review data sets demonstrate significant improvements over standard algorithms.
WEI Wei , OUYANG Dan-Tong , LÜ Shuai
2013, 24(7):1557-1570. DOI: 10.3724/SP.J.1001.2013.04289 CSTR:
Abstract:Conformant planning is usually transformed into a search problem in the space of belief states. In this paper, a method that improves planning efficiency by reducing the nondeterministic degree of belief states is proposed. An enforced hill-climbing algorithm for reducing belief states is presented first. Then, a conformant planning method based on reducing belief states is proposed. A planner named CFF-Lite is designed to implement this idea. The planner includes two enforced hill-climbing phases, which are used to reduce belief states and to search for the goal, respectively. Before the search phase, the initial belief state is reduced as far as possible to an intermediate state that is much more deterministic. As a result, the precision of the heuristic information is improved before the heuristic search phase is performed. Experimental results show that the CFF-Lite planner decreases the difficulty of conformant planning problems by reducing belief states, and on most of the test problems this method outperforms Conformant-FF in both planning efficiency and planning quality.
2013, 24(7):1571-1588. DOI: 10.3724/SP.J.1001.2013.04311 CSTR:
Abstract:In addition to the need to satisfy several objectives, many real-world problems are also dynamic and require the optimization algorithm to continuously track the time-varying Pareto-optimal set. This paper proposes a memory-enhanced dynamic multi-objective evolutionary algorithm based on decomposition (denoted dMOEAD-M). Specifically, dMOEAD-M decomposes a dynamic multi-objective optimization problem into a number of dynamic scalar optimization subproblems and optimizes them simultaneously. An improved environment detection operator is presented. Also, a subproblem-based bunchy memory scheme, which allows the evolutionary algorithm to store good solutions from old environments and reuse them as necessary, is designed to respond to environment changes. Simulation results on eight benchmark problems show that the proposed dMOEAD-M not only runs faster, with better memory capability and robustness, but is also able to find a much better spread of solutions and converge better near the changing Pareto-optimal front, compared with three other memory-enhanced dynamic evolutionary multi-objective optimization algorithms.
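The decomposition step that dMOEAD-M inherits from MOEA/D is commonly done with the weighted Tchebycheff scalarization; the sketch below illustrates that mechanism for a bi-objective problem (a generic MOEA/D-style sketch, not the paper's full algorithm; helper names are illustrative):

```python
def tchebycheff(f, weight, z_star):
    """Weighted Tchebycheff scalarization used in MOEA/D-style decomposition:
    g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|,
    where z* is the ideal point (best value seen for each objective)."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, z_star))

def better_for_subproblem(f_new, f_old, weight, z_star):
    """A new solution replaces the old one on a subproblem if it scores lower."""
    return tchebycheff(f_new, weight, z_star) < tchebycheff(f_old, weight, z_star)

def uniform_weights(n):
    """n evenly spread weight vectors for a bi-objective problem;
    each weight vector defines one scalar subproblem."""
    return [((n - 1 - i) / (n - 1), i / (n - 1)) for i in range(n)]
```

In the dynamic setting, each subproblem keeps its weight vector across environment changes, which is what makes a per-subproblem memory of past good solutions meaningful.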
ZHANG Zong-Zhang , CHEN Xiao-Ping
2013, 24(7):1589-1600. DOI: 10.3724/SP.J.1001.2013.04318 CSTR:
Abstract:Many planning tasks of autonomous robots in uncertain environments can be modeled as partially observable Markov decision processes (POMDPs). Although researchers have made impressive progress in designing approximation techniques, developing an efficient planning algorithm for POMDPs is still considered a challenging problem. Previous research has indicated that online planning approaches are promising approximate methods for handling large-scale POMDP domains efficiently, as they make decisions "on demand" instead of proactively for the entire state space. This paper aims to further speed up POMDP online planning by designing a novel hybrid heuristic function, which provides a feasible way to take full advantage of heuristics ignored by current algorithms. The research implements a new method called hybrid heuristic online planning (HHOP). HHOP substantially outperforms state-of-the-art online heuristic search approaches on a suite of POMDP benchmark problems.
GU Bin , ZHENG Guan-Sheng , WANG Jian-Dong
2013, 24(7):1601-1613. DOI: 10.3724/SP.J.1001.2013.04327 CSTR:
Abstract:Batch implementations of the standard support vector machine (SVM) are inefficient in an online setting because they must be retrained from scratch every time the training set is modified (i.e., when data samples are added or removed). To solve this problem, Cauwenberghs and Poggio proposed an incremental and decremental support vector classification algorithm (the C&P algorithm). This paper proves the feasibility and finite convergence of the C&P algorithm through theoretical analysis. The feasibility ensures that each adjustment step in the C&P algorithm is reliable, and the finite convergence ensures that the C&P algorithm converges to the optimal solution within finitely many steps. Finally, the conclusions of the theoretical analysis are verified by experimental results.
JIN Guo-Qiang , CHEN Xiao-Ping
2013, 24(7):1614-1625. DOI: 10.3724/SP.J.1001.2013.04331 CSTR:
Abstract:To improve the task planning module of intelligent service robots, an extension of the action language C+ is proposed and implemented by introducing composite actions as sequential executions of other actions. The soundness and completeness of the extension are proved by relating the action description in the extended C+ to its corresponding transition system. In the domain of robotic task planning, a composite action can be treated as a "high-level" abstraction of the robot's physical functions. Such an extension leads to a more intuitive and flexible representation in the robot's task planning system, and knowledge of composite actions can be added to the domain incrementally. The experimental results show that for large domains, the extension greatly improves solving efficiency.
SHEN Yu-Ming , MA Yue , CAO Cun-Gen , SUI Yue-Fei , WANG Ju
2013, 24(7):1626-1637. DOI: 10.3724/SP.J.1001.2013.04285 CSTR:
Abstract:In computer science, an important application of translations between logics is comparing their expressive power, so that reasoning tasks of one logic can be carried out in another. General properties of translations found in the literature do not give a comprehensive study of semantic translations and the preservation of unsatisfiability. To preserve both the satisfiability and the unsatisfiability of formulas, this paper gives the definition of faithful and full translations, and discusses the connections between faithful and full translations and other definitions of translations in the literature. Properties of logics, such as soundness, completeness, decidability, compactness, the logical equivalence of formulas, and the elementary equivalence of models, that are characterized by the existence of faithful and full translations between logics are also studied. Based on the definition of faithful and full translations, the concept of synonymous logics is introduced, and a proof that the synonymy relation is an equivalence relation is given.
CHEN Jian , WU Jian-Ping , LI He-Wu
2013, 24(7):1638-1649. DOI: 10.3724/SP.J.1001.2013.04312 CSTR:
Abstract:A spectrum allocation algorithm based on user allocation and load is proposed, which includes two parts: a user allocation sub-algorithm and a spectrum allocation sub-algorithm. Based on the theory of cluster partitioning, a user allocation sub-algorithm is designed that associates users with similar signal-to-noise ratios with the same access point, thereby alleviating the well-known near-far problem of wireless access networks. Then, based on the user allocation results, a spectrum allocation sub-algorithm is designed to optimize spectrum allocation according to the load of each access point and the mean signal-to-noise ratio of its associated users. The proposed algorithm is realizable, has polynomial computational complexity, and provides proportional fairness. Trace-driven simulations show that the proposed algorithm improves system throughput efficiently and markedly decreases the length of the packet buffer.
SONG Tian , LI Dong-Ni , WANG Dong-Sheng , XUE Yi-Bo
2013, 24(7):1650-1665. DOI: 10.3724/SP.J.1001.2013.04314 CSTR:
Abstract:Pattern matching is the main part of content-inspection-based network security systems, and it is widely used in many other applications. In practice, pattern matching methods with stable performance on large-scale pattern sets are in great demand, especially architectures for online real-time processing. This paper presents a memory-efficient pattern matching algorithm and architecture for large-scale pattern sets. The paper first proposes cached deterministic finite automata (CDFA) from the view of basic theory. By classifying the transitions in a pattern DFA, a new algorithm based on CDFA, named ACC, is presented. This algorithm dynamically generates cross transitions and saves most memory resources, so that it can support large-scale pattern sets. Furthermore, an architecture based on this method is proposed. Experiments on real pattern sets show that the number of transition rules can be reduced by 80%~95% compared with the most optimized current algorithms. At the same time, it saves 40.7% of memory space, nearly doubling memory efficiency. The corresponding single-chip architecture can achieve about 11Gbps matching performance.
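The abstract does not detail CDFA or ACC; for orientation, here is a minimal sketch of the classic Aho-Corasick automaton on which DFA-based multi-pattern matchers build. Following failure links at match time, as below, is what avoids materializing the cross transitions that dominate a full DFA's memory; those are the transitions ACC instead generates dynamically into a cache:

```python
from collections import deque

def build_ac(patterns):
    """Build the Aho-Corasick trie, failure links, and output sets."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    # BFS to compute failure links; the root's children keep fail = 0.
    queue = deque(goto[0].values())
    while queue:
        r = queue.popleft()
        for ch, s in goto[r].items():
            queue.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]   # inherit matches ending at the fail state
    return goto, fail, out

def search(text, goto, fail, out):
    """Return (start_position, pattern) for every match in text."""
    s, hits = 0, []
    for i, ch in enumerate(text):
        # Follow failure links instead of storing precomputed cross transitions.
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits
```

A full DFA would precompute `goto[s][ch]` for every state and symbol, trading the inner while-loop for a large transition table; CDFA sits between the two by caching the hot cross transitions.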
HE Liang , FENG Deng-Guo , WANG Rui , SU Pu-Rui , YING Ling-Yun
2013, 24(7):1666-1682. DOI: 10.3724/SP.J.1001.2013.04295 CSTR:
Abstract:This paper provides an approach for simulating the propagation of online social network (OSN) worms based on MapReduce, a key component of cloud computing. To improve simulation accuracy, the approach describes the phases of the worms' propagation with an OSN directed graph, in which each node owns tunable attributes. The phases are then simulated by different map functions and reduce functions, which finally run in the cloud environment. Experimental results on real large network datasets show that the simulation approach is scalable and helpful in the research of online social network worms.
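The paper's actual map and reduce functions are not given in the abstract; the toy single-machine sketch below only illustrates the phase structure such a simulation follows (the infection rule and all names are illustrative assumptions, and the shuffle stage is collapsed into the reduce step):

```python
import random

def map_phase(edges, infected, p_infect, rng):
    """Map: each edge leaving an infected node emits a candidate
    infection keyed by the target node."""
    for src, dst in edges:
        if src in infected and rng.random() < p_infect:
            yield dst, 1

def reduce_phase(pairs, infected):
    """Reduce: any node receiving at least one successful attempt
    becomes infected (shuffle-by-key is implicit here)."""
    new_infected = set(infected)
    for node, _ in pairs:
        new_infected.add(node)
    return new_infected

def simulate(edges, seeds, p_infect, rounds, seed=0):
    """Run the map/reduce propagation loop for a fixed number of rounds."""
    rng = random.Random(seed)
    infected = set(seeds)
    for _ in range(rounds):
        infected = reduce_phase(map_phase(edges, infected, p_infect, rng), infected)
    return infected
```

In the cloud setting, the edge list is partitioned across mappers and the per-node state updates are merged by reducers, which is what makes the simulation scale to large OSN graphs.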
CHEN Cai-Sen , WANG Tao , GUO Shi-Ze , ZHOU Ping
2013, 24(7):1683-1694. DOI: 10.3724/SP.J.1001.2013.04329 CSTR:
Abstract:The I-cache timing attack, which exploits the instruction path of a cipher, is one type of side-channel attack. First, after analyzing the limitations of previous I-cache timing attacks on the RSA algorithm, namely that they are hard to put into practice and that the number of inferred bits is insufficient, this paper builds a new trace-driven I-cache timing attack model that spies on the whole I-cache instead of targeting only the instruction cache to which a special function is mapped. Next, an improved exponent analysis algorithm based on the characteristics of window boundaries in the sliding window exponentiation (SWE) algorithm is proposed. Finally, an I-cache timing attack on the RSA implementation of OpenSSL v0.9.8f is carried out in a practical environment, using a simultaneous multithreading processor to ensure that the spy process and the cipher process run in parallel. Experimental results show that the proposed attack model has strong applicability in real environments; the improved exponent analysis algorithm further reduces the search space of the key bits and improves the effectiveness of the trace-driven I-cache timing attack. For a 512-bit exponent, it recovers about 50 more exponent bits than previous attacks.
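For context on why SWE leaks through the I-cache: the textbook left-to-right sliding-window exponentiation below (a generic sketch, not OpenSSL's code) performs a squaring for every exponent bit but a table multiplication only at window boundaries, so the interleaving of square and multiply instructions, observable as I-cache activity, depends on the secret exponent's bit pattern:

```python
def sliding_window_pow(base, exp, mod, w=4):
    """Left-to-right sliding-window modular exponentiation with window size w."""
    # Precompute the odd powers base^1, base^3, ..., base^(2^w - 1) mod mod.
    table = {1: base % mod}
    base_sq = (base * base) % mod
    for i in range(3, 1 << w, 2):
        table[i] = (table[i - 2] * base_sq) % mod
    bits = bin(exp)[2:]
    result, i = 1, 0
    while i < len(bits):
        if bits[i] == '0':
            result = (result * result) % mod   # square only: reveals a 0 bit
            i += 1
        else:
            # Take the longest window of at most w bits that ends in a 1.
            j = min(i + w, len(bits))
            while bits[j - 1] == '0':
                j -= 1
            for _ in range(j - i):
                result = (result * result) % mod
            # Table lookup: reveals that a nonzero window ends here.
            result = (result * table[int(bits[i:j], 2)]) % mod
            i = j
    return result
```

The window boundaries inferred from the square/multiply sequence are exactly the structural characteristic the paper's improved exponent analysis algorithm exploits to narrow the key search space.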