ZHANG Na, YAO Lan, BAO Xiao-An, DONG Meng, GUI Ning
2015, 26(10):2451-2464. DOI: 10.13328/j.cnki.jos.004745
Abstract: To properly rank test cases by priority from the requirements perspective, this paper introduces three impact factors: concerned-requirements coverage, test case importance, and test case failure rate, together with three weight factors α, β, and γ to balance them. Online estimation methods and algorithms are designed for the concerned-requirements coverage and the test case failure rate. Based on these metrics, a multi-objective-optimization-based online adjustment strategy for test case prioritization is developed. The strategy adjusts the priorities of test cases dynamically using feedback collected during testing, and can thus meet the coverage criteria earlier and run important and failure-prone test cases sooner. It also addresses the multi-objective test case prioritization problem by detecting more severe bugs earlier. Experimental results show that, compared with random testing, traditional single-objective testing, and testing with deterministic test case prioritization, the presented strategy completes testing of equal quality in less time, thus improving testing efficiency.
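As a rough illustration only (not the paper's exact formulation), the weighted combination of the three impact factors and the resulting dynamic re-ranking might look like the following sketch; the factor names, the [0,1] normalization, and the constraint that the weights sum to 1 are assumptions:

```python
# Hypothetical sketch of a weighted multi-factor test case priority score.
# Factor names, [0,1] normalization, and weights summing to 1 are assumptions.

def priority(coverage: float, importance: float, failure_rate: float,
             alpha: float, beta: float, gamma: float) -> float:
    """Combine three normalized impact factors with weights alpha, beta, gamma."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9  # assumed convention
    return alpha * coverage + beta * importance + gamma * failure_rate

# Re-rank dynamically as feedback (e.g., observed failures) updates the factors.
test_cases = [
    {"id": "tc1", "cov": 0.8, "imp": 0.5, "fail": 0.1},
    {"id": "tc2", "cov": 0.4, "imp": 0.9, "fail": 0.6},
]
ranked = sorted(test_cases,
                key=lambda t: priority(t["cov"], t["imp"], t["fail"], 0.5, 0.3, 0.2),
                reverse=True)
print([t["id"] for t in ranked])  # -> ['tc2', 'tc1']
```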
WANG Jin-Yong, WU Zhi-Bo, SHU Yan-Jun, ZHANG Zhan
2015, 26(10):2465-2484. DOI: 10.13328/j.cnki.jos.004746
Abstract: Traditional NHPP (non-homogeneous Poisson process) models have proven successful in practical testing. However, their performance suffers in realistic software testing environments because of the idealized assumptions from which they are derived, such as a constant fault detection rate or smooth, regular changes in it. This paper proposes an NHPP-based software reliability growth model that accounts for irregular fluctuation of the fault detection rate, which is more in line with the actual software testing process. The fitting and predictive power of the proposed model is validated experimentally. The experimental results show that, on real-world fault data, the proposed model fits and predicts better than traditional NHPP-based models. Confidence intervals are also given for the confidence analysis of the proposed model.
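For context, the general NHPP growth model with a time-varying fault detection rate b(t) can be written as below; the paper's specific irregular-fluctuation form of b(t) is not reproduced here, and the constant-rate special case is the classical Goel-Okumoto model:

```latex
% General NHPP SRGM with time-varying fault detection rate b(t);
% a is the expected total number of faults, m(t) the mean value function.
\frac{dm(t)}{dt} = b(t)\bigl(a - m(t)\bigr), \qquad m(0) = 0
\;\Longrightarrow\;
m(t) = a\left(1 - e^{-\int_0^t b(\tau)\,d\tau}\right).
% With constant b(t) = b this reduces to Goel-Okumoto: m(t) = a(1 - e^{-bt}).
```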
ZHOU Xiao-Yu, GU Bin, ZHAO Jian-Hua, YANG Meng-Fei
2015, 26(10):2485-2503. DOI: 10.13328/j.cnki.jos.004790
Abstract: This paper proposes an approach to model and verify a certain class of interrupt-driven aerospace control systems. These systems consist of interrupt handlers and system-scheduled tasks. When an interrupt event occurs, the corresponding interrupt handler executes in response; it may leave some post-processing work to the system tasks by setting control variables to certain values. The operating system schedules a set of tasks periodically to deal with routine work and the post-processing of interrupt events. In this paper, timed automata labeled with interrupts are used to model interrupt events and task scheduling events. The execution of interrupts is modeled by pseudo-code of the interrupt handlers and the interrupt vector. Control variables model the interactions between interrupt processing and system tasks, while the tasks perform post-processing of interrupts according to the values of the control variables set by the interrupt handlers. A bounded model checking algorithm is presented to check these models with respect to some important timing properties. The algorithm explores all feasible paths within K steps using depth-first search. During exploration, the time constraints and the timing requirements in the specification are solved with the SMT solver Z3.
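A minimal sketch of how timing constraints along one explored path might be discharged with Z3's Python API; the variables and bounds below are invented for illustration and are not the paper's encoding:

```python
# Minimal sketch: checking accumulated timing constraints on one K-step path
# with the SMT solver Z3. Variable names and bounds are illustrative only.
from z3 import Real, Solver, sat

d1, d2, d3 = Real('d1'), Real('d2'), Real('d3')  # dwell times of three path steps
s = Solver()
s.add(d1 >= 0, d2 >= 0, d3 >= 0)
s.add(d1 <= 5, d2 <= 3)              # per-step timing constraints from the model
s.add(d1 + d2 + d3 <= 10)            # path-level time constraint
s.add(d1 + d2 + d3 > 8)              # negated timing requirement: look for a violation
if s.check() == sat:
    print("timing requirement can be violated:", s.model())
else:
    print("requirement holds on this path")
```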
ZHANG Gong-Jie, GONG Dun-Wei, YAO Xiang-Juan
2015, 26(10):2504-2520. DOI: 10.13328/j.cnki.jos.004749
Abstract: The high cost caused by a large number of mutants hinders the practical application of mutation testing. To greatly reduce the number of mutants under weak mutation testing, a new reduction approach based on statistical dominance analysis is presented. In the proposed approach, mutant branches are first constructed by combining the statements before and after mutation, and a new program is formed by integrating all mutant branches into the original program. The dominance relations among mutant branches in the new program are then determined from statistical coverage information collected while executing test cases. Finally, the non-dominated mutant branches are obtained, corresponding to the mutants retained after reduction. The proposed approach is applied to test eight programs, and the experimental results demonstrate that it reduces the number of mutants by 90% on average, thereby greatly improving the efficiency of mutation testing.
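A toy sketch of the statistical dominance idea under an assumed data layout (the paper's exact dominance criterion and tie handling are not reproduced): branch i is taken to dominate branch j if every test that covers j also covers i, and only non-dominated branches are kept:

```python
# Sketch of dominance analysis over mutant-branch coverage.
# cov[i][t] is True if test case t covers mutant branch i (assumed layout;
# equal coverage vectors, i.e. ties, are not handled in this toy version).
def dominates(cov_i, cov_j):
    """Branch i dominates branch j if every test covering j also covers i."""
    return all(ci or not cj for ci, cj in zip(cov_i, cov_j))

def non_dominated(cov):
    keep = []
    for j, cj in enumerate(cov):
        if not any(i != j and dominates(ci, cj) for i, ci in enumerate(cov)):
            keep.append(j)
    return keep  # indices of mutant branches (mutants) retained after reduction

cov = [[True, True, False],   # branch 0
       [True, False, False],  # branch 1 (dominated by branch 0)
       [False, True, True]]   # branch 2
print(non_dominated(cov))     # -> [0, 2]
```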
ZHAO Ling-Zhong, ZHAI Zhong-Yi, QIAN Jun-Yan, GUO Yun-Chuan
2015, 26(10):2521-2544. DOI: 10.13328/j.cnki.jos.004738
Abstract: Model checking is a mainstream method for the formal verification of communicating sequential processes (CSP). Existing CSP model checkers are based on operational semantics: they must translate processes into a labelled transition system and extract the semantic model from that system, a complex conversion process. Moreover, in most CSP model checkers the properties to be verified are themselves described in CSP, which is helpful for refinement checking but leads to limited descriptive power and weak generality. To address these issues, a new denotational semantic model of CSP, the critical-trace model, is proposed and proved to be reliable for model checking CSP. Based on this model, a framework for model checking CSP is established that allows the critical-trace model of a process to be constructed inductively from the traces of its components, and properties to be specified in linear temporal logic (LTL), a universal property specification language. In addition, automatic mechanisms for generating the critical-trace model and verifying LTL formulas are implemented with answer set programming (ASP), and the two mechanisms are integrated into a CSP model checker, T_ASP. Compared with similar previously developed systems, experimental results indicate that the proposed system describes CSP processes more expressively and verifies with higher accuracy. Furthermore, T_ASP can check multiple properties in a single execution and, when a property is not satisfied, returns counterexamples for it.
ZHANG Xuan, LI Tong, WANG Xu, YU Qian, YU Yong, ZHU Rui
2015, 26(10):2545-2566. DOI: 10.13328/j.cnki.jos.004813
Abstract: The trustworthiness of software is determined by both its functional and non-functional requirements; in particular, the non-functional requirements determine how trustworthy software achieves the users' desired goals. Considering the importance of and urgent need for trustworthy software, an approach to obtaining process strategies for trustworthy software in the early phase of requirements engineering is proposed. First, trustworthy software requirements are defined as the combination of trustworthiness requirements and quality requirements, where the trustworthiness requirements comprise functional requirements and trustworthiness concerns, and the quality requirements are treated as soft goals. Then, a method for acquiring trustworthiness concerns and soft goals is proposed based on fuzzy set theory and information entropy. On this basis, a framework for obtaining process strategies is proposed. Unlike traditional early-phase requirements engineering, which focuses on technical and design decisions, this study aims to make process decisions that support trustworthy software development. In addition, to handle conflicts among non-functional requirements, a reasoning method is developed for solving the satisfiability problem of non-functional requirements in trustworthy software. Finally, the feasibility of the proposed approach is demonstrated by analyzing the case of a trustworthy third-party certificate authority software system.
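As a generic illustration of the information-entropy ingredient only (not the paper's acquisition method), entropy weights over candidate concerns could be computed as below; the ratings matrix and concern names are invented:

```python
# Generic entropy-weight sketch (not the paper's exact method): lower entropy
# across stakeholders' fuzzy ratings => a concern is more discriminating =>
# higher weight. The ratings are invented for illustration.
import math

def entropy_weights(ratings):
    """ratings[j][i]: normalized rating of concern j by stakeholder i."""
    n = len(ratings[0])
    k = 1.0 / math.log(n)
    entropies = []
    for row in ratings:
        total = sum(row)
        ps = [r / total for r in row if r > 0]
        entropies.append(-k * sum(p * math.log(p) for p in ps))
    d = [1 - e for e in entropies]        # degree of divergence per concern
    return [di / sum(d) for di in d]      # normalized entropy weights

print(entropy_weights([[0.9, 0.8, 0.85],  # e.g., hypothetical "integrity" concern
                       [0.2, 0.9, 0.4]]))  # e.g., hypothetical "availability" concern
```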
ZHAO Qiang-Li, JIANG Yan-Huang, LU Yu-Tong
2015, 26(10):2567-2580. DOI: 10.13328/j.cnki.jos.004747
Abstract: Using an ensemble of classifiers over sequential chunks of training instances is a popular strategy for mining data streams with concept drift. Addressing the limitations of existing approaches, this paper introduces human recall and forgetting mechanisms into a data stream mining system and proposes a memorizing based data stream mining (MDSM) model. The model treats base classifiers as learned knowledge: through the recall-and-forgetting mechanism, the most useful past classifiers are retained in a "memory repository", which improves stability under random concept drifts, and the classifiers best suited to the current data chunk are selected for prediction, which yields high adaptability to different kinds of drift. Based on MDSM, the paper puts forward a new algorithm, MAE (memorizing based adaptive ensemble), which uses the Ebbinghaus forgetting curve as its forgetting mechanism and adopts ensemble pruning to emulate recall. Compared with four traditional data stream mining approaches, the results show that MAE achieves high and stable accuracy with moderate training time. The results also show that MAE adapts well to different kinds of concept drift, especially in applications with recurring or complex drifts.
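A minimal sketch of the forgetting ingredient, assuming the classical Ebbinghaus retention form R = exp(-t/S); the repository policy, the threshold, and the strength update on recall are assumptions, not the paper's parameters:

```python
# Minimal sketch of "forgetting" with the Ebbinghaus retention curve
# R = exp(-t / S): classifiers whose retention drops below a threshold are
# forgotten. The reinforcement-on-recall rule is an assumption.
import math

class RememberedClassifier:
    def __init__(self, model, strength=1.0):
        self.model = model
        self.strength = strength   # S: memory strength
        self.age = 0               # t: chunks since last recall

    def retention(self):
        return math.exp(-self.age / self.strength)

def step(repository, recalled, threshold=0.2):
    """Advance one data chunk: age everything, reinforce recalled classifiers,
    and drop (forget) classifiers whose retention fell below the threshold."""
    for c in repository:
        c.age += 1
    for c in recalled:             # classifiers selected for the current chunk
        c.age = 0
        c.strength += 1.0          # recalling reinforces memory (assumption)
    return [c for c in repository if c.retention() >= threshold]
```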
ZHANG Hu-Yin, ZHOU Jing-Cai, CHEN Yi-Bo, ZHA Wen-Liang
2015, 26(10):2581-2595. DOI: 10.13328/j.cnki.jos.004795
Abstract: Extensive experiments show that, in a virtual desktop infrastructure (VDI) system, two users who share more projects also hold more duplicate data. Based on this finding, this paper proposes a user-aware deduplication algorithm. Instead of relying on the conventional rule of data locality, the algorithm exploits user locality: for each user group, it only needs to load a single user's fingerprint data into memory. It thereby reduces memory requirements by 5x~10x compared with other algorithms, confines each lookup to a bounded search scope, and so avoids a large number of read I/O operations. Meanwhile, the algorithm adjusts the search scope dynamically according to the current workload of the VDI system, always seeking the best deduplication rate without hurting the system's response time. Prototype experiments show that the algorithm improves deduplication performance, especially in massive data storage systems. Compared with OpenDedup, it reduces read I/O operations by more than 200% and responds more than 3x faster when the fingerprint data is larger than the available memory.
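A sketch of the user-locality lookup under assumed data structures (the paper's actual index, grouping, and scope-adjustment logic are not reproduced): only the active user's fingerprint table is memory-resident, so a chunk lookup never touches other users' fingerprints:

```python
# Sketch of user-locality deduplication (data structures are assumptions):
# one user's fingerprint table per user group is memory-resident, so a chunk
# lookup is confined to it. 'store' is a hypothetical block store with put().
import hashlib

class UserDedup:
    def __init__(self):
        self.fingerprints = {}          # fingerprint -> block address

    def write_chunk(self, chunk: bytes, store) -> str:
        fp = hashlib.sha1(chunk).hexdigest()
        addr = self.fingerprints.get(fp)
        if addr is None:                # new data: store it, remember fingerprint
            addr = store.put(chunk)
            self.fingerprints[fp] = addr
        return addr                     # duplicate chunks map to the same address
```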
DU Xiao-Kun, LI Guo-Hui, WANG Jiang-Qing, TIE Jun, LI Yan-Hong
2015, 26(10):2596-2613. DOI: 10.13328/j.cnki.jos.004798
Abstract: Structure information is one of the most important types of auxiliary information in schema matching. When a schema has multiple elements with the same semantics, structure information is the most effective means of matching those elements correctly, which is especially important for large-scale schemas. Existing schema matching methods suffer from weaknesses in their use of structure information, such as inaccurate structure information, the lack of an effective description form, and high time complexity, which greatly hinder its exploitation. To make full use of structure information, this paper proposes a new schema matching method based on information units (IU_Based). In IU_Based, elements are first grouped into information units according to the entities they describe. The similarity between information units is then calculated, and the matching relation between information units is derived from this similarity. Finally, element-level matches are selected from within the matched information units. Extensive simulation experiments show that the IU_Based method overcomes the problems in the use of structure information and takes full advantage of it to improve the accuracy of the matching results.
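A toy sketch of the two-level matching order (the grouping rule and the similarity measure below are assumptions, not the paper's): information units are matched first, and element matches are then sought only inside matched unit pairs:

```python
# Toy sketch of information-unit (IU) matching: IU similarity here is the
# average best pairwise name similarity (an assumption); the real method's
# grouping and similarity computation are not reproduced.
from difflib import SequenceMatcher

def name_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def iu_similarity(iu_a, iu_b):
    return sum(max(name_sim(e, f) for f in iu_b) for e in iu_a) / len(iu_a)

iu1 = ["order_id", "order_date"]          # IU for entity "order" in schema A
iu2 = ["orderId", "orderDate", "total"]   # IU for entity "order" in schema B
if iu_similarity(iu1, iu2) > 0.6:         # match IUs first, then their elements
    pairs = [(e, max(iu2, key=lambda f: name_sim(e, f))) for e in iu1]
    print(pairs)                          # element-level matches within the IU pair
```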
NI Zhi-Wei, WANG Chao, HU Tang-Lei, NI Li-Ping
2015, 26(10):2614-2630. DOI: 10.13328/j.cnki.jos.004806
Abstract: In the era of big data, the data stream is a common data model, characterized as ordered, massive, and time-varying. Fractality is an important feature of many complex systems and is mainly characterized by the fractal dimension. A data stream can be viewed as a dynamic, complex system, so its fractal dimension should likewise be dynamic, time-varying, and multi-granular. This paper presents a method, based on the discrete wavelet transform, for measuring the multi-granularity, time-varying fractal dimension of a data stream. The method measures the time-varying fractal dimension at multiple granularities simultaneously, using summary information from the wavelet transform of the stream that is kept in an in-memory multi-granularity wavelet transform tree. The method has low computational complexity and effectively reveals the evolution of a data stream. Experimental results show that it can effectively monitor the time-varying behavior of a data stream's fractal dimension at different granularities.
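A numpy-only sketch of one standard wavelet route to a fractal dimension (whether the paper uses exactly this estimator is not stated): the variance of Haar detail coefficients scales with the decomposition level, the log-log slope gives the Hurst exponent H, and D = 2 - H for a 1-D series:

```python
# Sketch of a wavelet-based fractal dimension estimate (one standard route,
# not necessarily the paper's): per-level Haar detail variances follow
# Var ~ 2^{j(2H+1)}, so the log2(variance)-vs-level slope yields H and D = 2 - H.
import numpy as np

def haar_detail_variances(x, levels):
    vs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]               # truncate to even length
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation for the next level
        vs.append(np.var(d))
    return vs

def fractal_dimension(x, levels=6):
    vs = haar_detail_variances(x, levels)
    j = np.arange(1, levels + 1)
    slope = np.polyfit(j, np.log2(vs), 1)[0]
    hurst = (slope - 1) / 2
    return 2 - hurst

rng = np.random.default_rng(0)
print(fractal_dimension(np.cumsum(rng.standard_normal(4096))))  # ~1.5 for Brownian motion
```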
XUE Zhong-Bin, ZHOU Xuan, WANG Shan
2015, 26(10):2631-2643. DOI: 10.13328/j.cnki.jos.004809
Abstract: With the development of location-aware mobile devices, communication technologies, and GPS systems, location-based queries have become an important research issue in the database area. This paper studies snapshot-based spatial range queries, which search for the moving objects within a given query range during a given time interval. Range queries are the building blocks of other spatial queries, such as k nearest neighbor queries and reverse k nearest neighbor queries. A series of algorithms has been proposed for processing range queries over moving objects, but they are designed either for fast response time or for high update performance; none is purposely designed for the big data setting, where throughput matters most because both queries and updates arrive at very high rates. For the query stream and the object update stream, a high-throughput main-memory algorithm, dual stream join (DSJ), is proposed for moving object range queries. DSJ uses a snapshot approach: in each snapshot it builds a new index structure from the moving object updates, which avoids maintaining sophisticated structures and lets the hardware run at full speed. DSJ executes queries in batches in each run, which increases data locality and improves efficiency, and it employs SIMD technology to accelerate query processing and sustain high system throughput. A comprehensive performance evaluation of the proposed techniques is conducted on data generated from the German road network. The results show that DSJ is highly efficient.
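An illustrative vectorized batch range check, with numpy standing in for the SIMD processing; the actual DSJ snapshot index and stream join loop are not reproduced here:

```python
# Illustrative vectorized batch range check (numpy vectorization stands in
# for SIMD; the actual DSJ index and join are not reproduced).
import numpy as np

def batch_range_query(xs, ys, queries):
    """xs, ys: coordinates of all moving objects in the current snapshot.
    queries: rows of rectangles [x_lo, y_lo, x_hi, y_hi], processed as a batch."""
    results = []
    for x_lo, y_lo, x_hi, y_hi in queries:
        mask = (xs >= x_lo) & (xs <= x_hi) & (ys >= y_lo) & (ys <= y_hi)
        results.append(np.flatnonzero(mask))   # object ids inside the rectangle
    return results

xs = np.array([1.0, 5.0, 9.0]); ys = np.array([1.0, 5.0, 9.0])
print(batch_range_query(xs, ys, np.array([[0, 0, 6, 6]])))  # -> [array([0, 1])]
```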
LIU Xiao-Feng, ZHAO You-Jian, CHEN Guo
2015, 26(10):2644-2655. DOI: 10.13328/j.cnki.jos.004739
Abstract: A dispatching algorithm is an indispensable part of a switching system. For future-generation high-speed switching and routing systems, this paper proposes a concurrent round-robin dispatching algorithm with active grants, called CRRD-AG. As a multistage switching architecture, the Clos network has been receiving increasing attention for its scalability and modularity, yet few dispatching algorithms are applicable to multistage switching fabrics. So far, well-known dispatching algorithms such as concurrent dispatching (CD) and concurrent round-robin dispatching (CRRD) either have low throughput or cannot handle diverse traffic. Building on CRRD, CRRD-AG replaces the typical request-grant-accept (R-G-A) matching model used by CRRD with an active grant-accept (AG-A) matching model. Consequently, not only are the iterative messages of the request phase reduced significantly, but the bandwidth of the central stage is also utilized fully, decreasing average delay and increasing throughput. Simulations show that CRRD-AG achieves 100% throughput under both uniform and bursty traffic. More importantly, the average delay of the whole switching system is improved significantly without reducing its throughput.
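For context, the round-robin pointer mechanics shared by CRRD-style dispatchers can be sketched as below; the AG-A message flow itself is not reproduced, and the function is illustrative:

```python
# Sketch of the round-robin pointer mechanics underlying CRRD-style
# dispatching: an arbiter grants the first requester at or after its pointer,
# then advances the pointer past the granted port (desynchronization).
def round_robin_grant(requests, pointer):
    """requests: booleans indexed by port; returns (granted_port, new_pointer)."""
    n = len(requests)
    for offset in range(n):
        port = (pointer + offset) % n
        if requests[port]:
            return port, (port + 1) % n   # pointer update after a grant
    return None, pointer                  # no request to grant this round

print(round_robin_grant([False, True, True, False], pointer=2))  # -> (2, 3)
```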
2015, 26(10):2656-2666. DOI: 10.13328/j.cnki.jos.004743
Abstract: This paper investigates the upper bound on the maximum differential probability of the SPS structure whose P-box is an n-MDS (maximum distance separable) matrix over the finite field GF(2). First, an estimation formula for the differential probability of every fixed differential pair is presented. Then, a new upper bound on the maximum differential probability of the SPS structure is derived. Experimental analysis shows that the resulting upper bound is tighter than the best previously known upper bound.
JIN Meng, CHEN Xiao-Jiang, FANG Ding-Yi, TANG Zhan-Yong, LIU Chen, XU Dan, WANG Wei
2015, 26(10):2667-2683. DOI: 10.13328/j.cnki.jos.004792
Abstract: The low-cost crystal oscillators in wireless sensor networks (WSNs) are easily affected by working conditions such as temperature, voltage, and humidity. This brings two key challenges for time synchronization in WSNs: excessive communication overhead and the trade-off between accuracy and cost. This paper introduces a novel environment-adaptive time synchronization approach that enables nodes to estimate their clock skew from temperature information. The approach substantially reduces communication overhead, since clock skew estimation relies mostly on local information. In addition, this work proposes an environment-adaptive interval adjustment scheme for duty-cycled clock calibration, which provides a convenient trade-off between timing accuracy and energy efficiency.
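A sketch of the temperature-to-skew idea under the common quadratic crystal frequency model (an assumption; the paper's estimator may differ): fit skew as a function of temperature from a few reference synchronizations, then predict skew locally between them:

```python
# Sketch of temperature-compensated clock skew estimation. The quadratic
# temperature model is a common crystal-oscillator assumption, and all
# calibration values below are invented for illustration.
import numpy as np

# Calibration samples: (temperature in Celsius, measured skew in ppm),
# collected during occasional reference synchronizations.
temps = np.array([10.0, 20.0, 25.0, 30.0, 40.0])
skews = np.array([-4.1, -1.2, -0.1, -1.0, -3.9])

coeffs = np.polyfit(temps, skews, 2)   # fit skew(T) as a quadratic in T

def predicted_skew_ppm(temperature):
    """Estimate clock skew from the local temperature sensor alone."""
    return np.polyval(coeffs, temperature)

print(predicted_skew_ppm(35.0))        # correct timestamps between resyncs
```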
2015, 26(10):2684-2695. DOI: 10.13328/j.cnki.jos.004805
Abstract: This paper presents a general structure for the meet-in-the-middle attack by combining the advantages of the Biclique and three-subset meet-in-the-middle techniques. Compared with the Biclique cryptanalysis presented at Asiacrypt 2011, this attack model is a more reasonable measure of a block cipher's security against meet-in-the-middle attacks. Moreover, the study evaluates the security of TWINE against the meet-in-the-middle attack and gives attacks on 18-round TWINE-80 and 22-round TWINE-128. The data complexities of these attacks are the lowest among previous attacks on TWINE.
2015, 26(10):2696-2719. DOI: 10.13328/j.cnki.jos.004808
Abstract: Constructing efficient and secure fully homomorphic encryption is still an open problem. By generalizing the approximate GCD problem to an approximate ideal lattice problem, a somewhat homomorphic encryption scheme is first presented, based on the partial approximate ideal lattice problem (PAILP) over the integers. The scheme is then converted into a fully homomorphic encryption (FHE) scheme by applying Gentry's bootstrapping techniques, and the security of the somewhat homomorphic scheme is reduced to solving the PAILP. Furthermore, a PAILP-based batch FHE scheme and an AILP-based FHE scheme are constructed. Finally, the PAILP/AILP-based FHE schemes are implemented, and computational experiments demonstrate that their performance is better than that of previous schemes.
HAN Hong-Lei, WANG Wen-Cheng, HUA Miao
2015, 26(10):2720-2732. DOI: 10.13328/j.cnki.jos.004742
Abstract: The upright orientation of a 3D object is conducive to applications such as model alignment and function recovery. However, state-of-the-art methods mainly focus on man-made models, and some require considerable user interaction, which lowers processing efficiency. This paper proposes a new method to effectively obtain the upright orientation of an object, whether man-made or natural. It is based on the observation that the base of an object carries very few surface features, so views looking at the base receive very low scores in view evaluation. Using these view scores, the base of the model can be found effectively, yielding the model's upright orientation. To improve reliability, two further metrics are integrated into the method: the stability of the model's placement and the habitual viewing direction from which humans effectively observe a model. Experimental results show that the presented method handles a variety of models effectively, including some man-made models that existing methods cannot handle well, and runs much faster than existing methods.
WANG Mei-Hua, LIANG Yun, LIU Fu-Ming, LUO Xiao-Nan
2015, 26(10):2733-2747. DOI: 10.13328/j.cnki.jos.004737
Abstract: In the presence of factors such as occlusion, motion blur, and severe deformation, accurate and stable object tracking remains a critical challenge in the computer vision field. First, in this paper, superpixels are used as mid-level visual cues to describe the components of the object and background, with the color histograms of the components as their features. The initial appearance model is built by clustering the features of a component library; the locality and flexibility of the component representation allow the appearance model to describe the object and background much more accurately. Then, a Bayesian filter is used to compute the initial state of the target region, and an algorithm is proposed to detect and handle the disturbance introduced by similar objects, avoiding drift and producing more robust tracking results. Finally, to reduce the influence of deformation, occlusion, and blur and to better preserve object features, an online appearance model update algorithm is developed based on the complementary set of the component library's features, enabling the appearance model to reflect the real-time variation of the object and background through changes in the components. Extensive experiments on video sequences with different tracking challenges (about 12 sequences in total) show that, compared with existing object tracking methods, the proposed algorithm yields lower center position error and more successfully tracked frames, and thus tracks an object more accurately, stably, and effectively.