2014, 25(11):2433-2451. DOI: 10.13328/j.cnki.jos.004523
Abstract: This paper studies the axiomatization of higher-order process calculi with the mismatch operator. Firstly, it formulates the theory of open weak higher-order bisimulation and establishes important properties such as equivalence and congruence. Secondly, following the linearity-based method, it builds an axiom system for finite processes. Finally, based on the characterization of open weak higher-order bisimulation, it proves the completeness of the axiom system. This work provides the basis for designing and implementing an effective algorithm for checking bisimulation equivalence of higher-order processes with the mismatch operator, and a theoretical reference for applications that model with higher-order processes.
KAN Shuang-Long, HUANG Zhi-Qiu, CHEN Zhe, XU Bing-Feng
2014, 25(11):2452-2472. DOI: 10.13328/j.cnki.jos.004609
Abstract: This paper presents a technique that uses event automata to specify the safety properties of C programs and applies bounded model checking to verify whether a C program satisfies an event automaton property. An event automaton can specify a safety property defined over the events generated by a program, including properties with infinite states. Since an event automaton is separate from the C program, it does not change the program's structure. The paper introduces the definition of an automaton reachability tree based on an event automaton, then uses automaton reachability trees and bounded model checking to build the SMT (satisfiability modulo theories) models of event automata and C programs. Finally, it feeds the SMT models to an SMT solver, and an algorithm for generating counterexamples is derived from the solver's results. Case studies and experimental results demonstrate that the presented approach can verify the event properties of software systems.
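As a concrete illustration of the encoding step, the following sketch unrolls a toy program for K steps and encodes a small file open/close event automaton alongside it as one SMT query, using the Z3 Python API. The automaton, the nondeterministic program, and the bound K are assumptions for illustration, not the paper's models.

```python
# A minimal BMC sketch (illustrative, not the paper's tool): encode a toy
# event automaton and a K-step program unrolling as one SMT query with Z3.
from z3 import Bool, If, Int, IntVal, Solver, sat

K = 4  # unrolling bound
s = Solver()
# q[i]: automaton state before step i (0 = closed, 1 = open, 2 = error)
q = [Int(f"q_{i}") for i in range(K + 1)]
# b[i]: nondeterministic program branch at step i;
# True emits event "open", False emits event "close"
b = [Bool(f"b_{i}") for i in range(K)]

s.add(q[0] == 0)  # start in the "closed" state
for i in range(K):
    s.add(q[i + 1] ==
          If(q[i] == 2, IntVal(2),                      # error is absorbing
             If(b[i],
                If(q[i] == 0, IntVal(1), IntVal(2)),    # double open -> error
                If(q[i] == 1, IntVal(0), IntVal(2)))))  # close w/o open -> error

s.add(q[K] == 2)  # is the error state reachable within K steps?
if s.check() == sat:
    m = s.model()
    print("counterexample trace:", [m.evaluate(bi) for bi in b])
else:
    print("property holds up to bound", K)
```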
MA You, WANG Shang-Guang, SUN Qi-Bo, YANG Fang-Chun
2014, 25(11):2473-2485. DOI: 10.13328/j.cnki.jos.004508
Abstract: Existing methods for measuring Web service QoS (quality of service) are inaccurate because they cannot precisely quantify the ambiguity of user preference and they neglect the characteristics of the QoS data set. This paper presents a QoS measurement algorithm that employs both subjective and objective weights. The approach automatically adapts to user preference through a subjective weight calculation method, and accurately evaluates service performance through an objective weight calculation method. Because the algorithm takes both subjective and objective aspects into account when measuring Web service QoS, its results conform to user preference while accurately reflecting overall service performance. Theoretical analysis and experimental results on the QWS real data set show that the proposed algorithm measures Web service QoS accurately.
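The abstract does not spell out the two weight calculations, so the sketch below uses one common instantiation for the objective side (entropy weights over the QoS matrix) combined with user-supplied subjective weights; the combination rule, attribute normalization, and all parameters are illustrative assumptions, not the paper's method.

```python
# A sketch (not the paper's method): entropy weights as the objective part,
# combined with user-preference weights, to score Web services by QoS.
import numpy as np

def entropy_objective_weights(qos):
    # qos: services x attributes matrix, benefit-type attributes assumed
    p = qos / qos.sum(axis=0, keepdims=True)   # per-attribute distribution
    p = np.clip(p, 1e-12, 1.0)                 # avoid log(0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(qos.shape[0])
    d = 1.0 - e                  # low entropy -> discriminative -> high weight
    return d / d.sum()

def combined_scores(qos, subjective, alpha=0.5):
    w = alpha * subjective + (1 - alpha) * entropy_objective_weights(qos)
    norm = (qos - qos.min(axis=0)) / (np.ptp(qos, axis=0) + 1e-12)
    return norm @ w              # one aggregate score per service

# toy data: 4 services x 3 attributes (availability, throughput, reliability)
qos = np.array([[0.90, 120, 0.95], [0.80, 200, 0.90],
                [0.99,  80, 0.99], [0.70, 150, 0.85]])
user_pref = np.array([0.5, 0.2, 0.3])   # subjective weights (sum to 1)
print(combined_scores(qos, user_pref))
```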
LIU Peng, ZHAO Rong-Cai, PANG Jian-Min, YAO Yuan
2014, 25(11):2486-2498. DOI: 10.13328/j.cnki.jos.004596
Abstract: Pointer analysis is a key technology in data flow analysis, and its results form the basis of compiler optimization and program transformation. Building on research into inclusion-based pointer analysis, this paper analyzes two problems in Nasre's priority-driven constraint evaluation algorithm: redundant constraint evaluation and the computational overhead of its priority evaluation model. The candidate set for constraint evaluation is determined by the points-to set update information of pointers, and a prioritized pointer analysis algorithm based on points-to updates is presented. A constraint dependency graph is built from the pointer dereference dependences and pointer scalar dependences among constraint statements, and the priority of constraint evaluation is determined by these dependencies. A prioritization algorithm based on the constraint dependency graph is presented to simplify the complex priority evaluation model of the Nasre algorithm, and the overall framework of the optimized algorithm is provided. Experimental results on the SPEC 2000/SPEC 2006 benchmarks show that the algorithm significantly reduces both time and storage overhead compared with the Nasre priority algorithm.
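A minimal sketch of the underlying idea, assuming a simplified constraint language (only address-of and copy constraints): an Andersen-style worklist where a pointer's evaluation priority is driven by how much its points-to set just changed. The priority function is an assumption for illustration; the paper's dependency-graph-based prioritization is not reproduced here.

```python
# A sketch of Andersen-style pointer analysis with a points-to-update-driven
# priority queue (address-of and copy constraints only; load/store omitted).
import heapq
from collections import defaultdict

def analyze(addr_of, copies):
    # addr_of: (p, o) pairs for "p = &o"; copies: (p, q) pairs for "p = q"
    pts = defaultdict(set)
    succ = defaultdict(set)          # edge q -> p: pts(q) must flow into pts(p)
    for p, q in copies:
        succ[q].add(p)
    heap, tick = [], 0
    for p, o in addr_of:
        pts[p].add(o)
        heapq.heappush(heap, (-1, tick, p)); tick += 1
    while heap:
        _, _, q = heapq.heappop(heap)
        for p in succ[q]:
            delta = pts[q] - pts[p]  # propagate only newly added objects
            if delta:
                pts[p] |= delta
                # bigger recent update -> process sooner (smaller heap key)
                heapq.heappush(heap, (-len(delta), tick, p)); tick += 1
    return pts

# toy constraints: a = &x; b = a; c = b
print(dict(analyze([("a", "x")], [("b", "a"), ("c", "b")])))
```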
2014, 25(11):2499-2517. DOI: 10.13328/j.cnki.jos.004559
Abstract: Software changes constantly over its lifecycle to adapt to changing requirements and environments. To predict in a timely manner whether each change will introduce bugs, researchers have proposed various bug prediction methods for software source code changes. However, existing methods have three deficiencies: 1) prediction granularity is limited to coarse-grained levels (i.e. transaction or file level); 2) because the vector space model is used to represent software changes, the abundant information in software repositories, such as program structure, natural language semantics and history information, cannot be mined sufficiently; 3) only short-term prediction is explored, without considering the concept drift caused by new requirements, team restructuring or other external factors during long-term software evolution. To overcome these shortcomings, a bug prediction method for source code changes is proposed. It makes predictions for fine-grained (i.e. statement level) changes, which effectively reduces quality assurance cost. By deeply mining software repositories with static program analysis and natural language semantic inference technologies, feature sets of changes are constructed along four aspects (context, content, time, and developer), and key factors that lead to bug injection are revealed. Characteristics of concept drift during software evolution are analyzed using a matrix of feature entropy differences, and an adaptive window algorithm with concept reviewing is proposed to keep long-term prediction stable. Experiments on six well-known open source projects demonstrate the effectiveness of the proposed method.
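To make the drift-detection ingredient concrete, here is a small sketch, under the assumption that changes are described by categorical features: it compares per-feature entropy between a reference window and the current window of change records and flags drift when the difference exceeds a threshold. The threshold and windowing policy are illustrative, not the paper's adaptive-window algorithm.

```python
# A sketch of entropy-difference drift detection between two windows of
# change records with categorical features (threshold is an assumption).
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return sum(c / n * math.log2(n / c) for c in Counter(values).values())

def drift_detected(reference, current, threshold=0.5):
    diffs = [abs(entropy([r[j] for r in reference]) -
                 entropy([r[j] for r in current]))
             for j in range(len(reference[0]))]
    return max(diffs) > threshold     # any feature's entropy shifted sharply

# toy change records with (developer, file-type) features
ref = [("alice", "c"), ("bob", "c"), ("alice", "h")] * 5
cur = [("carol", "py"), ("carol", "py"), ("dave", "py")] * 5
print(drift_detected(ref, cur))       # True: file-type entropy collapsed
```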
2014, 25(11):2518-2527. DOI: 10.13328/j.cnki.jos.004525
Abstract: Information aggregation is one of the basic means of information processing. This paper discusses the behavior of several aggregation operators. Firstly, from the perspective of generalized adjunction, the properties of generalized aggregation operators are discussed. Given an aggregation operator A, two different methods are adopted to construct new aggregation operators greater (less) than or equal to A. Furthermore, several aggregation operators, such as bipolar t-norms and bipolar implications, are explored, and a condition is provided under which bipolar aggregation operators can be decomposed into two unipolar aggregation operators. Under this condition, the aggregation of positive as well as negative information is also presented.
HONG Yu, YAN Wei-Rong, CHE Ting-Ting, LIANG Ying-Hong, YAO Jian-Min, ZHU Qiao-Ming, ZHOU Guo-Dong
2014, 25(11):2528-2555. DOI: 10.13328/j.cnki.jos.004546
Abstract: Discourse is a literary form that consists of semantically related and well-structured arguments. One of the key tasks of discourse analysis is to resolve the semantic relationship between arguments. Explicit relations are easy to detect, with an accuracy of nearly 90%, because of their direct cues. In contrast, implicit relations are difficult to detect, with an accuracy of only about 40%, since they have no direct cues. To address this problem, the paper proposes the hypothesis that parallel arguments normally share consistent semantic relations. Based on this hypothesis, and exploiting the fact that explicit relations are easy to detect, the paper implements an implicit relation detection method that uses explicit relations to infer implicit relations among parallel arguments. The method is evaluated on the standard Penn Discourse TreeBank (PDTB), and the experimental results show an improvement of 17.26%.
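The inference rule itself is simple and can be sketched as follows; the parallelism test used here (word overlap) is a crude stand-in for the paper's actual detection of parallel arguments, and the example relation is invented.

```python
# A sketch of the rule "parallel arguments share the explicit relation";
# the word-overlap parallelism test is a toy stand-in.
def parallel(x, y):
    wx, wy = set(x.split()), set(y.split())
    return len(wx & wy) / max(len(wx | wy), 1) >= 0.5

def infer_implicit(implicit_pair, explicit_pairs):
    # explicit_pairs: list of ((arg1, arg2), relation) with detected relations
    for (a1, a2), relation in explicit_pairs:
        if parallel(implicit_pair[0], a1) and parallel(implicit_pair[1], a2):
            return relation      # propagate along the parallel structure
    return None                  # fall back to a conventional classifier

explicit = [(("he opened the door quickly", "he left the room"), "Temporal")]
pair = ("she opened the door quickly", "she left the room")
print(infer_implicit(pair, explicit))   # Temporal
```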
2014, 25(11):2556-2574. DOI: 10.13328/j.cnki.jos.004561
Abstract: With the development of wireless communication technologies and smart mobile devices, location-based services (LBS), characterized by mobility, practicality, immediacy and personalization, have been widely applied in military, transportation, and logistics domains, among others, and have become one of the most promising mobile value-added services. Based on a proposed framework for location-based network service recommendation, this paper first provides an approach to computing mobile users' preference similarity from their geographic locations, and proves that it satisfies the general properties of a neighbor similarity measure. Then, drawing on the sociological concept of trust, a new method is presented for calculating trust values. By incorporating both into the network service recommendation process, a location-based network service recommendation approach is proposed that effectively improves accuracy and reliability, and mitigates the data sparsity of the user similarity matrix and the cold-start user problem. Finally, experiments on the public MIT dataset show that the proposed algorithm is more accurate and feasible.
WANG Jiang-Tao, LAI Wen-Yu, MENG Xiao-Feng
2014, 25(11):2575-2586. DOI: 10.13328/j.cnki.jos.004573
Abstract: Solid state drives (SSD) based on flash memory have been widely used in various types of applications such as mobile devices, PCs, and servers. Compared with conventional disks, SSDs enjoy faster access speed, better shock resistance, and lower power consumption. However, they will not completely replace disks as secondary storage in the short run due to inherent properties such as asymmetric read/write performance and a high price per gigabyte. Integrating SSDs and magnetic disks can combine their different performance advantages to obtain both high performance and low cost. This paper proposes a flash-aware multi-level cache scheme (FAMC) that considers the significant discrepancy between MLC and SLC type SSDs. FAMC uses the two types of SSD as a cache layer between main memory and magnetic disk. Depending on the characteristics of data access in database applications and file management, FAMC conditionally caches pages evicted by the buffer manager to different types of SSD. FAMC considers the impact of write patterns and workload types on system performance, and adopts flash-aware algorithms and data structures to manage the data stored on SSD. Furthermore, in view of the asymmetry of replacement costs, the study proposes a flash-friendly buffer replacement policy. The strategy is implemented and evaluated on a simulated storage system based on SLC and MLC type SSDs. The experimental results show that FAMC can significantly reduce system response time and disk I/O cost.
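A minimal sketch of the admission decision such a scheme has to make, with invented counters and thresholds: steer write-intensive evicted pages to SLC (which absorbs writes better) and read-mostly pages to MLC (cheaper capacity). FAMC's actual algorithms and data structures are not reproduced here.

```python
# A sketch of SSD-type-aware cache admission for pages evicted from RAM;
# the access counters and rules are invented for illustration.
def place_evicted_page(page_id, reads, writes, slc, mlc):
    if writes > reads:
        slc[page_id] = True    # write-intensive: SLC absorbs writes better
    elif reads > 0:
        mlc[page_id] = True    # read-mostly: MLC serves reads at low cost
    # cold pages (no recent accesses) bypass the SSD tier entirely

slc_cache, mlc_cache = {}, {}
place_evicted_page("p1", reads=2, writes=9, slc=slc_cache, mlc=mlc_cache)
place_evicted_page("p2", reads=7, writes=1, slc=slc_cache, mlc=mlc_cache)
print(sorted(slc_cache), sorted(mlc_cache))   # ['p1'] ['p2']
```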
YE Xiao-Ping, TANG Yong, LIN Yan-Chong, CHEN Zhao-Ying, ZHANG Zhi-Bo
2014, 25(11):2587-2601. DOI: 10.13328/j.cnki.jos.004574
Abstract: Temporal indexing is one of the key technologies for temporal data management. This paper discusses a temporal data structure and its application to temporal data indexing. Most existing temporal data methods are based on an algebraic framework. This paper presents a temporal data structure based on a quasi-order relation, which supports multithreaded data operations, and puts forward a temporal index, TQOindex, supported by this structure. Firstly, it proposes the concept of a quasi-order relation on temporal data and studies the construction algorithm for linear order partitions and its optimality property. Secondly, it applies the temporal data structure by building TQOindex on an expanded quasi-order set, which manages data on disk and can be used on common database platforms. TQOindex can dynamically index "big data" through its "one time, one set" query schema and an incremental update mechanism. Finally, simulations show the feasibility and validity of TQOindex. In conclusion, the temporal quasi-order data structure addresses the processing and integration of time and application scene information for new kinds of data such as semantic data, XML data and moving object data, and the work in this paper offers notable extensibility.
ZHANG Ying-Long, LI Cui-Ping, CHEN Hong
2014, 25(11):2602-2615. DOI: 10.13328/j.cnki.jos.004578
Abstract: Information networks are ubiquitous. These networks can be modeled by graphs where nodes represent objects and edges represent relationships between objects. Measuring node similarity in a graph is a fundamental problem of graph data management, and many real-world applications are based on similarity between objects, such as network analysis, information retrieval and recommendation systems. A number of link-based similarity measures have been proposed, among which SimRank and personalized PageRank are the most popular and influential. The two measures differ from each other because they depend on different paths. This paper proposes a similarity measure that takes advantage of both SimRank and personalized PageRank. The new measure strengthens the principle of SimRank: "two objects are similar if they are referenced by similar objects." The paper also analyzes the new similarity measure in theory and proposes an optimization algorithm that reduces the time cost from O(kn^4) to O(knl), where k is the number of iterations, n is the number of nodes and l is the number of edges. Experimental results demonstrate the effectiveness of the new similarity measure and the efficiency of the optimization algorithm.
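The complexity claim can be made intuitive with a sketch: one personalized-PageRank-style power iteration touches every edge once per round, i.e. O(l) per iteration and O(kl) for k iterations from one source, hence O(knl) when run from all n sources. The code below illustrates only this propagation pattern, not the paper's combined measure; the restart probability is an assumption.

```python
# A sketch of edge-wise propagation: each round costs O(l), so k rounds from
# each of n sources cost O(k*n*l). (Restart probability c is an assumption.)
from collections import defaultdict

def ppr(adj, source, c=0.15, k=20):
    score = {source: 1.0}
    for _ in range(k):
        nxt = defaultdict(float)
        nxt[source] += c                 # restart mass returns to the source
        for u, val in score.items():     # one pass over O(l) edges per round
            for v in adj[u]:
                nxt[v] += (1 - c) * val / len(adj[u])
        score = nxt
    return score                         # node scores relative to `source`

adj = {0: [1, 2], 1: [0, 2], 2: [0], 3: [2]}
print(ppr(adj, source=0))
```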
HAN Wei-Tao, YI Peng, ZHANG Xia
2014, 25(11):2616-2626. DOI: 10.13328/j.cnki.jos.004512
Abstract: Traditional space-decomposition-based packet classification algorithms usually use only a single heuristic to split the rule space, without adopting different heuristics according to the characteristics of each dimension. This paper proposes a hybrid intelligent cutting (HIC) scheme for packet classification. HIC first partitions the rule set according to IP prefix length. Then, taking into account the characteristics of the current cutting dimension in each sub-rule set, HIC applies bit cutting to the IP dimensions and precise projection-point cutting to the port dimensions. Finally, HIC builds a decision tree with this hybrid cutting structure. Simulation results show that HIC scales well across different rule sets; compared with EffiCuts, its time and space performance improve by 46% and 74%, respectively.
JI Jing, LIU Gui-Xiong, YU Wen-Sheng
2014, 25(11):2627-2635. DOI: 10.13328/j.cnki.jos.004544
Abstract: This paper studies the perception layer scheduling problem for the Internet of Things. In particular, it analyzes the real-solution classification of the three-dimensional four-anchor node localization problem. By employing inequality-proving theory and the corresponding inequality-proving software DISCOVERER, the classification result for a specific four-anchor localization is derived. The mathematical description of the localization problem is given first, and the nonlinear equations arising from the traditional method are transformed into polynomial equations with inequality constraints. Inequality-proving theory and tools are then used to explore the solution classification with some parameters fixed, giving the complete distribution of solutions in this case. The solution classification criterion plays an important guiding role in practical applications and can also improve the performance of node layout and precise information perception.
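For reference, the range equations behind such a localization problem take the following standard polynomial form (a sketch of the kind of system the abstract refers to; the paper's specific inequality constraints on the parameters are not reproduced):

```latex
\[
  (x - a_i)^2 + (y - b_i)^2 + (z - c_i)^2 = d_i^2, \qquad i = 1, 2, 3, 4,
\]
```

where (a_i, b_i, c_i) is the position of anchor i, d_i the measured distance, and (x, y, z) the unknown node position. Subtracting pairs of equations linearizes part of the system, and the real-solution classification concerns the remaining quadratic constraint under inequality conditions on the parameters.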
SHI Ke, CHEN Hong-Sheng, ZHANG Ren-Tong
2014, 25(11):2636-2651. DOI: 10.13328/j.cnki.jos.004545
Abstract: The widespread adoption of 802.11-based wireless LAN technology brings a good opportunity for the development of 802.11-based indoor positioning systems. This paper presents an 802.11-based indoor positioning method using support vector regression (SVR). The method consists of two phases: an offline training phase and an online location phase. An accurate position prediction model is learned in the offline training phase by SVR, and the position is determined in the online location phase from the received signal strength (RSS) of the mobile device. Due to the complex indoor environment, wireless channel congestion, obstructions and the limited communication range of nodes, the RSS is volatile. To address these issues, data filtering rules obtained through statistical analysis are applied in the offline training phase to improve the quality of the training samples, and thus the quality of the prediction model. In the online location phase, k repeated measurements are used to obtain high-quality received signal strength input, which guarantees consistency with the training samples and improves the positioning accuracy for mobile devices. Performance evaluation and comprehensive analysis through intensive experiments show that the presented method achieves higher positioning accuracy than the probabilistic positioning method and the neural network positioning method, while its demands on the storage capacity and computing power of mobile devices remain low.
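A minimal scikit-learn sketch of the two-phase scheme, with invented fingerprints and parameters: offline, fit one SVR per coordinate mapping RSS vectors to positions; online, average k repeated RSS measurements before predicting. The paper's statistical filtering rules are not modeled.

```python
# A sketch of SVR-based indoor positioning: offline fingerprint training,
# online k-measurement averaging. Data and hyperparameters are invented.
import numpy as np
from sklearn.svm import SVR

# offline: RSS from 3 access points recorded at known (x, y) positions
rss_train = np.array([[-40, -60, -70], [-55, -45, -65],
                      [-70, -50, -40], [-60, -65, -45]])
pos_train = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 3.0], [3.0, 4.0]])

model_x = SVR(kernel="rbf", C=10.0).fit(rss_train, pos_train[:, 0])
model_y = SVR(kernel="rbf", C=10.0).fit(rss_train, pos_train[:, 1])

# online: k = 5 noisy measurements of the same point, averaged for stability
rng = np.random.default_rng(0)
measurements = np.array([[-54, -47, -63]] * 5) + rng.normal(0, 2, (5, 3))
rss = measurements.mean(axis=0, keepdims=True)
print(model_x.predict(rss)[0], model_y.predict(rss)[0])
```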
ZHOU Quan-Qiang, ZHANG Fu-Zhi, LIU Wen-Yuan
2014, 25(11):2652-2665. DOI: 10.13328/j.cnki.jos.004550
Abstract: Existing detection approaches cannot effectively detect unknown recommendation attacks. To address this problem, an approach based on bionic pattern recognition is proposed. Firstly, items are partitioned into different windows according to their popularity. The ratings given by users to the items in each window are regarded as occurrences of random events, and information entropy is used to extract features of the rating distribution as genuine-profile features for detecting recommendation attacks. The technique of bionic pattern recognition is then used to cover the samples of genuine profiles in the feature space; test data outside the coverage are judged to be recommendation attacks. Experimental results on the MovieLens dataset show that the proposed approach achieves a high hit ratio and a low false alarm ratio when detecting unknown recommendation attacks.
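The feature-extraction step can be sketched as follows, with the window count and toy data as assumptions: rank items by popularity, split them into windows, and take the entropy of the user's rating distribution in each window as one feature. The bionic-pattern-recognition covering stage is omitted.

```python
# A sketch of the entropy-feature extraction over popularity windows;
# window count, binning and toy data are assumptions.
import math
from collections import Counter

def window_entropy_features(user_ratings, item_popularity, n_windows=3):
    ranked = sorted(item_popularity, key=item_popularity.get, reverse=True)
    size = max(len(ranked) // n_windows, 1)
    features = []
    for w in range(n_windows):
        window = set(ranked[w * size:(w + 1) * size])
        ratings = [r for item, r in user_ratings.items() if item in window]
        if not ratings:
            features.append(0.0)
            continue
        counts, n = Counter(ratings), len(ratings)
        features.append(sum(c / n * math.log2(n / c) for c in counts.values()))
    return features   # rating-distribution entropy per popularity window

pop = {"i1": 90, "i2": 70, "i3": 40, "i4": 20, "i5": 5, "i6": 2}
attacker = {"i1": 5, "i2": 5, "i3": 5, "i4": 5}   # uniform top ratings
print(window_entropy_features(attacker, pop))      # [0.0, 0.0, 0.0]
```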
WEI Xiao-Long, LI Jian-Hai, XU Hao-Jun, YANG Hai-Dong
2014, 25(11):2666-2674. DOI: 10.13328/j.cnki.jos.004552
Abstract: A hybrid slot MAC protocol based on a reservation mechanism for multi-density ad hoc networks, abbreviated RTV, is presented in this paper. RTV distinguishes between reservation and contention-free services to provide different qualities of access service. A variable-delay reservation algorithm is adopted to support services with different delay requirements, which solves the problem that TDMA is not applicable to multi-density networks. The throughput and slot-utilization rate of RTV are derived. Simulation results indicate that RTV performs well in throughput and slot-utilization rate in large multi-density networks.
MA Wen-Ping, HUANG Yuan-Yuan, LI Hao, LI Xiao-Ting, JIAO Li-Cheng
2014, 25(11):2675-2689. DOI: 10.13328/j.cnki.jos.004519
Abstract: This paper proposes a new image segmentation method based on rough-fuzzy sets and a differential immune clone clustering algorithm (DICCA). By replacing hard clustering with fuzzy clustering through the incorporation of rough-fuzzy sets into DICCA, the algorithm obtains richer clustering information. In particular, since the advantage of rough sets lies in processing uncertain data, the proposed algorithm is better suited to solving uncertainty problems. In the experiments, nine images are segmented and four algorithms are chosen for comparison to validate clustering stability. The experimental results show that the algorithm achieves higher segmentation accuracy and better segmentation results.
LU Ji-Yuan, HOU Fang, HUANG Cheng-Hui, LIU Yu-Xi, CHAO Hong-Yang
2014, 25(11):2690-2701. DOI: 10.13328/j.cnki.jos.004695
Abstract: As more and more flexible block modes are introduced into video coding, mode decision becomes an important coding tool whose performance has great impact on both coding performance and computational complexity. This article proposes a complexity-controllable multi-mode decision algorithm to attain optimized coding performance under different computational complexity constraints. Instead of merely speeding up multi-mode decision, the algorithm predicts the Lagrangian cost vs. complexity slope (J-C slope) of mode decision for each macroblock (MB) by exploiting temporal and spatial correlations. The larger the J-C slope, the higher the coding gain per unit of computational cost. When computational resources are limited, multi-mode decision should be performed in a cost-effective order, i.e. the order of the J-C slopes, to achieve optimized coding performance. In addition, an adaptive method is proposed to dynamically adjust the computational complexity of the algorithm by discovering the relationship between J-C slope thresholds and their corresponding complexity. Experiments demonstrate that the proposed algorithm can both precisely adjust computational complexity and optimally perform mode decision under different computational constraints.
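The budgeted scheduling idea reduces to a simple greedy rule, sketched below with invented slopes and costs: spend the complexity budget on full mode decision in decreasing order of predicted J-C slope, since each unit of computation then buys the most coding gain. The slope prediction from temporal/spatial neighbors is not modeled.

```python
# A sketch of budgeted mode decision: run full mode decision in decreasing
# predicted J-C slope order until the complexity budget is spent.
def schedule_mode_decision(macroblocks, budget):
    # macroblocks: (mb_id, predicted_jc_slope, full_md_cost) triples
    full_md, remaining = [], budget
    for mb_id, slope, cost in sorted(macroblocks, key=lambda m: -m[1]):
        if cost <= remaining:
            full_md.append(mb_id)   # worth the cycles: full mode decision
            remaining -= cost
    return full_md                  # the rest get a cheap fallback mode

mbs = [("mb0", 5.0, 3), ("mb1", 0.5, 3), ("mb2", 2.5, 3), ("mb3", 4.0, 2)]
print(schedule_mode_decision(mbs, budget=7))   # ['mb0', 'mb3']
```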
ZHANG Jian-Hua, ZHANG Wen-Bo, XU Ji-Wei, WEI Jun, ZHONG Hua, HUANG Tao
2014, 25(11):2702-2714. DOI: 10.13328/j.cnki.jos.004548
Abstract: With the development and popularization of virtualization technology, more and more enterprises deploy their business-critical systems on virtualization platforms. While reducing hardware and management costs, virtualization also brings severe challenges for system reliability. Although runtime system state replication backup can improve a system's failure recovery capabilities, it also introduces significant overhead. This paper presents a performance optimization method for system failure recovery based on a hidden Markov model. It analyzes the runtime states of the system and calculates the probability of the system's running tendency. Business system optimization is achieved by dynamically adjusting resource allocation between the failure recovery function and normal business functions to reduce runtime overhead. Experimental results show that the presented approach can guarantee system reliability while reducing performance overhead by up to 2/3.
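The estimation step can be sketched with a two-state hidden Markov model and a standard forward pass: the posterior probability of the "degrading" state drives how much capacity the replication backup receives. The states, probabilities, and thresholds below are illustrative assumptions, not the paper's model.

```python
# A sketch of HMM-based risk estimation driving backup resource allocation;
# the two-state model and all numbers are invented for illustration.
import numpy as np

A = np.array([[0.9, 0.1], [0.3, 0.7]])   # transitions: healthy <-> degrading
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # P(obs | state): 0=normal, 1=anomalous
pi = np.array([0.95, 0.05])              # initial state distribution

def failure_trend_prob(observations):
    alpha = pi * B[:, observations[0]]   # forward algorithm
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
    alpha /= alpha.sum()
    return alpha[1]                      # P(degrading | observations)

obs = [0, 0, 1, 1, 1]                    # recent runtime indicators
p = failure_trend_prob(obs)
# give state replication more resources only when failure risk is high
backup_share = 0.6 if p > 0.5 else 0.2
print(f"P(degrading)={p:.2f}, backup resource share={backup_share}")
```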
MENG You, LUAN Zhong-Zhi, XIE Ming, QIAN De-Pei
2014, 25(11):2715-2730. DOI: 10.13328/j.cnki.jos.004549
Abstract: With the development of big-data computing, system-generated data become larger and more complex, yet systems for fault monitoring, stock analysis and health care require processing these data in near real time. Traditional data processing schemes such as "save-query" and "publish-subscribe" cannot handle such volumes of data at that speed. Complex event processing (CEP) is a data processing scheme that executes users' real-time queries: it continuously consumes high-volume raw data input and produces output for the corresponding data stream according to the queries. However, in some practical environments the data may exhibit many new patterns, and a CEP system cannot prepare for each of them in advance; an extensible CEP system is therefore needed. Existing CEP work mainly focuses on several special types of queries without a high-level overview, and therefore cannot easily guarantee the overall performance of the system. Although the NFA model provides universal semantics, its extensibility remains under-discussed. To address these deficiencies, an operator-based complex event processing model is proposed to support operator extension. In addition, a detailed analysis of the time consumption of the operator-based model is conducted and an optimization algorithm is presented. Finally, the correctness of the optimization is verified by experiments. Comparative experiments show that the optimized tree model is three times faster than the open-source project Esper.
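A minimal sketch of what "operator-based" buys over a fixed NFA, with invented operator names and query: each operator is a composable stream transformer, so a new pattern is supported by adding an operator rather than changing the engine. This illustrates the extension point only, not the paper's model or its tree optimization.

```python
# A sketch of an operator-based CEP pipeline: operators are composable
# stream transformers; the operators and the query are invented examples.
from typing import Callable, Iterable, Iterator

Event = dict
Operator = Callable[[Iterable[Event]], Iterator[Event]]

def filter_op(pred) -> Operator:
    return lambda stream: (e for e in stream if pred(e))

def window_avg_op(key: str, size: int) -> Operator:
    def run(stream):
        buf = []
        for e in stream:
            buf.append(e[key])
            if len(buf) == size:          # tumbling window of `size` events
                yield {"avg_" + key: sum(buf) / size}
                buf.clear()
    return run

def pipeline(*ops: Operator) -> Operator:  # the operator-extension point
    def run(stream):
        for op in ops:
            stream = op(stream)
        return stream
    return run

events = [{"price": p} for p in (10, 12, 50, 11, 13, 52)]
query = pipeline(filter_op(lambda e: e["price"] < 40), window_avg_op("price", 2))
print(list(query(events)))   # [{'avg_price': 11.0}, {'avg_price': 12.0}]
```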