• Volume 25, Issue 11, 2014 Table of Contents
    • Higher-Order π-Calculus with the Mismatch Operator

      2014, 25(11):2433-2451. DOI: 10.13328/j.cnki.jos.004523

      Abstract (2809) HTML (1024) PDF 1.06 M (5106)

      Abstract:This paper mainly studies the axiomatization of higher-order process calculi with the mismatch operator. Firstly, it formulates the theory of open weak higher-order bisimulation and establishes important properties such as equivalence and congruence. Secondly, following the linearity-based method, it builds an axiom system for finite processes. Finally, based on the characterization of open weak higher-order bisimulation, it proves the completeness of the axiom system. This work provides a basis for designing and implementing an effective algorithm for checking the bisimulation equivalence of higher-order processes with the mismatch operator, and a theoretical reference for applications of modeling with higher-order processes.

    • Bounded Model Checking of C Programs Using Event Automaton Specifications

      2014, 25(11):2452-2472. DOI: 10.13328/j.cnki.jos.004609

      Abstract (3460) HTML (1002) PDF 1.14 M (5164)

      Abstract:This paper presents a technique that uses event automata to specify the safety properties of C programs and applies bounded model checking to verify whether a C program satisfies an event automaton property. An event automaton can specify a safety property based on the events generated by a program, including properties with infinite states. Since an event automaton is kept separate from the C program, it does not change the program's structure. The paper introduces the definition of an automaton reachability tree based on an event automaton, then uses automaton reachability trees and bounded model checking to build SMT (satisfiability modulo theories) models of the event automata and C programs. Finally, it supplies the SMT models to an SMT solver, and an algorithm generates counterexamples from the solver's results. Case studies and experimental results demonstrate that the presented approach can verify the event properties of software systems.
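
      As a toy illustration of the event-automaton idea (the paper encodes automata and programs as SMT models instead), the sketch below drives a small hand-written automaton over a bounded event trace; the property, states, and event names are all hypothetical.

```python
# Hypothetical sketch: an event automaton as a transition table, checked
# against a bounded event trace (the paper encodes this as SMT instead).

# Safety property: "close" must never occur before "open".
TRANSITIONS = {
    ("init", "open"): "opened",
    ("opened", "close"): "init",
    ("init", "close"): "error",   # closing an unopened resource
}

def violates(trace, bound, start="init"):
    """Return True if the first `bound` events reach the error state."""
    state = start
    for event in trace[:bound]:
        state = TRANSITIONS.get((state, event), state)  # ignore unmatched events
        if state == "error":
            return True
    return False

print(violates(["open", "close", "close"], bound=3))  # True: second close is premature
```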

    • Web Service Quality Metric Algorithm Employing Objective and Subjective Weight

      2014, 25(11):2473-2485. DOI: 10.13328/j.cnki.jos.004508

      Abstract (3689) HTML (1098) PDF 831.32 K (6244)

      Abstract:The existing methods for measuring Web service QoS (quality of service) are inaccurate because they cannot precisely quantify the ambiguity of user preferences and they neglect the characteristics of the QoS data set. This paper presents a QoS measuring algorithm that employs both subjective and objective weights. The new approach automatically adapts to user preferences through a subjective weight calculation method, and evaluates service performance accurately through an objective weight calculation method. Because the algorithm takes both subjective and objective aspects into account, the measurement results conform to user preferences and accurately reflect overall service performance. Theoretical analysis and experimental results on the real QWS data set show that the proposed algorithm can measure Web service QoS accurately.
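
      As a rough sketch of how subjective and objective weights can be combined (the paper's exact formulas are not reproduced here), the example below blends user-preference weights with entropy-based objective weights; the blending factor alpha and the toy data are assumptions.

```python
import numpy as np

def entropy_weights(q):
    """Objective weights from the dispersion of each normalized QoS attribute."""
    p = q / q.sum(axis=0)                       # column-wise distribution
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(len(q))
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()

def qos_score(q, user_pref, alpha=0.5):
    """Blend subjective (user preference) and objective (entropy) weights."""
    w_subj = user_pref / user_pref.sum()
    w = alpha * w_subj + (1 - alpha) * entropy_weights(q)
    return q @ w                                # one score per service

q = np.array([[0.9, 0.2, 0.8],                  # services x attributes, normalized
              [0.6, 0.9, 0.4],
              [0.3, 0.5, 0.9]])
print(qos_score(q, user_pref=np.array([3.0, 1.0, 1.0])))
```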

    • Prioritizing Pointer Analysis Algorithm Based on Points-to Updating

      2014, 25(11):2486-2498. DOI: 10.13328/j.cnki.jos.004596

      Abstract (2825) HTML (1103) PDF 856.52 K (5142)

      Abstract:Pointer analysis is a key technology in data flow analysis, and its results are the basis of compiler optimization and program transformation. Building on research into inclusion-based pointer analysis, this paper analyzes two problems in the Narse priority constraint evaluation algorithm: redundant constraint evaluation and the computational overhead of its priority evaluation model. The candidate set for constraint evaluation is determined from the points-to set update information of pointers, and a prioritizing pointer analysis algorithm based on points-to updating is presented. A constraint dependency graph is built from the pointer dereference dependences and pointer scalar dependences in constraint statements, and the priority of constraint evaluation is determined by these dependencies. A prioritizing algorithm based on the constraint dependency graph is presented to simplify the complex priority evaluation model of the Narse algorithm, and the overall framework of the optimized algorithm is provided. Experimental results on the SPEC 2000/SPEC 2006 benchmarks show that the algorithm delivers a significant improvement in both time and storage overhead compared with the Narse priority algorithm.
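
      A minimal Andersen-style sketch of the idea, assuming a toy set of copy constraints: constraints are re-evaluated only when a points-to set actually changes, and the worklist is ordered by a priority standing in for the constraint dependency graph. All names and priorities here are hypothetical.

```python
import heapq

# Copy constraints "dst ⊇ src" solved with a priority-ordered worklist.
copy_edges = {"a": ["b"], "b": ["c"]}           # b ⊇ a, c ⊇ b
points_to = {"a": {"x"}, "b": set(), "c": set()}
priority = {"a": 0, "b": 1, "c": 2}             # stands in for the dependency-graph order

worklist = [(priority[v], v) for v in points_to]
heapq.heapify(worklist)
while worklist:
    _, v = heapq.heappop(worklist)
    for dst in copy_edges.get(v, []):
        new = points_to[v] - points_to[dst]
        if new:                                  # re-evaluate only on points-to updates
            points_to[dst] |= new
            heapq.heappush(worklist, (priority[dst], dst))

print(points_to)  # {'a': {'x'}, 'b': {'x'}, 'c': {'x'}}
```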

    • Bug Prediction Method for Fine-Grained Source Code Changes

      2014, 25(11):2499-2517. DOI: 10.13328/j.cnki.jos.004559

      Abstract (3812) HTML (1608) PDF 1.93 M (5649)

      Abstract:Software changes constantly over its lifecycle to adapt to changing requirements and environments. To predict in a timely manner whether each change introduces bugs, researchers have proposed various bug prediction methods for software source code changes. However, existing methods have three deficiencies: 1) the prediction granularity is limited to coarse-grained levels (i.e. transaction or file level); 2) because the vector space model is used to represent software changes, the abundant information in software repositories, such as program structure, natural-language semantics, and history information, cannot be mined sufficiently; 3) only short-term prediction is explored, without considering the concept drift caused by new requirements, team restructuring, or other external factors during long-term software evolution. To overcome these shortcomings, a bug prediction method for source code changes is proposed. It makes predictions for fine-grained (i.e. statement-level) changes, which reduces quality assurance cost effectively. By deep mining of software repositories with static program analysis and natural-language semantic inference technologies, feature sets of changes are constructed in four aspects (context, content, time, and developer) and key factors that lead to bug injection are revealed. Characteristics of concept drift in the software evolution process are analyzed using a matrix of feature entropy differences, and an adaptive window algorithm with concept reviewing is proposed to stabilize long-term prediction. Experiments on six well-known open source projects demonstrate the effectiveness of the proposed method.
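
      For intuition, a minimal sketch of drift detection via feature entropy differences (a simplified reading of the paper's matrix-based analysis; the threshold and the single feature are assumptions):

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of a discrete feature column."""
    counts = Counter(values)
    total = len(values)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def drift_detected(old_window, new_window, threshold=0.5):
    """Hypothetical drift test: per-feature entropy difference vs. a threshold."""
    diffs = [abs(entropy(old) - entropy(new))
             for old, new in zip(old_window, new_window)]
    return max(diffs) > threshold

old = [["dev1"] * 8 + ["dev2"] * 2]              # one feature: committing developer
new = [["dev1", "dev2", "dev3", "dev4", "dev5"] * 2]
print(drift_detected(old, new))                  # True: team restructuring changed the mix
```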

    • Aggregation and Decomposition of Bipolar Information

      2014, 25(11):2518-2527. DOI: 10.13328/j.cnki.jos.004525

      Abstract (2604) HTML (942) PDF 650.48 K (4787)

      Abstract:Information aggregation is one of the basic means of information processing. This paper mainly discusses the behavior of several aggregation operators. Firstly, from the perspective of generalized adjunction, the properties of generalized aggregation operators are discussed. Given an aggregation operator A, two different methods are adopted to construct new aggregation operators greater (less) than or equal to A. Furthermore, several aggregation operators, such as bipolar t-norms and bipolar implications, are explored, and a condition is provided under which bipolar aggregation operators can be decomposed into two unipolar aggregation operators. Under this condition, the aggregation processing of positive as well as negative information is also presented.

    • Parallel Inference: A Method of Implicit Discourse Relation Detection

      2014, 25(11):2528-2555. DOI: 10.13328/j.cnki.jos.004546

      Abstract (3044) HTML (971) PDF 1.53 M (5382)

      Abstract:Discourse is a literary form that consists of semantically related and well-structured arguments. One of the key tasks of discourse analysis is to resolve the semantic relationship between arguments. Explicit relations are easy to detect, with an accuracy of nearly 90%, because of their direct cues. In contrast, implicit relations are difficult to detect, with an accuracy of only about 40%, since they lack direct cues. To solve this problem, this paper proposes the hypothesis that parallel arguments normally have consistent semantic relations. Based on this hypothesis, and exploiting the fact that explicit relations are easy to detect, the paper implements a method of implicit relation detection that uses explicit relations to infer implicit relations among parallel arguments. The method is evaluated on the standard Penn Discourse Treebank (PDTB), and the experimental results show an improvement of 17.26%.

    • Approach to Network Services Recommendation Based on Mobile Users' Location

      2014, 25(11):2556-2574. DOI: 10.13328/j.cnki.jos.004561

      Abstract (4079) HTML (963) PDF 1.12 M (7155)

      Abstract:Along with the development of wireless communication technologies and smart mobile devices, location-based services (LBS), with their characteristics of mobility, practicality, immediacy, and personalization, have been widely applied in military, transportation, and logistics domains, and have become one of the most promising mobile value-added services. Based on a proposed framework of location-based network services recommendation, this paper first provides an approach to compute mobile users' preference similarity from their geographic locations, and proves that it satisfies the general properties of a neighbor similarity measure. Then, drawing on the concept of trust in sociology, a new method is presented for calculating trust values. By incorporating both into the network services recommendation process, a location-based network services recommendation approach is proposed that effectively improves accuracy and reliability and mitigates both the data sparsity of the user similarity matrix and the cold-start user problem. Finally, experiments on the public MIT dataset show that the proposed algorithm is more accurate and feasible.
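
      A minimal sketch of one plausible ingredient, assuming preference similarity is computed from location-visit histories (the paper's geographic similarity measure and trust calculation are more elaborate and are not reproduced here):

```python
import math

def location_similarity(visits_u, visits_v):
    """Cosine similarity over users' location-visit count vectors (an assumption,
    standing in for the paper's location-derived preference similarity)."""
    places = set(visits_u) | set(visits_v)
    dot = sum(visits_u.get(p, 0) * visits_v.get(p, 0) for p in places)
    norm = math.sqrt(sum(c * c for c in visits_u.values())) * \
           math.sqrt(sum(c * c for c in visits_v.values()))
    return dot / norm if norm else 0.0

u = {"cafe": 5, "gym": 2, "office": 8}
v = {"cafe": 3, "office": 6, "park": 1}
print(round(location_similarity(u, v), 3))
```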

    • Flash-Aware Multi-Level Cache Management Strategy

      2014, 25(11):2575-2586. DOI: 10.13328/j.cnki.jos.004573

      Abstract (2941) HTML (1134) PDF 820.94 K (5734)

      Abstract:Solid state drives (SSD) based on flash memory have been widely used in various types of applications such as mobile devices, PCs, and servers. Compared with conventional disks, SSDs enjoy faster access speed, better shock resistance, and lower power consumption. However, SSDs will not completely replace disks as secondary storage in the short run due to inherent properties such as asymmetric read/write performance and a high price per gigabyte. Integrating SSDs and magnetic disks can combine their respective performance advantages to obtain high performance at low cost. This paper proposes a flash-aware multi-level cache scheme (FAMC) that considers the significant discrepancy between MLC-type and SLC-type SSDs. FAMC uses the two types of SSD as a cache layer between main memory and magnetic disk. Depending on the characteristics of data access in database applications and file management, FAMC conditionally caches pages evicted by the buffer manager to different types of SSD. FAMC considers the impact of write patterns and workload types on system performance, and adopts flash-aware algorithms and data structures to manage the data stored in SSD. Furthermore, in view of the asymmetry of replacement costs, the study proposes a flash-friendly buffer replacement policy. The strategy is implemented and evaluated on a simulated storage system based on SLC-type and MLC-type SSDs. The experimental results show that FAMC can significantly reduce system response time and disk I/O cost.

    • Study and Application of Temporal Quasi-Order Data Structure

      2014, 25(11):2587-2601. DOI: 10.13328/j.cnki.jos.004574

      Abstract (3405) HTML (978) PDF 991.15 K (5172)

      Abstract:Temporal indexing is one of the key technologies for temporal data management. This paper discusses a temporal data structure and its application to temporal data indexing. Most existing temporal data methods are based on an algebraic framework. This paper presents a temporal data structure based on a quasi-order relation that supports multithreaded data operations, and puts forward the temporal index TQOindex built on this structure. Firstly, it proposes the concept of a quasi-order relation on temporal data and studies the construction algorithm for the linear-order partition and its optimality property. Secondly, it applies the temporal data structure by building TQOindex on an expanded quasi-order set, which manages data on disk and can be used on common database platforms. TQOindex can dynamically index big data through its "one time, one set" query schema and an incremental update mechanism. Finally, simulations show the feasibility and validity of TQOindex. In conclusion, the temporal quasi-order data structure addresses the processing and integration of time and application-scene information for new kinds of data such as semantic data, XML data, and moving-object data, and the work in this paper offers remarkable extensibility.

    • Effective Link-Based Measure of Node Similarity on Information Networks

      2014, 25(11):2602-2615. DOI: 10.13328/j.cnki.jos.004578

      Abstract (3198) HTML (1313) PDF 855.51 K (5764)

      Abstract:Information networks are ubiquitous. These networks can be modeled by graphs where nodes refer to objects and edges are relationships between objects. Measuring node similarity in a graph is a fundamental problem of graph data management, and many real-world applications are based on similarity between objects, such as network analysis, information retrieval, and recommendation systems. A number of link-based similarity measures have been proposed; SimRank and personalized PageRank are among the most popular and influential. The two measures differ from each other because they depend on different paths. This paper proposes a similarity measure that takes advantage of both SimRank and personalized PageRank. The new measure strengthens the principle of SimRank: "Two objects are similar if they are referenced by similar objects." The paper also analyzes the new similarity measure in theory and proposes an optimization algorithm which reduces the time cost from O(kn^4) to O(knl), where k is the number of iterations, n is the number of nodes, and l is the number of edges. Experimental results demonstrate the effectiveness of the new similarity measure and the efficiency of the optimization algorithm.
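
      For reference, a naive SimRank iteration illustrating the principle the new measure strengthens; this is the textbook O(kn^4) form, not the paper's combined measure or its O(knl) optimization.

```python
import numpy as np

def simrank(adj, c=0.8, iters=10):
    """Naive SimRank: two nodes are similar if their in-neighbors are similar.
    O(k*n^4) as written; the paper's optimization lowers the cost to O(k*n*l)."""
    n = len(adj)
    s = np.eye(n)
    in_nbrs = [[j for j in range(n) if adj[j][i]] for i in range(n)]
    for _ in range(iters):
        t = np.eye(n)
        for a in range(n):
            for b in range(n):
                if a != b and in_nbrs[a] and in_nbrs[b]:
                    t[a][b] = c * sum(s[i][j] for i in in_nbrs[a]
                                      for j in in_nbrs[b]) \
                              / (len(in_nbrs[a]) * len(in_nbrs[b]))
        s = t
    return s

adj = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]   # edges: 0 -> 1, 0 -> 2, 1 -> 2
print(np.round(simrank(adj), 3))          # nodes 1 and 2 share a referencer, so s[1][2] > 0
```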

    • Hybrid Cutting Algorithm for Packet Classification

      2014, 25(11):2616-2626. DOI: 10.13328/j.cnki.jos.004512

      Abstract (3055) HTML (973) PDF 764.76 K (4715)

      Abstract:Traditional packet classification algorithms based on space decomposition usually use only one heuristic to split the rule space, and they do not adopt different heuristics according to the characteristics of each dimension. This paper proposes a hybrid intelligent cutting (HIC) scheme for packet classification. HIC first partitions the ruleset according to IP prefix length. Then, taking into account the characteristics of the current cutting dimension in each subruleset, HIC uses bit cuttings and precise projection-point cuttings to cut the IP dimension and the port dimension, respectively. Finally, HIC builds a decision tree with hybrid cutting structures. Simulation results show that HIC has better scalability across different rulesets. Compared with EffiCuts, its time and space performance improve by 46% and 74%, respectively.

    • Analysis of Real Solutions Number by Four-Anchor Node Localization for Sensor Networks

      2014, 25(11):2627-2635. DOI: 10.13328/j.cnki.jos.004544

      Abstract (2665) HTML (1102) PDF 658.68 K (6806)

      Abstract:This paper studies the perception layer scheduling problem for the Internet of Things. In particular, it studies the real-solution classification of the three-dimensional four-anchor node localization problem. By employing inequality-proving theory and the corresponding analysis software DISCOVERER, the classification result for a specific four-anchor localization is derived. The mathematical description of the localization problem is given first, and the nonlinear equations arising from the traditional method are transformed into polynomial equations with inequality constraints. Inequality-proving theory and tools are then used to explore the solution classification with some parameters fixed, giving the complete distribution of solutions in this case. The solution classification criterion plays an important guiding role in practical applications and can also improve the performance of node layout and precise information perception.
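
      For intuition only: when the four anchors are in general position, subtracting the first distance equation from the others linearizes the system, which is the standard numerical route (the paper instead classifies the real solutions of the polynomial system symbolically).

```python
import numpy as np

def locate(anchors, dists):
    """Linearized four-anchor localization: subtract the first sphere equation
    (x-xi)^2+(y-yi)^2+(z-zi)^2 = di^2 from the rest to get a 3x3 linear system."""
    p0, d0 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - p0)
    b = (d0**2 - dists[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (p0**2).sum())
    return np.linalg.solve(A, b)

anchors = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
target = np.array([3.0, 4.0, 5.0])
dists = np.linalg.norm(anchors - target, axis=1)
print(locate(anchors, dists))   # ≈ [3. 4. 5.]
```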

    • Indoor Location Method Based on Support Vector Regression in 802.11 Wireless Environments

      2014, 25(11):2636-2651. DOI: 10.13328/j.cnki.jos.004545

      Abstract (3188) HTML (878) PDF 1.53 M (5069)

      Abstract:The widespread adoption of 802.11-based wireless LAN technology brings a good opportunity for the development of indoor positioning systems based on 802.11. In this paper, an 802.11-based indoor positioning method using support vector regression (SVR) is presented. The method consists of two periods: an offline training period and an online location period. An accurate position prediction model is built in the offline training period by SVR, and the exact position is determined in the online location period according to the received signal strength (RSS) of the mobile device. Due to the complex indoor environment, wireless channel congestion, obstructions, and the limited communication range of nodes, the RSS is volatile. To address these issues, data filtering rules obtained through statistical analysis are applied in the offline training period to improve the quality of the training samples, and thus the quality of the prediction model. In the online location period, k consecutive measurements are used to obtain a high-quality received signal strength input, which guarantees consistency with the training samples and improves the positioning accuracy for mobile devices. Performance evaluation and comprehensive analysis through intensive experiments show that the presented method achieves higher positioning accuracy than the probabilistic positioning method and the neural network positioning method, while its demands on the storage capacity and computing power of mobile devices remain low.
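
      A minimal sketch of the two periods under a toy log-distance path-loss model (the model, its parameters, and the survey layout are all assumptions; the paper's data filtering rules are statistical and are not reproduced):

```python
import numpy as np
from sklearn.svm import SVR

# Offline: train one SVR per coordinate on (RSS fingerprint -> position) samples.
rng = np.random.default_rng(0)
positions = rng.uniform(0, 20, size=(200, 2))            # hypothetical survey points
aps = np.array([[0.0, 0], [20, 0], [0, 20], [20, 20]])   # four access points
rss = np.stack([-40 - 20 * np.log10(1 + np.linalg.norm(positions - ap, axis=1))
                for ap in aps], axis=1)                   # toy path-loss model
rss += rng.normal(0, 2, rss.shape)                        # measurement noise

model_x = SVR(kernel="rbf", C=100).fit(rss, positions[:, 0])
model_y = SVR(kernel="rbf", C=100).fit(rss, positions[:, 1])

# Online: average k consecutive RSS measurements, then predict the position.
k_samples = rss[:5]                                       # stand-in for k readings
fingerprint = k_samples.mean(axis=0, keepdims=True)
print(model_x.predict(fingerprint), model_y.predict(fingerprint))
```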

    • Detecting Unknown Recommendation Attacks Based on Bionic Pattern Recognition

      2014, 25(11):2652-2665. DOI: 10.13328/j.cnki.jos.004550

      Abstract (2955) HTML (966) PDF 904.70 K (5261)

      Abstract:Existing detection approaches cannot detect unknown recommendation attacks effectively. To address this problem, an approach based on bionic pattern recognition is proposed. Firstly, items are partitioned into different windows according to their popularity, and the ratings given by users to the items in each window are regarded as occurrences of random events. Information entropy is then used to extract features of the rating distribution as genuine-profile features for the detection of recommendation attacks. In addition, bionic pattern recognition is used to cover the samples of genuine profiles in the feature space, and test data outside the coverage are judged to be recommendation attacks. Experimental results on the MovieLens dataset show that the proposed approach achieves a high hit ratio and a low false alarm ratio when detecting unknown recommendation attacks.
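
      A simplified sketch of the entropy feature extraction, assuming three hand-picked popularity windows (the paper's window construction and the bionic-pattern-recognition coverage step are not reproduced):

```python
import math
from collections import Counter

def window_entropy_features(user_ratings, windows):
    """Rating-distribution entropy per popularity window (a simplified reading
    of the paper's feature extraction; window boundaries are assumptions)."""
    features = []
    for window in windows:                       # each window: a set of item ids
        ratings = [r for item, r in user_ratings.items() if item in window]
        total = len(ratings)
        if total == 0:
            features.append(0.0)
            continue
        counts = Counter(ratings)
        features.append(-sum(c / total * math.log2(c / total)
                             for c in counts.values()))
    return features

popular, mid, unpopular = {1, 2, 3}, {4, 5, 6}, {7, 8, 9}
genuine = {1: 5, 2: 4, 4: 3, 7: 2}
attacker = {1: 5, 2: 5, 3: 5, 7: 5, 8: 5}        # flat, targeted ratings
print(window_entropy_features(genuine, [popular, mid, unpopular]))
print(window_entropy_features(attacker, [popular, mid, unpopular]))  # low entropy
```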

    • Design and Analysis of Kinematic Delay MAC for Non-Uniform Ad Hoc Network

      2014, 25(11):2666-2674. DOI: 10.13328/j.cnki.jos.004552

      Abstract (3271) HTML (1028) PDF 814.24 K (5335)

      Abstract:A hybrid-slot MAC protocol based on a reservation mechanism for multi-density ad hoc networks, referred to as RTV, is presented in this paper. RTV distinguishes between reservation and contention-free services to provide different qualities of access service. A variable-delay reservation algorithm is adopted to support services with different delay requirements, which solves the problem that TDMA is not applicable to multi-density networks. The throughput and slot-utilization rate of RTV are derived. Simulation results indicate that RTV performs well in terms of throughput and slot-utilization rate in large multi-density networks.

    • Image Segmentation Based on Rough Set and Differential Immune Fuzzy Clustering Algorithm

      2014, 25(11):2675-2689. DOI: 10.13328/j.cnki.jos.004519

      Abstract (3346) HTML (984) PDF 1.68 M (5537)

      Abstract:In this paper, a new image segmentation method based on rough-fuzzy sets and a differential immune clone clustering algorithm (DICCA) is proposed. By incorporating rough-fuzzy sets into DICCA and replacing hard clustering with fuzzy clustering, the algorithm obtains richer clustering information. In particular, since the strength of rough sets is processing uncertain data, the proposed algorithm is well suited to solving problems involving uncertainty. In the experiments, nine images are segmented and four algorithms are chosen for comparison to validate clustering stability. The experimental results show that the algorithm achieves higher segmentation accuracy and better segmentation results.
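
      For reference, the standard fuzzy c-means membership update that distinguishes fuzzy from hard clustering (only one ingredient of the kind of method described; the rough-set and immune-clone components are omitted):

```python
import numpy as np

def fuzzy_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership update: u_ik is inversely related to
    the distance from point i to center k, with fuzzifier m."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)  # rows sum to 1

X = np.array([[0.0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9]])
centers = np.array([[0.0, 0], [5.0, 5]])
print(np.round(fuzzy_memberships(X, centers), 3))
```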

    • Multi-Mode Decision Under Computational Resource Constraints for Video Coding

      2014, 25(11):2690-2701. DOI: 10.13328/j.cnki.jos.004695

      Abstract (3600) HTML (951) PDF 1.10 M (5382)

      Abstract:As more and more flexible block modes are introduced into video coding, mode decision becomes an important coding tool. The performance of mode decision has great impact on both coding performance and computational complexity. This article proposes a complexity-controllable multi-mode decision algorithm to attain optimized coding performance under different computational complexity constraints. Instead of merely speeding up multi-mode decision, the algorithm predicts the Lagrangian cost and complexity slope (J-C slope) of mode decision for each macroblock (MB) by exploiting temporal and spatial correlations. The larger the J-C slope, the higher the coding gain per unit of computational cost. Under limited computational resources, multi-mode decision should therefore be performed in a cost-effective order, i.e. in order of J-C slope, to achieve optimized coding performance. In addition, an adaptive method is proposed to adjust the computational complexity of the algorithm dynamically by discovering the relationship between J-C slope thresholds and their corresponding complexity. Experiments demonstrate that the proposed algorithm can both precisely adjust the computational complexity and optimally perform mode decision under different computational constraints.
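
      A hypothetical sketch of the budgeted ordering: macroblocks with the steepest predicted J-C slope get full mode decision first, until the complexity budget runs out. The slopes, costs, and budget below are made up.

```python
def schedule_mode_decision(macroblocks, budget):
    """macroblocks: list of (mb_id, predicted_j_c_slope, predicted_cost)."""
    chosen = []
    for mb_id, slope, cost in sorted(macroblocks, key=lambda t: -t[1]):
        if cost <= budget:                 # highest gain per unit of computation first
            chosen.append(mb_id)
            budget -= cost
    return chosen                          # remaining MBs fall back to a cheap mode

mbs = [("mb0", 0.9, 40), ("mb1", 0.2, 30), ("mb2", 0.7, 50), ("mb3", 0.5, 20)]
print(schedule_mode_decision(mbs, budget=100))  # ['mb0', 'mb2']
```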

    • Approach of Virtual Machine Failure Recovery Based on Hidden Markov Model

      2014, 25(11):2702-2714. DOI: 10.13328/j.cnki.jos.004548

      Abstract (3043) HTML (999) PDF 919.98 K (5391)

      Abstract:With the development and popularization of virtualization technology, more and more enterprises deploy their business-critical systems on virtualization platforms. While reducing hardware and management costs, virtualization also brings severe challenges to system reliability. Although runtime system-state replication can improve a system's failure recovery capabilities, it introduces substantial overhead. This paper presents a performance optimization method for system failure recovery based on a hidden Markov model. It analyzes the runtime states of the system and computes the probability of the system's running trend. Optimization is achieved by dynamically adjusting resource allocation between the failure recovery function and normal business functions to reduce runtime overhead. Experimental results show that the presented approach can guarantee system reliability while reducing the performance overhead by up to 2/3.
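
      A minimal sketch of the HMM ingredient, assuming two hidden states and binary observations: the forward recursion yields the probability of the system's running trend. All states and probabilities are illustrative, not the paper's parameters.

```python
import numpy as np

states = ["healthy", "degrading"]
A = np.array([[0.9, 0.1],          # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],          # P(observation | state): cols = [normal, anomalous]
              [0.3, 0.7]])
pi = np.array([0.95, 0.05])

def forward(obs):
    """P(state | observations so far), via the normalized forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

print(forward([0, 1, 1]))   # growing probability of 'degrading' after anomalies
```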

    • Operator-Based Extendable Complex Event Processing Model

      2014, 25(11):2715-2730. DOI: 10.13328/j.cnki.jos.004549

      Abstract (2885) HTML (917) PDF 1.30 M (5107)

      Abstract:With the development of big-data computing, system-generated data become larger and more complex, yet systems such as fault monitoring, stock analysis, and health care require processing these data in near real time. Traditional data processing schemes such as "save-and-query" and "publish-subscribe" cannot handle such data volumes at that speed. Complex event processing (CEP) is a data processing scheme that executes users' real-time queries: it continuously consumes high-volume raw data and produces output on the corresponding data stream according to the queries. In practical environments, however, the data may exhibit many new patterns that a CEP system cannot prepare for in advance, so an extendable CEP system is needed. Existing CEP work mainly focuses on several special types of queries without a high-level overview, and therefore cannot easily guarantee the overall performance of the system. Although the NFA model provides universal semantics, its extensibility remains under-discussed. To address these defects, an operator-based complex event processing model is proposed to support operator extension. In addition, a detailed analysis of the time consumption of the operator-based model is conducted and an optimization algorithm is presented. Finally, the correctness of the optimization is verified by experiments: comparative experiments show that the optimized tree model is three times faster than the open-source project Esper.
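
      A minimal sketch of what an operator-based, extendable model can look like: each operator transforms an event stream, and new operators plug in without touching the pipeline. The interface and operator names are assumptions, not the paper's API.

```python
# Hypothetical operator abstraction: operators compose into a pipeline.
class Operator:
    def process(self, stream):
        raise NotImplementedError

class Filter(Operator):
    def __init__(self, predicate):
        self.predicate = predicate
    def process(self, stream):
        return (e for e in stream if self.predicate(e))

class Window(Operator):
    def __init__(self, size):
        self.size = size
    def process(self, stream):
        buf = []
        for e in stream:
            buf.append(e)
            if len(buf) == self.size:    # emit a sliding window of `size` events
                yield list(buf)
                buf.pop(0)

def pipeline(stream, *operators):
    for op in operators:                 # extension point: plug in new Operator subclasses
        stream = op.process(stream)
    return stream

events = [{"price": p} for p in (10, 12, 9, 15, 14)]
for window in pipeline(events, Filter(lambda e: e["price"] > 9), Window(2)):
    print(window)
```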

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn