• Volume 21, Issue 11, 2010 Table of Contents
    • Decomposition of Large Class Diagrams Generated by Reverse Engineering

      2010, 21(11):2701-2710.

      Abstract:This paper proposes an approach to decomposing large class diagrams. It first collects coupling metrics among classifiers (classes and interfaces). Following the principle of high cohesion and low coupling, it separates loosely coupled classifiers into different diagrams while keeping highly coupled classifiers in the same diagram. To guarantee that the generated class diagrams remain readable, it confines the size of each new diagram to a predefined range. Evaluation results on industrial projects suggest that the approach is practical and valuable and that it helps improve the readability of software models.
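The coupling-driven grouping described above can be sketched as a simple greedy procedure. This is an illustrative reconstruction, not the paper's algorithm; the coupling-matrix representation and size limit are assumptions.

```python
def decompose(classes, coupling, max_size):
    """Greedily group classifiers so highly coupled ones share a diagram,
    keeping each diagram within max_size classifiers (hypothetical sketch)."""
    def link(a, b):
        # symmetric coupling strength between two classifiers
        return coupling.get((a, b), 0) + coupling.get((b, a), 0)

    remaining, diagrams = set(classes), []
    while remaining:
        # seed each diagram with the classifier most coupled overall
        seed = max(remaining, key=lambda c: sum(link(c, o) for o in remaining))
        group = {seed}
        remaining.discard(seed)
        while len(group) < max_size and remaining:
            best = max(remaining, key=lambda c: sum(link(c, g) for g in group))
            if sum(link(best, g) for g in group) == 0:
                break  # only loosely coupled classifiers left: start a new diagram
            group.add(best)
            remaining.discard(best)
        diagrams.append(group)
    return diagrams

coupling = {("A", "B"): 3, ("B", "C"): 2, ("D", "E"): 4}
print(decompose(["A", "B", "C", "D", "E"], coupling, 3))
```

With the toy coupling table above, the loosely coupled pair {D, E} ends up in its own diagram while {A, B, C} stays together.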

    • Weak Join for the Constraint-Based Polyhedra Abstract Domain

      2010, 21(11):2711-2724.

      Abstract:The main tractability problem of the constraint-based polyhedra abstract domain stems from the costly (strong) join operation, that is, the convex hull computation. This paper presents a series of cheap weak join operations as a sound substitute for the convex hull operation of the constraint-based polyhedra domain. To achieve a trade-off between efficiency and precision, a heuristic strategy is proposed that dynamically combines strong and weak joins during program analysis. Experimental results show that the weak join operation can significantly improve the efficiency, scalability, and robustness of the constraint-based polyhedra domain.
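The cheap-join-versus-precise-join trade-off can be illustrated, purely as an analogy in the much simpler interval (box) domain rather than the paper's constraint-based polyhedra, by the component-wise box hull: it is sound and costs only O(dimensions), but loses relations between variables that a polyhedral convex hull would keep.

```python
def box_join(b1, b2):
    """Component-wise interval hull: a cheap, sound over-approximation of
    the union of two boxes (each box: list of (lo, hi) pairs per variable)."""
    return [(min(l1, l2), max(h1, h2))
            for (l1, h1), (l2, h2) in zip(b1, b2)]

# Two 2-D boxes: the hull keeps soundness but drops any relation between
# the two coordinates that a polyhedral join could express.
print(box_join([(0, 1), (0, 1)], [(2, 3), (2, 3)]))  # → [(0, 3), (0, 3)]
```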

    • Identification of Centrally Managed Aggregations in Design Patterns Using Shape Analysis

      2010, 21(11):2725-2737.

      Abstract:In this paper, a shape-analysis-based approach is proposed to automatically identify aggregations implemented using the most common implementation mechanism, pointers or references. First, this paper augments the predicates of Sagiv’s three-valued logical structures to describe the semantic constraints for central aggregation management operations on linked lists. Then, this paper presents a method to identify aggregation management behavior by analyzing the changes of shape structures of linked lists along control flow paths. Finally, the effectiveness of the proposed 1-n aggregation identification approach is demonstrated using a case study from the open-source software JEdit.

    • Method for Generating Software Architecture Models from Formal Specifications

      2010, 21(11):2738-2751.

      Abstract:This paper uses LOTOS to describe the requirement specifications of real-time systems and proposes a method for generating software architecture models from formal specifications by establishing a mechanism that translates LOTOS specifications into UML-RT models. Finally, this paper illustrates how to apply the method when modeling real-time software. The UML-RT models generated by this method can increase the reliability of real-time software design.

    • Syntax and Semantics of Timed Property Sequence Chart

      2010, 21(11):2752-2767.

      Abstract:In this paper, in order to give the property sequence chart timed expressiveness, it is extended into a timed property sequence chart, whose semantics is defined in terms of timed Büchi automata. Then, the expressive power of the timed property sequence chart is measured against recently proposed real-time specification patterns. Finally, the use of the timed property sequence chart is illustrated in a case study, which shows its broad application prospects in real-time systems.

    • Component Substitutability Analysis and Verification Based on Behavior Automata

      2010, 21(11):2768-2781.

      Abstract:This paper discusses component substitutability at the protocol level. Component behavior is modeled by the component behavior automaton (CBA), a special kind of nondeterministic finite automaton (NFA). Based on CBA, a component substitutability analysis model is presented that contains four substitutability types partitioned along two dimensions: component environment transparency and interaction similarity. This model ensures interaction compatibility better than a traditional subtype-based model, and related verification algorithms are developed to analyze component substitutability automatically. To make component substitution more precise and increase component reuse, the model takes as reference the actual interactive behavior of the substituted component as expressed in the component environment; this reference behavior is formally defined by analyzing the actions through which the substituted component is bound within the environment.

    • Covering Model of Granular Computing

      2010, 21(11):2782-2789.

      Abstract:To investigate the basic problems of granular computing, such as granulation and computing with granules, in a more general setting, this paper introduces a covering model of granular computing by relaxing the three conditions of equivalence relations, which generalizes the existing models. Under this model, zoom-in and zoom-out operators are defined, respectively. Different combinations of zoom-in and zoom-out operators form different rough approximations of the universe of discourse and of the granulated universe of discourse. This paper studies their properties and establishes their relationship with topological spaces and Galois connections.
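The move from a partition (equivalence classes) to a covering can be made concrete with the classical covering-based lower and upper approximations; this is a minimal sketch in the spirit of the model, not the paper's zoom operators, which act between universes of different granularity.

```python
def lower_approx(cover, X):
    """Union of covering blocks entirely contained in X (zoom-in flavor)."""
    blocks = [B for B in cover if B <= X]
    return set().union(*blocks) if blocks else set()

def upper_approx(cover, X):
    """Union of covering blocks that intersect X (zoom-out flavor)."""
    blocks = [B for B in cover if B & X]
    return set().union(*blocks) if blocks else set()

cover = [{1, 2}, {2, 3}, {3, 4, 5}]  # blocks overlap: a cover, not a partition
X = {2, 3}
print(lower_approx(cover, X))  # → {2, 3}
print(upper_approx(cover, X))  # → {1, 2, 3, 4, 5}
```

Because blocks may overlap, the lower approximation is no longer simply the union of equivalence classes inside X, which is exactly what relaxing the partition conditions buys.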

    • Highly Efficient Complex System Genetic Algorithm

      2010, 21(11):2790-2801.

      Abstract:To overcome the problems of the genetic algorithm, such as low efficiency, this paper redesigns the genetic algorithm using complex system theory. First, the selection operator is rebuilt around the power law, which is regarded as the self-organized criticality of complex systems and a sound mechanism of energy distribution. Second, the crossover operator is redesigned based on the characteristics of a self-learning complex system. Third, the generation strategy is improved through a feedback mechanism. Finally, a gene floating operator is added to the algorithm. Because all operators balance and restrict each other, the newly designed algorithm, the complex system genetic algorithm (CSGA), markedly improves efficiency and alleviates premature convergence. Experiments show that CSGA is capable of handling high-dimensional global optimization problems.
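A power-law-biased selection operator of the general kind described above might be sketched as follows; the rank-based weighting and the exponent are illustrative assumptions, not the paper's exact design.

```python
import random

def power_law_select(population, fitness, alpha=1.5):
    """Select one individual with probability proportional to rank**(-alpha):
    fit individuals dominate, but poor ones keep a long-tail chance,
    which helps preserve diversity against premature convergence."""
    ranked = sorted(population, key=fitness, reverse=True)
    weights = [(i + 1) ** -alpha for i in range(len(ranked))]
    return random.choices(ranked, weights=weights, k=1)[0]

random.seed(0)
pop = [3, 7, 1, 9, 4]
picks = [power_law_select(pop, fitness=lambda x: x) for _ in range(1000)]
print(picks.count(9) > picks.count(1))  # the fittest is chosen far more often
```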

    • Clustering Algorithm Based on the Distributions of Intrinsic Clusters

      2010, 21(11):2802-2813.

      Abstract:To find a flexible way to solve the model misfit problem, a clustering algorithm based on the distributions of intrinsic clusters (CADIC) is proposed, which implicitly integrates distribution characteristics into the clustering framework by applying rescaling operations. In the clustering process, a set of discriminative directions is chosen to construct the CADIC coordinate system, under which the distribution characteristics are analyzed in order to design rescaling functions. Along every axis, the rescaling functions implicitly normalize the data distribution so that more reasonable clustering decisions can be made, improving the reliability of those decisions. By using a K-means-like iteration strategy, CADIC keeps the same time complexity as K-means. Experiments on well-known benchmark datasets show that the CADIC framework is reasonable and that its performance in text clustering is comparable to that of state-of-the-art algorithms.

    • Semi-Supervised Cluster Ensemble Model Based on Bayesian Network

      2010, 21(11):2814-2825.

      Abstract:Most existing cluster ensemble algorithms are unsupervised and cannot take advantage of known information about the datasets, which degrades the precision, robustness, and stability of the cluster ensemble. To overcome these disadvantages, a semi-supervised cluster ensemble (SCE) model, based on both semi-supervised learning and ensemble learning, is designed in this paper. There are three main contributions. First, SCE is proposed, and its variational inference is described in detail. Second, building on this, an EM (expectation maximization) algorithm for SCE is presented step by step. Third, datasets drawn from the UCI (University of California, Irvine) machine learning repository are used in experiments, which show that SCE and its EM algorithm work well for semi-supervised cluster ensembles and outperform NMFS (a nonnegative-matrix-factorization-based semi-supervised algorithm), semi-supervised SVM (support vector machine), and LVCE (latent variable model for cluster ensemble). The semi-supervised cluster ensemble model is first stated in this paper; it combines the advantages of both semi-supervised learning and cluster ensembles, and therefore outperforms semi-supervised clustering and cluster ensembles used alone.

    • Possibilistic Extension Rules for Reasoning and Knowledge Compilation

      2010, 21(11):2826-2837.

      Abstract:First, the possibilistic extension rule, a possibilistic manifestation of the extension rule, is proposed. A reasoning mechanism in possibilistic logic using the possibilistic extension rule is then introduced, with the concept of the complementary factor used to estimate its complexity. The definitions of entailment-tractable and satisfiability-tractable are extended to introduce the concepts of possibilistic entailment-tractable and consistency-computing tractable. Based on possibilistic extension rules, the EPPCCCL (each pair of possibilistic clauses contains complementary literals) theory is introduced and proven to be both possibilistic entailment-tractable and consistency-computing tractable; therefore, it can be regarded as a target language for possibilistic knowledge compilation.

    • Classification Algorithm Combined GCM with CCM in Cancer Recognition

      2010, 21(11):2838-2851.

      Abstract:In this paper, two cancer recognition models, the global component model (GCM) and the cancer component model (CCM), are constructed. Because GCM and CCM complement each other, a weighted voting strategy is applied, and an ensemble algorithm based on GCM and CCM for cancer recognition (EAGC) is proposed. Independent test experiments and cross-validation experiments are conducted on the Leukemia, Breast, Prostate, DLBCL, Colon, and Ovarian cancer datasets, respectively, and EAGC performs well on all of them. The experimental results show that recognition performance, solution quality, and generalization are strengthened by the combination of GCM and CCM.

    • Analyzing the Characteristics of Application Traffic Behavior Based on Chi-Square Statistics

      2010, 21(11):2852-2865.

      Abstract:Based on chi-square statistics and tests, this paper proposes a method named ABSA (application behavior significance assessment) to analyze the traffic behavior characteristics of applications. ABSA does not focus on any particular application; instead, it provides a quantitative standard for describing the behavior distribution differences among applications, so that the traffic behavior characteristics and their corresponding significances can be determined. Theoretical analysis and experimental results show that 1) ABSA can present characteristic information more precisely and richly, improving the accuracy of application identification; 2) the significance of a characteristic is independent of its proportion in the sample total; and 3) ABSA keeps the relative significance ordering of behavior characteristics unchanged in a packet-sampling environment, which is often used by NetFlow and many other flow-information collection systems, thereby simplifying characteristic re-selection when the sampling ratio changes.
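The chi-square statistic underlying such a significance assessment is standard Pearson chi-square; a minimal sketch follows, with made-up counts for illustration (the bucketing of traffic features is an assumption, not the paper's feature set).

```python
def chi_square(observed, expected):
    """Pearson's chi-square statistic over paired observed/expected counts:
    the larger the value, the more an application's behavior distribution
    deviates from the overall traffic mix."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed packet-size-bucket counts for one application versus the counts
# expected if it behaved like the overall traffic mix (illustrative numbers).
observed = [50, 30, 20]
expected = [33.3, 33.3, 33.4]
print(round(chi_square(observed, expected), 2))  # → 14.08
```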

    • Performance Optimization for IEEE 802.11 with QoS Differentiation Supporting

      2010, 21(11):2866-2882.

      Abstract:In this paper, the extended application to QoS-supporting networks of an approximating optimization formula, first proposed in the literature to simplify the optimization of networks with a single traffic class, is analytically justified and numerically investigated. Based on the formula, an adaptive optimization scheme named QATC (QoS-supporting adaptive transmission control) is proposed for IEEE 802.11 to optimize system performance while providing service differentiation among traffic classes. The proposed scheme works around the difficult estimation of the number of stations: it utilizes information acquired from channel sensing to adjust packet transmission adaptively toward an invariant system objective under different network conditions. Furthermore, a scheme to optimize multi-rate networks is also explored. Simulation results show that QATC can effectively optimize network operation adaptively under various conditions and greatly improves system throughput compared with the standard IEEE 802.11 MAC and related enhancements. Moreover, the system throughput achieved by the scheme is very close to the theoretical optimum.

    • Localization in Wireless Sensor Network Using Locality Preserving Canonical Correlation Analysis

      2010, 21(11):2883-2891.

      Abstract:In this paper, the deployment of a wireless sensor network is analyzed in accordance with its manifold distribution. Next, the locality preserving canonical correlation analysis (LPCCA) algorithm is used to build a mapping from the signal-vector space to the physical location space, yielding a localization algorithm, location estimation via locality preserving canonical correlation analysis (LE-LPCCA for short), which fully takes the local structure of the network topology into account. Finally, experimental results on benchmark datasets show that this method achieves higher accuracy and stability than other localization algorithms.

    • Network Coding Method Using Heuristic Algorithms for Wireless Ad Hoc Networks

      2010, 21(11):2892-2905.

      Abstract:This paper first proves that the optimization problems concerning network coding are NP-complete, and then puts forward a network coding method called COMP that uses heuristic algorithms. The method adopts the basic ideas of the set packing and set cover heuristic algorithms, with the aim of uncovering as many network coding opportunities as possible. Simulation results obtained with the NS-2 simulator show that in scenarios with a large number of nodes, a small maximum transmission range, and a large number of sessions, this method effectively alleviates the impact of session concurrency, improves the performance of the existing broadcast algorithm, and achieves a greater performance improvement than the existing network coding method.
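The set-packing idea the method builds on can be shown with a small greedy sketch; this illustrates the general heuristic flavor only, not COMP's actual algorithm, and the packet-set representation is an assumption.

```python
def greedy_set_packing(sets):
    """Greedily choose pairwise-disjoint sets, largest first: each packet is
    used in at most one coded transmission, and bigger coding opportunities
    are preferred."""
    chosen, used = [], set()
    for s in sorted(sets, key=len, reverse=True):
        if not (s & used):          # disjoint from everything chosen so far
            chosen.append(s)
            used |= s
    return chosen

# Each set = packets that one XOR-coded broadcast could serve together.
opportunities = [{1, 2, 3}, {3, 4}, {4, 5}, {6}]
print(greedy_set_packing(opportunities))  # → [{1, 2, 3}, {4, 5}, {6}]
```

Because set packing is NP-complete, such a greedy pass gives no optimality guarantee, which is why heuristics are the natural tool here.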

    • Prerequisite to 1-Connected Vehicular Ad Hoc Networks in Highway Scenarios

      2010, 21(11):2906-2919.

      Abstract:Recent advances in micro-electromechanical systems (MEMS) technology, mobile computing, and wireless communications have enabled the rapid deployment of self-organizing, infrastructure-less, multi-hop, and distributed inter-vehicle communication (IVC) networks based on pre-existing road layouts. Vehicular traffic scenarios pose great challenges to physical topology connectivity, which is a prerequisite for providing reliable applications to the users of a vehicular ad hoc network (VANET). This paper provides a probabilistic analysis algorithm to calculate the necessary condition for 1-connected VANETs in highway scenarios. Extensive experiments were undertaken to verify the derived analytic expression against realistic mobility traces. The results demonstrate that the radio communication range of each node must be subject to Θ(|log(1−p^(1/n))|/n) in order to ensure that there are no isolated nodes in the entire network.
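The derived scaling term can simply be evaluated to see its qualitative behavior: for a fixed target connectivity probability p, the required (normalized) range shrinks as the number of vehicles n grows. The numeric evaluation below is illustrative only; units and constants hidden inside the Θ are not reconstructed here.

```python
import math

def range_scale(n, p=0.99):
    """The paper's scaling term |log(1 - p**(1/n))| / n for target
    connectivity probability p and n vehicles (illustrative evaluation)."""
    return abs(math.log(1 - p ** (1.0 / n))) / n

for n in (10, 100, 1000):
    print(n, round(range_scale(n), 5))  # value decreases as n grows
```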

    • Forward-Secure Multi-Signature in the Standard Model: Security Model and Construction

      2010, 21(11):2920-2932.

      Abstract:The formal security model of forward-secure multi-signatures is examined, and a forward-secure multi-signature scheme with provable security is proposed. Even if the current secret keys of all the signers are exposed, all signatures pertaining to previous periods remain valid under this scheme. The presented scheme is proven secure in the standard model.

    • Design of a Double Rounds Clustering Protocol for Improving Energy Efficiency in Wireless Sensor Networks

      2010, 21(11):2933-2943.

      Abstract:This paper focuses on the communication aspect and presents a protocol for WSNs to improve energy efficiency, called the Energy Efficient Double Rounds Clustering Protocol (EEDRCP). It improves on the LEACH protocol as follows: it adds an energy parameter to the cluster-head election algorithm; combines flat topology, hierarchical topology, and location-based routing into a hybrid topology; and uses double-round clustering instead of clustering every round. Finally, it is simulated in Matlab, and the results are compared with the LEACH and DIRECT protocols. The results show that it improves the energy performance of the network by 33%~82%.

    • Critical Vulnerability Analysis and Exploitation Based on Exception Capture

      2010, 21(11):2944-2958.

      Abstract:In this paper, critical vulnerability is examined in terms of its essence, analysis, and exploitation. First, this paper gives the definition of critical vulnerability, presents a necessary and sufficient condition for its existence, and proves that there is no universal detection procedure for critical vulnerability. Second, this paper proposes three basic conditions for judging whether a procedure has a critical vulnerability, examines the essential method of analyzing critical vulnerability using backtracking analysis, and proves that the time complexity of backtracking analysis grows exponentially, at least O(2^h). Lastly, this paper reduces critical vulnerability exploitation to solving sets of critical vulnerability equations, and gives an algorithm for solving such an equation set by means of a generalized equation and VC factorization. The paper then analyzes and computes two critical vulnerabilities of the Office series software.

    • Real-Time Subsurface Scattering Rendering Method for Dynamic Objects

      2010, 21(11):2959-2970.

      Abstract:A method is set forth to model heterogeneous translucent materials. Image-space single-scatter and multiple-scatter methods are proposed by decomposing the light propagation paths. Dynamic objects are handled easily by applying deferred shading to these methods, and real-time subsurface scattering effects for layered heterogeneous translucent materials can also be obtained with little extra effort.

    • Near Duplicate Keyframes Identifying and Correlation Analyzing of News Video Stories

      2010, 21(11):2971-2984.

      Abstract:The quadratic complexity required for measuring the similarity of news stories makes it intractable for large-volume news videos. In this paper, an effective method is proposed to solve this problem. First, small partitions of the corpus are selected and local keypoints are pruned to accelerate matching. Then, a hierarchical approach for identifying near-duplicate keyframes is proposed. Furthermore, this paper presents a method to identify the correlation of stories based on near-duplicate keyframes and the transitivity of correlations. Finally, a method for calculating the similarity of news stories based on near-duplicate keyframes is presented. Experimental results show that this approach greatly accelerates matching, improves matching accuracy, and yields story similarities closer to users' perception.

    • Stereo Matching Algorithm Using Segment Geometric Constraint and Disparity Probability

      2010, 21(11):2985-2998.

      Abstract:To handle slanted or curved surfaces in stereo matching, the proposed algorithm adopts the adjacent-segment geometric constraint and the statistical information of disparity. The segment geometric constraint is incorporated into a new global energy function to acquire the optimal plane of each segment. The statistical information of disparity is used to find reliable disparity pixels and reliable segments, and the constraints between segments and pixels are used to estimate the disparity of unreliable pixels. Experiments performed on classical images with a larger disparity range, more slanted surfaces, and surfaces with little texture show the effectiveness of the proposed algorithm.


Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic Price: CNY 70
Copyright: Institute of Software, Chinese Academy of Sciences  Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing  Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063