• Volume 16, Issue 12, 2005 Table of Contents
    • An Algorithm for Generating Concepts Based on Search Space Partition

      2005, 16(12):2029-2035.

      Abstract (4050) HTML (0) PDF 526.23 K (5850) Comment (0) Favorites

      Abstract:The concept lattice, the core data structure in formal concept analysis, has been widely used in machine learning, data mining and knowledge discovery, information retrieval, etc. The main difficulty with concept lattice-based systems comes from the lattice construction itself. This paper proposes a new algorithm called SSPCG (search space partition based concepts generation), based on partitioning the search space of closures. The algorithm divides the closure search space into several subspaces according to criteria prescribed in advance, and introduces an efficient scheme to recognize the valid ones, which confines the search to these valid subspaces. An intermediate structure is employed to judge the validity of a subspace and to compute closures more efficiently. Since the partition of the search space is recursive and the searches in the subspaces are independent, a parallel version can be obtained directly. The algorithm is experimentally evaluated and compared with the well-known NextClosure algorithm proposed by Ganter on randomly generated data as well as on real application data. The results show that the algorithm performs much better than the latter.
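      The closure operator that closure-enumeration algorithms such as NextClosure (and, per the abstract, SSPCG) repeatedly evaluate can be sketched as follows. This is a generic formal-concept-analysis illustration on a hypothetical toy context, not the SSPCG algorithm itself:

```python
# Hypothetical sketch of the attribute-closure operator from formal concept
# analysis; the context and object/attribute names are made up for illustration.

def closure(attrs, context):
    """Return the closed attribute set generated by `attrs`.

    `context` maps each object to the set of attributes it possesses.
    """
    # Extent: objects possessing every attribute in `attrs`.
    extent = [obj for obj, a in context.items() if attrs <= a]
    if not extent:
        # No object has all the attributes: the closure is the full attribute set.
        return set().union(*context.values())
    # Intent: attributes common to all objects in the extent.
    result = set(context[extent[0]])
    for obj in extent[1:]:
        result &= context[obj]
    return result

# Toy context: objects -> attributes.
ctx = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
print(closure({"b"}, ctx))  # objects o1 and o3 share exactly {a, b}
```

A concept is then a pair (extent, intent) where the intent is closed under this operator; the algorithms above differ in how they organize the search over candidate closures.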

    • An Improved Particle Swarm Optimization Based on Self-Adaptive Escape Velocity

      2005, 16(12):2036-2044.

      Abstract (4886) HTML (0) PDF 703.99 K (7767) Comment (0) Favorites

      Abstract:To deal with premature convergence and slow search speed, this paper proposes a novel particle swarm optimization (PSO) algorithm called self-adaptive escape PSO, which is guaranteed to converge to the globally optimal solution with probability one. Inspired by the phenomenon that organisms escape from their original habitat when the population density becomes too high to survive, this paper uses a special mutation-escape operator to make particles explore the search space more efficiently. The novel strategy dynamically produces a large velocity value according to the variation of the velocity, which makes the algorithm explore local and global minima thoroughly at the same time. Experimental simulations show that the proposed method not only significantly speeds up convergence but also effectively solves the premature convergence problem.
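      As a rough illustration of the escape idea, the generic PSO sketch below re-seeds a particle's velocity when it has almost vanished, so particles jump away instead of stagnating. The escape rule and all constants here are hypothetical simplifications, not the paper's self-adaptive operator:

```python
import random

# Minimal PSO sketch with a simple "escape" step (hypothetical rule: when a
# particle's velocity collapses below v_esc, re-seed it with a large random
# velocity to avoid premature convergence).

def pso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, v_esc=1e-3, seed=0):
    rnd = random.Random(seed)
    xs = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vs[i][d] = (0.7 * vs[i][d]                       # inertia
                            + 1.5 * r1 * (pbest[i][d] - xs[i][d])  # cognitive
                            + 1.5 * r2 * (gbest[d] - xs[i][d]))    # social
            # Escape: if the velocity has almost vanished, jump away.
            if sum(abs(v) for v in vs[i]) < v_esc:
                vs[i] = [rnd.uniform(lo, hi) * 0.5 for _ in range(dim)]
            xs[i] = [x + v for x, v in zip(xs[i], vs[i])]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

sphere = lambda x: sum(t * t for t in x)
best = pso(sphere, dim=2)
print(sphere(best))  # should be close to 0 on this simple test function
```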

    • Ontologies, Frames and Logical Theories in NKI

      2005, 16(12):2045-2053.

      Abstract (4065) HTML (0) PDF 666.36 K (5421) Comment (0) Favorites

      Abstract:NKI (the National Knowledge Infrastructure) is a large-scale knowledge base that uses frames to represent concepts in ontologies and Horn logic programs as its automated reasoning mechanism. The formalizations of ontologies, frames and logical theories in NKI are given, together with the transformations between these formalizations, and the transformations are proved to be functors when ontologies, frames and logical theories are taken as categories in category theory. The result proved in this paper guarantees that in NKI, inference based on the Horn logic programs is correct with respect to the knowledge base represented by ontologies and frames.

    • K-Nearest Neighbor Classification Based on Semantic Distance

      2005, 16(12):2054-2062.

      Abstract (4024) HTML (0) PDF 533.81 K (6841) Comment (0) Favorites

      Abstract:Most research on the distance metric of kNN classification focuses on how to integrate the differences caused by various attributes, while the semantic difference between values of the same attribute is ignored. In addition, the classification accuracy of traditional approaches is very sensitive to incomplete data described at different abstraction levels. In this paper, a novel kNN approach based on semantic distance, SDkNN (semantic distance based k-nearest neighbor), is presented, which solves the two problems mentioned above. This approach analyzes the semantic difference between values of an attribute, shows how to calculate the semantic distance based on domain ontologies, and then uses the semantic distance to improve the traditional kNN method. Experiments on the UCI (University of California, Irvine) machine learning repository and on real application datasets show that SDkNN outperforms the traditional approach overall, especially when the data is incomplete. SDkNN also has considerable practical value.
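      The underlying kNN machinery can be sketched with a pluggable per-attribute distance. SDkNN derives that distance from domain ontologies; the `attr_dist` stub and the toy training data below are hypothetical stand-ins (exact-match distance) used only to show where a semantic distance would plug in:

```python
from collections import Counter

# Generic kNN with a pluggable per-attribute distance. A semantic distance
# (e.g., derived from an ontology's concept hierarchy) would replace attr_dist.

def attr_dist(a, b):
    # Hypothetical stand-in: 0 if values match, 1 otherwise.
    return 0.0 if a == b else 1.0

def knn_classify(query, data, k=3, dist=attr_dist):
    """data: list of (attribute-tuple, label) pairs; returns majority label of k nearest."""
    scored = sorted(data,
                    key=lambda r: sum(dist(q, v) for q, v in zip(query, r[0])))
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

train = [
    (("red", "round"), "apple"),
    (("red", "long"), "pepper"),
    (("yellow", "long"), "banana"),
    (("yellow", "round"), "apple"),
]
print(knn_classify(("red", "round"), train, k=3))  # -> apple
```

With a semantic distance, "crimson" and "red" would score closer than "crimson" and "yellow", which is exactly the per-attribute refinement the abstract describes.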

    • Review Articles
    • XML Indices

      2005, 16(12):2063-2079.

      Abstract (7910) HTML (0) PDF 1006.81 K (8066) Comment (0) Favorites

      Abstract:XML has become the de facto standard for data representation and exchange in Web applications such as digital libraries, Web services, and electronic business. Indexing remains a significant technique for efficient XML data processing. This paper surveys the state of recent research on XML indexing. It classifies the techniques into two categories: node-record-style indices, with three subcategories, and structural-summary-style indices. It analyzes the strengths and weaknesses of the related schemes with respect to query processing efficiency and support for data modification. It then proposes several issues for future XML indexing research, including internal structure retrieval, multi-dimensional processing of node paths, efficient modification-validating support, and index amalgamation to satisfy both querying and information retrieval on XML data.

    • Ontology-Based Preference Model in Digital Library

      2005, 16(12):2080-2088.

      Abstract (3933) HTML (0) PDF 594.51 K (5234) Comment (0) Favorites

      Abstract:Personalization poses new challenges to digital libraries. How to describe users' preferences and how to support preference queries in digital libraries are among the most demanding tasks. A strict partial order preference model has been proposed, and a series of preference generation methods have already been developed for relational data. However, the semi-structured data stored in digital libraries make modeling users' preferences more complex than in relational databases, and the partial order preference model does not suffice to support such complex preferences. A new ontology-based preference model is proposed in this paper to overcome this difficulty. The model uses ontology to describe the documents in a digital library along with the preferences regarding them, which allows both the structure and the semantics of a user's preference to be fully represented. A set of complex preference operations is also provided in the model to support personalized query and recommendation efficiently.

    • Algorithms for Storing and Aggregating Historical Streaming Data

      2005, 16(12):2089-2098.

      Abstract (3883) HTML (0) PDF 630.09 K (5087) Comment (0) Favorites

      Abstract:Current research on data streams mainly focuses on processing recently arrived data in memory, neglecting the analysis and management of historical streaming data. An approach is proposed to store and query historical streaming data using a multi-layer recursive sampling method and an HDS-Tree structure, which indexes aggregates of historical streaming data and supports various kinds of aggregation queries over it. The time complexity and the error of the aggregation algorithms based on the HDS-Tree are also analyzed. The analytical and experimental results show that the approach can be used effectively to store and analyze historical streaming data.

    • A High-Speed Heuristic Algorithm for Mining Frequent Patterns in Data Stream

      2005, 16(12):2099-2105.

      Abstract (4050) HTML (0) PDF 414.70 K (5469) Comment (0) Favorites

      Abstract:Of the current approaches to frequent pattern discovery in stream data, the batch approach requires enough accumulated data, while the heuristic approach can deal with stream data directly. Although the average speed of the batch approach is higher, it cannot respond in time and its query granularity is coarse. This paper proposes an improved lexicographic tree, IL-TREE (improved lexicographic tree), and gives a novel heuristic algorithm, called FPIL-Stream (frequent pattern mining based on improved lexicographic tree), which locates historical patterns rapidly when updating existing patterns and generating new ones. Moreover, a tilted-window policy is integrated into the algorithm to record historical information in detail. While guaranteeing timely processing of stream data, the algorithm greatly reduces the average processing time and provides a finer query granularity.

    • A Distributed Energy-Efficient Data Gathering and Aggregation Protocol for Wireless Sensor Networks

      2005, 16(12):2106-2116.

      Abstract (5215) HTML (0) PDF 674.81 K (6532) Comment (0) Favorites

      Abstract:This paper proposes DEEG, a distributed energy-efficient data gathering and aggregation protocol, in which each node independently decides to compete to become a cluster head according to its residual energy and the strength of the signals received from its neighboring nodes. In addition, inter-cluster data is sent in a multi-hop manner to a designated node, which then sends the data gathered from the whole network to the base station. DEEG also offers a simple approach to the cluster coverage problem. As node density increases, this approach yields a network lifetime that is linear in the number of nodes. Experimental results show that, compared with two other data gathering and aggregation protocols, LEACH and PEGASIS, the DEEG algorithm can in the best case increase sensor network lifetime by 1800% and 300% respectively. Moreover, since under DEEG all the nodes die within the final 40 rounds before the last node dies, the sensed information in DEEG is more reliable than in LEACH and PEGASIS.

    • Router Anomaly Traffic Detection Based on Modified-CUSUM Algorithms

      2005, 16(12):2117-2123.

      Abstract (5056) HTML (0) PDF 411.53 K (6195) Comment (0) Favorites

      Abstract:This paper addresses changes in the ingress and egress traffic of core router ports, employing a modified CUSUM (cumulative sum) algorithm to trace their statistical characteristics in real time and to detect network traffic anomalies. According to the multi-port characteristics of a router, the paper puts forward a matrix-based, multi-statistic modified CUSUM algorithm (M-CUSUM). M-CUSUM provides an adjustable parameter setup to increase detection accuracy. The M-CUSUM algorithm can monitor departures from the normal ingress/egress balance in real time by calculating the ratio of the absolute difference to the sum of the traffic at ingress and egress ports. Simulation experiments indicate that the algorithm detects DoS/DDoS attacks with higher speed and accuracy while consuming fewer system resources. The algorithm has been used successfully in software routers.
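      For context, the textbook one-sided CUSUM test that such schemes build on can be sketched as follows; the matrix-based multi-statistic extension (M-CUSUM) is not shown, and the traffic trace and thresholds are hypothetical:

```python
# Textbook one-sided CUSUM change detection: accumulate positive deviations
# above the expected mean and raise an alarm when the sum crosses threshold h.
# This is NOT the paper's M-CUSUM; it only illustrates the base statistic.

def cusum(samples, mean, k=0.5, h=5.0):
    """Return indices where the cumulative deviation statistic exceeds h."""
    s, alarms = 0.0, []
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mean - k))  # drift k absorbs normal fluctuation
        if s > h:
            alarms.append(i)
            s = 0.0  # restart accumulation after an alarm
    return alarms

# Hypothetical trace: normal traffic around 10, sustained jump to 14 at index 20.
traffic = [10.0] * 20 + [14.0] * 10
print(cusum(traffic, mean=10.0))  # -> [21, 23, 25, 27, 29]
```

The small persistent shift is invisible to a simple threshold on single samples but accumulates quickly in the CUSUM statistic, which is why CUSUM-style detectors suit low-rate, sustained traffic anomalies.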

    • TCP-Friendly Congestion Control Mechanism Based on Adaptive Weighted Average

      2005, 16(12):2124-2131.

      Abstract (4582) HTML (0) PDF 449.87 K (5580) Comment (0) Favorites

      Abstract:TCP-friendly congestion control is a key technique for guaranteeing the large-scale deployment of real-time streaming and multicast services on the Internet. On the basis of the TCP emulation at receivers (TEAR) scheme, a new congestion control mechanism, called adaptive TCP-friendly congestion control (ATFCC), is proposed. In this scheme, the weights of the weighted average are dynamically adapted according to the type of packet loss and the duration of the current congestion epoch. Simulation results show that ATFCC outperforms the TCP-friendly rate control (TFRC) protocol in terms of rate smoothness and fairness to TCP flows.
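      The core idea of an adaptively weighted average can be sketched as an exponentially weighted moving average whose weight depends on the congestion state: less smoothing under congestion so the rate reacts quickly, more smoothing otherwise for stability. The two-level rule and all values below are hypothetical simplifications of ATFCC's loss-type and epoch-duration based adaptation:

```python
# Hypothetical adaptive EWMA sketch: the smoothing weight drops during
# congestion (fast reaction) and rises in calm periods (smooth rate).

def smooth_rates(samples, congested, w_calm=0.9, w_cong=0.5):
    """samples: per-round rate estimates; congested: parallel bool flags."""
    avg = samples[0]
    out = [avg]
    for rate, c in zip(samples[1:], congested[1:]):
        w = w_cong if c else w_calm       # less smoothing under congestion
        avg = w * avg + (1 - w) * rate    # standard EWMA update
        out.append(avg)
    return out

rates     = [100.0, 100.0, 40.0, 40.0, 100.0]
congested = [False, False, True,  True,  False]
print(smooth_rates(rates, congested))  # -> [100.0, 100.0, 70.0, 55.0, 59.5]
```

Note how the smoothed rate falls quickly once loss is signaled (100 to 70 in one round) but recovers gradually afterwards, which is the smoothness/responsiveness trade-off the abstract targets.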

    • A Detection and Forecast Algorithm for Multi-Step Attack Based on Intrusion Intention

      2005, 16(12):2132-2138.

      Abstract (4838) HTML (0) PDF 465.26 K (5529) Comment (0) Favorites

      Abstract:The multi-step attack is one of the primary forms of current intrusions, and detecting such attacks is an important aspect of IDS research. Correlation research in intrusion detection focuses mainly on the following aspects: (1) reducing false positives and false negatives; (2) detecting unknown attacks; (3) attack forecasting. Progress on the third point in particular may advance passive detection toward active protection. Through a study of multi-step attack patterns, a detection and forecast algorithm based on intrusion intention is designed for multi-step attacks. In this algorithm, an extended directed graph represents attack types and their relations, while correlation is performed by backward matching and absent matching. Based on the weighted sum over the correlated attack chain and the branch weights on the attack logic graph, the probability of the next attack can be computed. The algorithm can detect multi-step attacks, forecast attacks, detect unknown attacks, and reduce false alarms. This paper also presents the experimental process and analysis results that validate the algorithm.

    • A Light Weight On-Board Distributed Routing Protocol for LEO Satellite Networks

      2005, 16(12):2139-2149.

      Abstract (5141) HTML (0) PDF 670.39 K (5920) Comment (0) Favorites

      Abstract:In LEO satellite networks with inter-satellite links, the highly dynamic topology and the limited on-board resources pose special challenges to routing protocol design. In this paper, a lightweight on-board distributed routing protocol (ODRP) is proposed to cope with these challenges. In ODRP, the single-layer LEO satellite constellation is treated as a double-layer constellation. A satellite at a special geographical position is selected as the plane speaker according to the dynamic characteristics of the inter-satellite links and the distribution of the traffic load carried by the network, thereby realizing distributed hierarchical routing. Experimental results show that ODRP adapts to the dynamic topology of LEO satellite networks and guarantees path optimality, and in particular can efficiently decrease the packet loss probability under high traffic load. Furthermore, an implementation complexity analysis demonstrates that the proposed protocol has lower on-board computational, storage and signaling requirements than other on-board routing schemes.

    • Protocols Security Analysis Based on Ideal

      2005, 16(12):2150-2156.

      Abstract (3570) HTML (0) PDF 424.56 K (5042) Comment (0) Favorites

      Abstract:Guttman et al. introduced strand space theory, a new method for describing and analyzing cryptographic protocols, in 1998, and in 1999 Guttman introduced the notions of ideal and honesty from information algebra to analyze the secrecy of the Otway-Rees protocol. Owing to its properties, the ideal can be used to show the relations between different pieces of information in a protocol. In this paper, the ideal is applied to analyze security properties of cryptographic protocols, such as secrecy and authentication.

    • Reuse Cost Optimization Oriented Component Refactoring Method

      2005, 16(12):2157-2165.

      Abstract (3864) HTML (0) PDF 869.31 K (5306) Comment (0) Favorites

      Abstract:During the whole lifecycle of component reuse, designers must continuously re-design and refactor components to improve usability (e.g., reuse cost and reuse efficiency) while guaranteeing high usefulness. For this purpose, this paper presents a reuse-cost-optimization-oriented component refactoring method based on the locality principle and instance set decomposition. A feature-oriented component model is first introduced, with emphasis on a component reuse mechanism based on variation points. Through an analysis of the constituents of reuse cost, the optimization goal, namely increasing the proportion of the fixed part of a component, is put forward. Then, the temporal and spatial locality in the component reuse process is analyzed in detail, and according to the reuse frequency of each component instance, the instances with higher reuse frequency are separated out to form (semi-)instantiated independent components, so as to decrease the instantiation and implementation costs during reuse. A greedy-strategy-based component instance set decomposition algorithm is proposed and validated by a practical case. The method can bring some instantiation tasks forward from the practical reuse phase to the component design phase, and can transform some variable features into fixed ones by decomposing dependencies between components into dependencies between component instances, thereby avoiding repeated instantiation during frequent reuse and decreasing reuse cost.

    • Perfect Ball in Nature-On Software Development Methodologies for Distributed Systems

      2005, 16(12):2166-2171.

      Abstract (3612) HTML (0) PDF 360.03 K (4734) Comment (0) Favorites

      Abstract:Many implicit factors and unavoidable imperfections in software practice, such as the uncontrollable and the unknown parts, undermine the foundations of most existing methodologies. The perfect ball is based on duality and mappings. The uncertainty relation of distributed systems states that code, as the available product, and goals, as the announced features of the software product, cannot be determined simultaneously. A triangle relationship among address, thought and object is analyzed to clarify the perfect point and the non-zero area. The intent is to substitute the perfect ball for specific pre-fixed or dynamically modified goals, and to reduce the influence of predestination and probability theories. The explanation of the linkage between a software system and the human body shows the concept mapping between Qigong philosophy and software development. Software development under the perfect ball paradigm includes vivid learning behaviors rather than only mechanical behaviors. A few applications of the perfect ball are mentioned.

    • Microprocessor Architectural Automatic Test Program Generation

      2005, 16(12):2172-2180.

      Abstract (3891) HTML (0) PDF 457.50 K (5283) Comment (0) Favorites

      Abstract:In this paper, a novel specification-driven, constraint-solving-based method to automatically generate test programs, from simple to complex, for advanced microprocessors is presented, and its prototype system, MA2TG (microprocessor architectural automatic test program generator), is introduced. It can generate not only random test programs but also instruction sequences targeted at specific constraints. The proposed methodology makes three important contributions. First, it simplifies microprocessor architecture modeling and eases the adoption of architecture modifications via an Architecture Description Language (ADL) specification. Second, it generates test programs for specific constraints by exploiting state-of-the-art constraint solving techniques. Finally, the size of the test programs for microprocessor verification and the verification time are dramatically reduced. MA2TG has been applied to the DLX processor and the embedded microprocessor EStar to illustrate the usefulness of the approach.

    • Buffering High-Speed Packets with Tri-Stage Memory Array and Its Performance Analysis

      2005, 16(12):2181-2189.

      Abstract (4482) HTML (0) PDF 463.19 K (5344) Comment (0) Favorites

      Abstract:High-performance routers and switches need large-throughput packet buffers to hold packets. However, commercially available memory technology is limited and can hardly meet the requirements of such high-throughput packet buffers; as a result, the development of networks is severely restricted. This paper presents a tri-stage memory array architecture to solve this problem, which can in theory achieve arbitrarily high-speed packet buffering. It is proved that the critical queue first algorithm can be applied as the memory management algorithm to achieve zero-delay scheduling with a minimum-scale system. Finally, the design of the hardware implementation architecture of the tri-stage memory array system is provided.

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • Publication No.: ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
Technical Support:Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063