• Volume 15, Issue 9, 2004 Table of Contents
    • Research on Model-Checking Based on Petri Nets

      2004, 15(9):1265-1276.

      Abstract (5283) HTML (0) PDF 959.20 K (7050) Comment (0) Favorites

    • Evaluation Algorithms of a New Kind of Recursive Functions

      2004, 15(9):1277-1291.

      Abstract (3920) HTML (0) PDF 1.05 M (5692) Comment (0) Favorites

      Abstract:Recursive functions on context-free languages (CFRF) are a new kind of recursive functions proposed especially for describing non-numerical algorithms on computers. An important research aspect of this kind of functions is the exploration of evaluation algorithms. This paper summarizes the author’s research on this issue. Beginning with a discussion of the possible combinations of calculation and parsing, it presents a comprehensive introduction to the major algorithms, in the order in which their applicable ranges increase (which is also the order in which they were devised). The introduction emphasizes a new, efficient, and general evaluation algorithm, the tree-oriented evaluation algorithm. The paper also explains the evaluation method for the many-sorted recursive functions extended from CFRF. The algorithms for CFRF have been implemented on computers and validated in practice.

    • A Decomposition Method for Object-Oriented Systems Based on Iterative Analysis of the Directed Weighted Graph

      2004, 15(9):1292-1300.

      Abstract (5304) HTML (0) PDF 753.49 K (5992) Comment (0) Favorites

      Abstract:Aiming at the problem of how to acquire components from existing systems, this paper proposes a decomposition method for object-oriented systems based on iterative analysis of the directed weighted graph. The method uses a directed weighted graph as the representation of an object-oriented system and an iterative algorithm to analyze the independence of sub-graphs at different granularity levels; highly independent sub-graphs are chosen as candidate components. Experimental results show that the method is effective and improves on existing decomposition methods in terms of accuracy.
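
      A minimal sketch of the kind of independence measure such an iterative sub-graph analysis can be based on, assuming independence is scored as the ratio of a candidate's internal coupling weight to its total (internal plus boundary) weight; the encoding and example values below are illustrative, not the paper's exact metric.

      ```python
      # Score a candidate sub-graph of a directed weighted graph by the ratio
      # of its internal coupling weight to its total (internal + boundary)
      # weight; values close to 1 suggest a highly independent candidate.
      def independence(graph, nodes):
          """graph: dict mapping (src, dst) -> weight; nodes: set of node ids."""
          internal = boundary = 0.0
          for (src, dst), w in graph.items():
              if src in nodes and dst in nodes:
                  internal += w
              elif src in nodes or dst in nodes:
                  boundary += w
          total = internal + boundary
          return internal / total if total > 0 else 0.0

      # Hypothetical example: classes A, B, C are tightly coupled, D is external.
      g = {("A", "B"): 3.0, ("B", "C"): 2.0, ("C", "A"): 2.0, ("C", "D"): 1.0}
      print(independence(g, {"A", "B", "C"}))  # 7.0 / 8.0 = 0.875
      ```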

    • Axiomatic Assessment of Logic Coverage Software Testing Criteria

      2004, 15(9):1301-1310.

      Abstract (4434) HTML (0) PDF 785.05 K (6006) Comment (0) Favorites

      Abstract:Since formal specifications precisely describe software requirements in a form that can be manipulated automatically, they can serve as the basis for automatic test generation and software verification. Logic coverage criteria are the criteria commonly used in specification-based testing. The main problem test engineers face when applying these criteria is how to select the appropriate criterion, so comparing and analyzing the criteria gives a guide to applying each of them. Axiomatic assessment of test adequacy criteria is an approach to comparing test criteria: it defines the intuitive requirements of ideal test adequacy criteria as axioms and then compares the criteria by checking whether they satisfy these axioms. This paper proposes a set of positive properties as the intuitive requirements of ideal logic coverage criteria and gives a generating algorithm used to determine whether a logic coverage criterion is complete. These properties are formally defined as an axiom system. With these formal definitions, the relations among the logic coverage criteria are described as theorems. Finally, the common logic coverage criteria are assessed against the axiom system. From the assessment results, testers can draw conclusions that help them apply these criteria in practice.
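
      A minimal sketch, in Python, of what two common logic coverage criteria (decision coverage and condition coverage) require of a test set; the predicate and the tests are hypothetical examples, and the paper's axiom system is not reproduced here.

      ```python
      # Check decision coverage and condition coverage of a hypothetical
      # boolean decision against a hypothetical test set.
      def decision(a, b, c):
          """The decision under test."""
          return (a and b) or c

      CONDITIONS = ("a", "b", "c")  # atomic conditions of the decision

      def decision_coverage(tests):
          """Satisfied if the decision evaluates to both True and False."""
          return {decision(**t) for t in tests} == {True, False}

      def condition_coverage(tests):
          """Satisfied if every atomic condition takes both truth values."""
          return all({t[c] for t in tests} == {True, False} for c in CONDITIONS)

      tests = [dict(a=True,  b=True,  c=False),   # decision -> True
               dict(a=False, b=False, c=True),    # decision -> True
               dict(a=False, b=True,  c=False)]   # decision -> False
      print(decision_coverage(tests), condition_coverage(tests))  # True True
      ```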

    • A Data Space Fusion Based Approach for Global Computation and Data Decompositions

      2004, 15(9):1311-1327.

      Abstract (3968) HTML (0) PDF 1.38 M (5098) Comment (0) Favorites

      Abstract:Computation and data decompositions are key factors affecting the performance of parallel programs running on distributed memory multicomputers. This paper presents a theoretical framework of data space fusion and, based on it, an effective global computation and data decomposition approach that can be used to solve computation and data decomposition problems on distributed memory multicomputers. The approach exploits as much parallelism in the computation space as possible, uses data space fusion to optimize data distribution, and searches for optimal global computation and data decompositions. It can also be integrated naturally with data replication and offset alignment, and therefore keeps the communication overhead as low as possible. Experimental results show that the approach presented in the paper is effective.

    • Chinese Web Index Page Recommendation Based on Multi-Instance Learning

      2004, 15(9):1328-1335.

      Abstract (5972) HTML (0) PDF 1.50 M (5985) Comment (0) Favorites

      Abstract:Multi-Instance learning provides a new approach to mining Chinese web pages. In this paper, a particular web mining task, Chinese web index page recommendation, is presented and then addressed by transforming it into a multi-instance learning problem. Experiments on a real-world dataset show that the proposed method is an effective solution to the Chinese web index page recommendation problem.
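
      A minimal sketch of the multi-instance setting assumed above: an index page is treated as a bag of instances (for example, feature vectors of the pages it links to), and under the standard multi-instance assumption a bag is positive if at least one instance is positive. The instance-level classifier below is a stand-in, not the paper's learner.

      ```python
      # Bag-level prediction under the standard multi-instance assumption:
      # a bag (index page) is positive if any of its instances is positive.
      def predict_bag(bag, instance_classifier):
          """bag: list of instance feature vectors; returns the bag-level label."""
          return any(instance_classifier(x) for x in bag)

      # Hypothetical instance-level classifier: a simple linear threshold rule.
      def instance_classifier(x):
          weights = [0.6, -0.2, 0.4]
          return sum(w * xi for w, xi in zip(weights, x)) > 0.5

      index_page = [[0.1, 0.9, 0.2], [0.9, 0.1, 0.8]]      # two linked pages
      print(predict_bag(index_page, instance_classifier))  # True (second instance fires)
      ```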

    • Ordinal Regression in Content-Based Image Retrieval

      2004, 15(9):1336-1344.

      Abstract (4319) HTML (0) PDF 850.40 K (5264) Comment (0) Favorites

      Abstract:Relevance feedback, as a key component of content-based image retrieval, has attracted much research attention in the past few years, and many algorithms have been proposed. Most current relevance feedback algorithms use a dichotomous relevance measure: relevant or non-relevant. To better identify the user’s needs and preferences, a refined relevance scale should be used to represent the degree of relevance. In this paper, relevance feedback with a multilevel relevance measure is studied. Relevance feedback is treated as an ordinal regression problem, and its properties and loss function are discussed. A new relevance feedback scheme is proposed based on a support vector learning algorithm for ordinal regression. Since traditional retrieval performance measures, such as precision and recall, are not appropriate for retrieval with multilevel relevance, a new performance measure based on the preference relation between images is introduced. The proposed relevance feedback approach is tested on a real-world image database, and promising results are achieved.
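
      A minimal sketch of a preference-relation based measure in the spirit described above: among image pairs whose multilevel relevance labels differ, count the fraction that the system ranks in the order the labels prefer. The exact measure defined in the paper may differ.

      ```python
      # Pairwise agreement between multilevel relevance labels and system scores:
      # among pairs with different relevance levels, count the fraction ranked
      # in the order the labels prefer.
      def pairwise_agreement(relevance, scores):
          agree = total = 0
          n = len(relevance)
          for i in range(n):
              for j in range(i + 1, n):
                  if relevance[i] == relevance[j]:
                      continue                  # no preference between this pair
                  total += 1
                  if (relevance[i] - relevance[j]) * (scores[i] - scores[j]) > 0:
                      agree += 1
          return agree / total if total else 1.0

      relevance = [3, 1, 2, 0]          # 0 = irrelevant ... 3 = highly relevant
      scores    = [0.9, 0.4, 0.6, 0.5]  # hypothetical retrieval scores
      print(pairwise_agreement(relevance, scores))  # 5 of 6 preference pairs agree
      ```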

    • A Compounded Genetic and Simulated Annealing Algorithm for Computing Minimal Diagnosis

      2004, 15(9):1345-1350.

      Abstract (4602) HTML (0) PDF 647.25 K (5838) Comment (0) Favorites

      Abstract:Model-Based diagnosis is an active branch of Artificial Intelligence. Its core step, computing minimal hitting sets from minimal conflict sets, is an NP-hard problem. A compounded genetic and simulated annealing algorithm is put forward by mapping the hitting set problem to a 0/1 integer programming problem. After the genetic simulated annealing (GSA) algorithm is presented, its efficiency and accuracy are tested and compared. The GSA algorithm is not only far more efficient than the traditional one, but also saves one third to one half of the running time of the GA algorithm when the number of conflict sets exceeds 35, and it obtains 98% to 100% of the minimal diagnoses in most cases.
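
      A minimal sketch of the 0/1 encoding that underlies such an approach: a candidate diagnosis is a 0/1 vector over components, it is a hitting set if it intersects every minimal conflict set, and a fitness function can reward small hitting sets. The penalty weight below is illustrative, and the GSA operators themselves are not shown.

      ```python
      # A candidate diagnosis is a 0/1 vector over components; it is a hitting
      # set if it intersects every minimal conflict set.
      def is_hitting_set(candidate, conflict_sets):
          chosen = {i for i, bit in enumerate(candidate) if bit}
          return all(chosen & cs for cs in conflict_sets)

      def fitness(candidate, conflict_sets):
          """Lower is better: diagnosis size plus a penalty per missed conflict set."""
          chosen = {i for i, bit in enumerate(candidate) if bit}
          missed = sum(1 for cs in conflict_sets if not (chosen & cs))
          return sum(candidate) + 10 * missed   # penalty weight 10 is illustrative

      conflicts = [{0, 1}, {1, 2}, {2, 3}]
      print(is_hitting_set([0, 1, 1, 0], conflicts), fitness([0, 1, 1, 0], conflicts))  # True 2
      print(is_hitting_set([1, 0, 0, 1], conflicts), fitness([1, 0, 0, 1], conflicts))  # False 12
      ```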

    • An Incremental Clustering Algorithm for the Topology Adjustment of Location Databases

      2004, 15(9):1351-1360.

      Abstract (3871) HTML (0) PDF 875.33 K (5715) Comment (0) Favorites

      Abstract:How to organize and store the profiles of moving objects in a mobile environment so as to effectively lower the paging and update cost is an important problem in location management. Bringing data mining into the mobile environment is a challenging research task with broad applications. Zone partition can effectively optimize the topology of location databases and reduce the cost of location paging and location update. However, as time evolves, mobile users’ moving patterns may change, so the original partitions may no longer match the current moving patterns. Thus, one of the important problems to be solved is how to partition the zones dynamically. Clustering methods can solve the static zone partition problem well but run into difficulty with the dynamic one: reusing them means partitioning the zones again from scratch, which does not utilize the original partitions and incurs a high cost. In this paper, an incremental clustering method is proposed to solve the dynamic zone partition problem; it adjusts the original zone partitions at a lower cost while guaranteeing all the conditions required by the zone partition problem.
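
      A minimal sketch of the incremental idea only, not the paper's algorithm: when a user's movement profile changes, only that user is re-assigned, either to the closest existing zone (if close enough) or to a newly opened zone, instead of re-partitioning everything from scratch. The centroid representation and threshold are illustrative assumptions.

      ```python
      # Re-assign a single changed profile instead of re-clustering all zones.
      def incremental_assign(zones, profile, threshold):
          """zones: list of zone centroid vectors (mutated in place)."""
          def dist(a, b):
              return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

          best = min(range(len(zones)), key=lambda i: dist(zones[i], profile))
          if dist(zones[best], profile) <= threshold:
              return best                      # absorbed by an existing zone
          zones.append(list(profile))          # otherwise open a new zone
          return len(zones) - 1

      zones = [[0.0, 0.0], [5.0, 5.0]]
      print(incremental_assign(zones, [0.5, 0.4], threshold=1.0))  # 0 (existing zone)
      print(incremental_assign(zones, [9.0, 9.0], threshold=1.0))  # 2 (new zone opened)
      ```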

    • Key Dimension Based High-Dimensional Data Partition Strategy

      2004, 15(9):1361-1374.

      Abstract (4247) HTML (0) PDF 1.16 M (5858) Comment (0) Favorites

      Abstract:The index is one of the core components of content-based similarity search, and data partition is the key factor affecting index performance. This paper proposes a new data partition strategy, the key dimension based partition strategy, built on the traditional distance based partition strategy, together with the corresponding index technique. Key dimension based data partition eliminates the overlaps between twin nodes, and filtering between twin nodes by the key dimension enhances the filtering ability of the index. The proposed data partition strategy and index technique can greatly improve the filtering ability of the index. Experimental results show that the key dimension can be used to improve index performance, which is of great significance for accelerating content-based similarity search.
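
      A minimal sketch of key-dimension filtering, assuming Euclidean distance: since the difference on any single dimension lower-bounds the full distance, a node whose key-dimension range lies farther than the query radius from the query's key coordinate can be pruned without computing any full distance. The node layout below is illustrative, not the paper's exact index structure.

      ```python
      # Prune an index node by its key-dimension range alone: the gap on one
      # dimension is a lower bound on the Euclidean distance to any point in
      # the node, so a gap larger than the query radius rules the node out.
      def can_prune(node_min_k, node_max_k, query_k, radius):
          if query_k < node_min_k:
              gap = node_min_k - query_k
          elif query_k > node_max_k:
              gap = query_k - node_max_k
          else:
              gap = 0.0
          return gap > radius      # True -> the node cannot contain any result

      # Query at 0.30 on the key dimension, search radius 0.05:
      print(can_prune(0.40, 0.90, query_k=0.30, radius=0.05))  # True  (node skipped)
      print(can_prune(0.28, 0.60, query_k=0.30, radius=0.05))  # False (node visited)
      ```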

    • PeerIS: A Peer-to-Peer Based Information Retrieval System

      2004, 15(9):1375-1384.

      Abstract (4712) HTML (0) PDF 834.07 K (6178) Comment (0) Favorites

      Abstract:In this paper, the emerging P2P computing is first briefly introduced, including its distinct features, potential merits and applications, and the problems from which existing P2P-based data sharing systems suffer are then pointed out. To address these problems, the concept of P2P-based information retrieval is proposed, which can not only exploit the potential merits of P2P to overcome the problems of traditional information retrieval systems (e.g., lack of scalability), but also achieve fully semantic retrieval and sharing in the context of P2P systems. Based on this idea, PeerIS, a P2P-based information retrieval system, is developed. The architecture of PeerIS and the components of its peers are then presented, and the key implementation issues, including communication, semantics-based self-reconfiguration, query processing and self-adaptive routing mechanisms, are described. Finally, an experimental study is used to verify the advantages of PeerIS.

    • Covert Channel Analysis on ANSHENG Secure Operating System

      2004, 15(9):1385-1392.

      Abstract (4399) HTML (0) PDF 644.85 K (6355) Comment (0) Favorites

      Abstract:The ANSHENG operating system is a high-security-level secure operating system based on Linux, with an independently developed security kernel, security framework and security models. In this paper, the approaches to analyzing covert channels in the underlying system are summarized, and the covert-channel-analysis results on a secure operating system based on the Linux kernel are reported for the first time. Some new covert channels have been found by the novel backward tracking approach. For the identified covert channels, accurate bandwidth computation and appropriate covert-channel handling have been performed.

    • A Web Site Representation and Mining Algorithm Using the Multiscale Tree Model

      2004, 15(9):1393-1404.

      Abstract (4422) HTML (0) PDF 1.03 M (4943) Comment (0) Favorites

      Abstract:With the exponential growth of both the amount and the diversity of web information, web site mining is highly desirable for automatically discovering and classifying topic-specific web resources from the World Wide Web. Nevertheless, existing web site mining methods have not yet adequately addressed how to make use of all the correlative contextual semantic clues and how to denoise the content of web sites effectively so as to obtain better classification accuracy. This paper details three issues to be solved in designing an effective and efficient web site mining algorithm: the sampling size, the analysis granularity, and the representation structure of web sites. On this basis, it proposes a novel multiscale tree representation model of web sites and presents a multiscale web site mining approach that contains an HMT-based two-phase classification algorithm, a context-based interscale fusion algorithm, a two-stage text-based denoising procedure, and an entropy-based pruning strategy. The proposed model and algorithms may serve as a starting point for further investigating related issues of web sites, such as query optimization over multiple sites and web usage mining. Experiments also show that the approach achieves on average a 16% improvement in classification accuracy and a 34.5% reduction in processing time over the baseline system.

    • An Adaptive Forward Error Correction Algorithm for Streaming Video

      2004, 15(9):1405-1412.

      Abstract (4299) HTML (0) PDF 752.68 K (6052) Comment (0) Favorites

      Abstract:Video applications over networks are often disturbed by packet loss or errors as well as insufficient network bandwidth. Related studies have shown that in many cases the fluctuation of network bandwidth and the packet loss rate are two key factors influencing the quality of streaming video. Therefore, to guarantee video quality, FEC (forward error correction) can be adopted to improve the reliability of video data transmission; meanwhile, according to the current network state, the sender can adjust the sending rate of the video data and optimally allocate the bandwidth between the source video data and the FEC data. This paper analyzes the structure of a video stream and presents a frame decoding model that takes into account the frame types and the dependence among frames. On this basis, an optimal algorithm is proposed to allocate the bandwidth between the source video data and the FEC data. Experiments show that the model can effectively adapt to the fluctuation of the network state and optimally allocate network bandwidth so as to maximize the playable frame rate at the receiver.
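
      A minimal sketch of the FEC trade-off under a simplified independent-loss model (the paper's decoding model additionally accounts for frame types and inter-frame dependence): an (n, k) erasure-coded block of k source packets plus n - k FEC packets is decodable when at least k of the n packets arrive, so adding FEC packets raises the decoding probability at the cost of bandwidth.

      ```python
      from math import comb

      def block_decode_prob(n, k, loss_rate):
          """Probability that at least k of n packets survive i.i.d. packet loss."""
          return sum(comb(n, i) * (1 - loss_rate) ** i * loss_rate ** (n - i)
                     for i in range(k, n + 1))

      # Same source data (k = 8 packets) with growing FEC overhead, 10% loss:
      for n in (8, 9, 10):
          print(n, round(block_decode_prob(n, 8, 0.10), 4))
      # 8  0.4305   (no FEC packets)
      # 9  0.7748   (one FEC packet)
      # 10 0.9298   (two FEC packets)
      ```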

    • Analysis of Block Coding Strategies in Watermarking Channel

      2004, 15(9):1413-1422.

      Abstract (3778) HTML (0) PDF 853.66 K (5461) Comment (0) Favorites

      Abstract:Robustness is one of the important requirements of digital watermarking. To improve the robustness of watermarks, some algorithms apply error correcting coding (ECC) to watermarking, intending to lower the detection error rate by correcting some errors. However, there is an important difference between the watermarking channel and the conventional communication channel: owing to the constraint of imperceptibility, the redundancy introduced by ECC results in a decrease of the embedding strength of the watermark. In other words, the coding rate of the ECC affects the amplitude of the embedded signal. Hence a question arises: to improve watermark robustness, how should the coding rate of the ECC be chosen? In this paper, this problem is addressed both analytically and experimentally. By employing the Plotkin bound and the Hamming bound, the coding rate constraints of q-ary linear block codes are derived. The conclusions are confirmed by the experimental results.
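
      A small illustrative computation of the q-ary Hamming bound, the kind of constraint used above to relate error-correcting capability and coding rate; it is the standard bound, not the paper's full derivation.

      ```python
      from math import comb, log

      def hamming_bound(n, d, q=2):
          """Upper bound on the number of codewords of length n and minimum distance d."""
          t = (d - 1) // 2                              # number of correctable errors
          volume = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
          return q ** n // volume

      # Length-15 binary block codes with increasing error-correcting capability:
      for d in (3, 5, 7):
          m = hamming_bound(15, d)
          print(d, m, round(log(m, 2) / 15, 3))   # min distance, codewords, max rate
      # 3 2048 0.733   (the perfect [15,11] Hamming code)
      # 5  270 0.538
      # 7   56 0.387
      ```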

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • E-mail: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic Price: CNY 70
Copyright: Institute of Software, Chinese Academy of Sciences  Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn