• Volume 15, Issue 5, 2004 Table of Contents
    • Research on Similarity of Stochastic Non-Stationary Time Series Based on Wavelet-Fractal

      2004, 15(5):633-640.

      Abstract:Traditional dimension reduction methods for similarity queries introduce a degree of smoothing into data series, which causes the important non-linear and fractal features of time series to disappear. The matching method based on wavelet transformation measures similarity with a distance criterion at a given resolution level, but when the fractal dimension of a non-stationary time series is unknown, the local error of similarity matching increases and the querying of similar curve shapes is affected to a certain degree. Stochastic non-stationary time series show non-linear and fractal characteristics during their spatio-temporal dynamic evolution. The concept of a fractal time-varying dimension of a series is presented. The original fractional Brownian motion model is reconstructed as a stochastic process with local self-similarity, and the Daubechies wavelet is used to deal with this locally self-similar process. An evaluation formula for the time-varying Hurst index is established, an algorithm for the time-varying index is presented, and a new criterion for series similarity is introduced. Similarity of the basic curve shapes is queried and measured at a given resolution level while the fractal dimension is matched in the locally similar segments. The effectiveness of the method is validated by a simulation example.
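      A hedged illustration of the wavelet-based estimation step mentioned in the abstract above: the sketch below estimates a single, global Hurst index from the scale-wise variances of Daubechies detail coefficients. The paper's time-varying (windowed) estimator and its similarity criterion are not reproduced, and the slope-to-H convention used in the comments is an assumption.

```python
# Illustrative sketch only: a global Hurst-index estimate from Daubechies
# detail-coefficient variances (log-variance vs. scale regression).
# The paper's time-varying estimator and similarity criterion are not shown.
import numpy as np
import pywt  # PyWavelets

def hurst_wavelet(x, wavelet="db4", max_level=6):
    """Estimate a Hurst index H from the slope of log2 Var(detail_j) vs. scale j."""
    x = np.asarray(x, dtype=float)
    level = min(max_level, pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len))
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    details = coeffs[1:][::-1]                       # cD_1 (finest) ... cD_L (coarsest)
    scales = np.arange(1, len(details) + 1)
    log_var = np.array([np.log2(np.var(d)) for d in details])
    slope, _ = np.polyfit(scales, log_var, 1)
    # Assumed convention: for an fBm-like series Var(d_j) ~ 2^{j(2H+1)},
    # hence H ~ (slope - 1) / 2.  Other conventions differ in the slope term.
    return (slope - 1.0) / 2.0

# Rough check on cumulative Gaussian noise (Brownian-motion-like, H ~ 0.5).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fbm_like = np.cumsum(rng.standard_normal(4096))
    print("estimated H:", hurst_wavelet(fbm_like))
```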

    • Two-Dimensional Stack Generation and Block Merging Algorithms for Analog VLSI

      2004, 15(5):641-649.

      Abstract:In analog VLSI design, 2-dimensional symmetric stacking and block merging are critical for mismatch minimization and parasitic control. In this paper, algorithms for 2-dimensional symmetric stack generation and block merging in analog VLSI are described. Several theoretical results are obtained by studying symmetric Eulerian graphs and symmetric Eulerian trails. Based on them, an O(n) algorithm for dummy transistor insertion, symmetric Eulerian trail construction and 2-dimensional symmetric stack construction is developed. The generated stacks are 2-dimensionally symmetric and common-centroid. A block merging algorithm is also described, which is essentially independent of the topological representation. A formula for calculating the maximum block merging distance is given. Experimental results show the effectiveness of the algorithms.
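      As a hedged illustration of one building block behind the stack generation described above, the sketch below finds an Euler trail in an undirected multigraph (the diffusion-sharing graph of a transistor stack) with Hierholzer's algorithm. The paper's symmetric-Eulerian-trail construction, dummy-transistor insertion and common-centroid placement are not reproduced.

```python
# Illustrative building block only: Hierholzer's algorithm for an Euler trail
# in an undirected multigraph.  Assumes the graph is connected; the symmetry
# and dummy-insertion steps of the paper are not shown.
from collections import defaultdict

def euler_trail(edges):
    """Return a vertex sequence traversing every edge exactly once, or None."""
    adj = defaultdict(list)                 # vertex -> list of (neighbor, edge_id)
    for eid, (u, v) in enumerate(edges):
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    odd = [v for v in adj if len(adj[v]) % 2 == 1]
    if len(odd) not in (0, 2):
        return None                         # no Euler trail exists
    start = odd[0] if odd else next(iter(adj))
    used, stack, trail = set(), [start], []
    while stack:
        v = stack[-1]
        while adj[v] and adj[v][-1][1] in used:
            adj[v].pop()
        if adj[v]:
            w, eid = adj[v].pop()
            used.add(eid)
            stack.append(w)
        else:
            trail.append(stack.pop())
    return trail[::-1]

# Example: a small diffusion graph; each edge is one transistor to be stacked.
print(euler_trail([("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")]))
```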

    • Further Study on the Smoothed Analysis of Condition Number of Matrix and Gaussian Algorithm

      2004, 15(5):650-659.

      Abstract:Smoothed analysis aims to explain the success of algorithms that work well in practice while performing poorly in worst-case analysis. High performance computers have been used extensively for solving large-scale linear systems and matrix decompositions. The simplest and most widely implemented method for solving linear systems is Gaussian elimination (the Gaussian algorithm). Natural implementations of Gaussian elimination use O(n^3) arithmetic operations to solve a system of n linear equations in n variables. If the coefficients of these equations are specified with m bits, in the worst case it suffices to perform the elimination with mn bits of precision, because the elimination may produce large intermediate entries. However, a large body of experiments in numerical analysis indicates that such high precision is rarely necessary in practice. Huge condition numbers and growth factors are the main causes that make a matrix ill-conditioned and lead to large errors in the solution. Let A be a matrix of independent Gaussian random variables centered at Ā, each of variance σ², and let K(A) be the condition number of A. Sankar et al. performed a smoothed analysis of the condition numbers and growth factors of matrices. They showed that for any matrix Ā∈R^(n×n) and a>0, Pr[K(A)≥a] ≤ 3.64n(1+4√(log a))/(aσ), and that the smoothed complexity of the number of bits of precision needed to solve Ax=b to m bits of accuracy using the Gaussian algorithm is at most m+7log2(n)+3log2(1/σ)+log2log2(n)+7. However, there is a mistake in their proof. The mistake is corrected in this paper, and the smoothed bound on the bits of precision is corrected to m+7log2(n)+3log2(1/σ)+4√(2+log2(n)+log2(1/σ))+7.367. Two inequalities, on the norm of a matrix and on the product of two random variables respectively, are presented, and the smoothed analysis of the condition number is simplified to Pr[K(A)≥a] ≤ 6√2·n/(aσ). The conjecture Pr[K(A)≥a] ≤ O(n/(aσ)) is thus solved to some extent. The smoothed bound on the bits of precision is improved to m+8log2(n)+3log2(1/σ)+7. Experimental results show that the new smoothed analysis results are much better.
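      The following sketch is a small numerical illustration, not the paper's analysis: it estimates the smoothed tail probability Pr[K(A) ≥ a] empirically for a fixed center matrix perturbed by i.i.d. Gaussian noise of standard deviation σ, and prints the conjectured n/(aσ) scaling alongside it for comparison. The center matrix, σ, and a below are arbitrary choices.

```python
# Illustrative Monte Carlo check of the smoothed condition-number tail,
# Pr[kappa(A) >= a], for A = A_bar + sigma * Gaussian noise.
import numpy as np

def smoothed_tail(A_bar, sigma, a, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = A_bar.shape[0]
    hits = 0
    for _ in range(trials):
        A = A_bar + sigma * rng.standard_normal((n, n))
        if np.linalg.cond(A) >= a:
            hits += 1
    return hits / trials

n, sigma, a = 20, 0.1, 1e3
A_bar = np.ones((n, n))          # a deliberately singular "worst-case" center
p_hat = smoothed_tail(A_bar, sigma, a)
print(f"empirical Pr[kappa >= {a:g}] = {p_hat:.4f},  n/(a*sigma) = {n / (a * sigma):.4f}")
```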

    • A Fast Low-Cost Shortest Path Tree Algorithm

      2004, 15(5):660-665.

      Abstract:A low-cost shortest path tree is a commonly used type of multicast tree, which minimizes the end-to-end delay and at the same time reduces the bandwidth requirement where possible. Based on the low-cost shortest path tree algorithm DDSP (destination-driven shortest path) and by improving its search procedure, a fast low-cost shortest path tree (FLSPT) algorithm is presented in this paper. The shortest path tree constructed by the FLSPT algorithm is the same as that constructed by the DDSP algorithm, but its computational complexity is lower. Simulation results on random network models show that the FLSPT algorithm is more efficient.
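      The sketch below is a hedged reading of the destination-driven idea only: a Dijkstra-based shortest path tree in which, among equal-length shortest paths, a predecessor that is itself a destination is preferred so that tree edges are shared and the total tree cost drops. The FLSPT search-procedure improvements described in the abstract are not reproduced, and the tie-break rule is an assumption, not the exact DDSP specification.

```python
# Hedged sketch: Dijkstra SPT with a "destination-driven" tie-break.
import heapq

def sp_tree(graph, source, destinations):
    """graph: {u: [(v, w), ...]}; returns (dist, parent) for a shortest path tree."""
    dist, parent = {source: 0}, {source: None}
    pq = [(0, source)]
    dest = set(destinations)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            better = nd < dist.get(v, float("inf"))
            # tie-break: on equal distance, prefer a destination as parent
            tie = (nd == dist.get(v, float("inf"))
                   and u in dest and parent.get(v) not in dest)
            if better or tie:
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

g = {"s": [("a", 1), ("d1", 1)], "a": [("d2", 1)], "d1": [("d2", 1)]}
print(sp_tree(g, "s", {"d1", "d2"}))   # d2 is attached through destination d1
```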

    • Metric Reconstruction Based on One Triplet of Corresponding Vanishing Line

      2004, 15(5):666-675.

      Abstract:A technique for metric reconstruction based on one triplet of corresponding vanishing lines in three images is proposed in this paper. Firstly, the infinite homography is computed by using the vanishing-line correspondence and the modulus constraint. Then the camera intrinsic parameters are determined linearly by using the property that the infinite homography preserves the image of the absolute conic. Finally, a metric reconstruction is obtained. Experiments with simulated data as well as with real images show that the proposed method is feasible and applicable in real applications.
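      A hedged sketch of the linear self-calibration step referred to above, not the paper's full pipeline: the image of the absolute conic ω = K^(-T)K^(-1) is fixed by an infinite homography between views taken with the same camera (ω = H^(-T) ω H^(-1)), so stacking this linear constraint for two or more infinite homographies determines ω up to scale, and K follows by Cholesky factorization. The vanishing-line and modulus-constraint computation of H_∞ is not shown; the homographies below are synthesized from a known K purely as a check.

```python
# Hedged sketch: camera intrinsics from infinite homographies via the IAC.
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def intrinsics_from_inf_homographies(Hs):
    """Solve w = H^{-T} w H^{-1} for all H in Hs (least squares), then K from w."""
    rows = []
    for H in Hs:
        HinvT = np.linalg.inv(H).T
        # vec(H^{-T} w H^{-1}) = (H^{-T} kron H^{-T}) vec(w)  (column-major vec)
        rows.append(np.kron(HinvT, HinvT) - np.eye(9))
    M = np.vstack(rows)
    _, _, Vt = np.linalg.svd(M)
    w = Vt[-1].reshape(3, 3, order="F")        # null vector -> 3x3, up to scale
    w = 0.5 * (w + w.T)
    if w[0, 0] < 0:
        w = -w                                  # fix overall sign so w is p.d.
    L = np.linalg.cholesky(w)                   # w = L L^T,  with L = K^{-T}
    K = np.linalg.inv(L).T
    return K / K[2, 2]

# Synthetic check: H_inf = K R K^{-1} for two extra views of the same camera.
K_true = np.array([[800.0, 0.0, 320.0], [0.0, 780.0, 240.0], [0.0, 0.0, 1.0]])
Hs = [K_true @ rot_x(0.2) @ rot_y(-0.1) @ np.linalg.inv(K_true),
      K_true @ rot_y(0.3) @ rot_x(0.15) @ np.linalg.inv(K_true)]
print(np.round(intrinsics_from_inf_homographies(Hs), 2))
```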

    • Theory and Robust Algorithm of Trinocular Rectification

      2004, 15(5):676-688.

      Abstract:The main contributions are two-fold. Firstly, some theoretical analyses of trinocular rectification are carried out, including the relationship among the three rectified images and their three fundamental matrices, and a geometric interpretation of the 6 free parameters involved in the rectification process. These results can serve as a theoretical guide for reducing the induced projective distortion. Secondly, under the RANSAC (random sample consensus) paradigm, a robust trinocular rectification algorithm is proposed. Unlike traditional methods, in which only the fundamental matrices are used to rectify the images, this algorithm uses corresponding points directly for the rectification. The motivation for this point-based approach is that, on one hand, the computation of fundamental matrices is usually prone to noise; on the other hand, good fundamental matrices do not necessarily produce good rectified images, because the two processes have different evaluation criteria. Extensive simulations and experiments with real images show that the proposed rectification technique is resistant to noise as well as to outliers among the corresponding points, and that fairly good rectification results can be obtained.
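      A generic RANSAC skeleton of the kind the point-based rectification above is built on, shown here with a deliberately abstract model interface: `fit` and `error` are placeholders for the paper's trinocular rectification criteria, which are not reproduced. The toy usage fits a line robustly, only to show the loop in action.

```python
# Generic RANSAC skeleton; model-specific parts are placeholders.
import random
import numpy as np

def ransac(data, fit, error, sample_size, threshold, iterations=500, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        sample = rng.sample(data, sample_size)
        model = fit(sample)
        if model is None:
            continue                                   # degenerate sample
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:                                   # refit on all inliers
        best_model = fit(best_inliers) or best_model
    return best_model, best_inliers

# Toy usage: robust line fit y = 2x + 1 on data with a few outliers.
pts = [(x, 2.0 * x + 1.0) for x in range(20)] + [(3, 40.0), (7, -30.0), (11, 55.0)]

def fit_line(sample):
    xs, ys = zip(*sample)
    if len(set(xs)) < 2:
        return None
    a, b = np.polyfit(xs, ys, 1)
    return (a, b)

def line_error(model, p):
    a, b = model
    return abs(p[1] - (a * p[0] + b))

model, inliers = ransac(pts, fit_line, line_error, sample_size=2, threshold=1.0)
print(np.round(model, 2), len(inliers))
```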

    • A Teamwork Protocol for Multi-Agent

      2004, 15(5):689-696.

      Abstract:Teamwork is an effective way of cooperative problem solving in dynamic and unpredictable application contexts. The concept of joint intention is the key to teamwork. How various speech acts can be used by software Agents to form, maintain and dissolve a joint intention is a significant problem that needs to be investigated. This paper aims to design a teamwork protocol based on FIPA (Foundation for Intelligent Physical Agents) ACL (Agent communication language), a promising Agent communication language. To this end, the sufficiency of the FIPA ACL for supporting Agents in forming the joint intention required in teamwork is analyzed first. Specifically, the notions of joint-request and delegation-request are distinguished, and the insufficiency of the delegation-request for supporting teamwork is pointed out. A new joint-request action is therefore defined to extend the FIPA ACL, and some of its properties are discussed. Based on the defined action, a teamwork protocol with a formal semantic description is proposed and its application is demonstrated. The teamwork protocol describes a new interaction pattern, which differs from those of the existing elementary request protocol, contract-net protocol and auction protocols within the FIPA Interaction Protocol Specifications. The proposed protocol can facilitate the design of interaction modules in multi-Agent teamwork.
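      The sketch below is a hypothetical toy, not the FIPA ACL specification or the paper's formal semantics: it models a joint-request exchange in which a joint intention counts as formed only when every addressed Agent has agreed, and a single refusal dissolves it. Message fields and names are illustrative only.

```python
# Hypothetical sketch of a joint-request exchange (not the FIPA spec).
from dataclasses import dataclass, field

@dataclass
class ACLMessage:                      # simplified ACL-like envelope
    performative: str                  # e.g. "joint-request", "agree", "refuse"
    sender: str
    receivers: tuple
    content: str

@dataclass
class Team:
    goal: str
    members: tuple
    agreed: set = field(default_factory=set)

    def joint_intention_formed(self):
        return set(self.members) == self.agreed

def handle_reply(team, msg):
    """Process an agree/refuse reply to a joint-request."""
    if msg.performative == "agree" and msg.content == team.goal:
        team.agreed.add(msg.sender)
    elif msg.performative == "refuse":
        team.agreed.clear()            # one refusal -> no joint intention
    return team.joint_intention_formed()

team = Team(goal="deliver(parcel42)", members=("a1", "a2"))
for who in team.members:
    print(handle_reply(team, ACLMessage("agree", who, ("initiator",), team.goal)))
```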

    • An Applicable and Efficient Clustering Algorithm

      2004, 15(5):697-705.

      Abstract:In research on IR (information retrieval), many clustering algorithms have been developed, and in most of them some parameters must be set by hand. However, it is very difficult to determine such parameters manually without any prior domain knowledge. To solve this problem, an applicable and efficient clustering algorithm is presented. It aims to avoid any hand-tuned parameters and, at the same time, to improve the efficiency of clustering and the performance of IR. The new clustering algorithm is analyzed from several aspects and then applied to clustering Chinese documents. The results of this application confirm that the new clustering algorithm is applicable and efficient.

    • An Integrated-Utility Based Optimization in Multi-Issues Negotiation

      2004, 15(5):706-711.

      Abstract:In time-limited multi-issue negotiation among multiple Agents, the worst outcome is that no trade-off is reached. One of the most important factors contributing to the failure of a negotiation is that no agreement can be reached on one of the negotiation-related issues. A multi-issue integrated-utility evaluation mechanism is proposed first, and then an equal-utility exchange over an Agent's negotiation-related issues with a reserved-value vector is given. This exchange makes use of the relationship between the utilities of the Agent's negotiation-related issues. In order to break a deadlock on one of the negotiation-related issues, the reserved value of that issue is adjusted, while another mechanism ensures that the integrated utility of the multi-issue exchange is preserved. In this way, the efficiency and the probability of reaching a trade-off are improved. An example is given to verify the optimization.
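      A hedged sketch under simplifying assumptions (linear per-issue utilities and a weighted-sum integrated utility; the paper's evaluation mechanism and reserved-value handling may differ): concede on a deadlocked issue and compensate on another issue so that the integrated utility stays equal.

```python
# Equal-utility exchange between two issues under a weighted-sum utility.
def integrated_utility(offer, weights, utils):
    """offer/weights/utils keyed by issue; utils map an issue value to [0, 1]."""
    return sum(weights[i] * utils[i](offer[i]) for i in offer)

def equal_utility_exchange(offer, weights, utils,
                           concede_issue, delta, gain_issue, inv_gain_util):
    """Lower `concede_issue` by delta, raise `gain_issue` to keep utility constant."""
    new = dict(offer)
    u_before = integrated_utility(offer, weights, utils)
    new[concede_issue] = offer[concede_issue] - delta           # concession
    lost = u_before - integrated_utility(new, weights, utils)   # utility lost
    target = utils[gain_issue](new[gain_issue]) + lost / weights[gain_issue]
    new[gain_issue] = inv_gain_util(target)                     # compensation
    return new

# Example with two issues, price and delivery time (both scaled to [0, 10]).
weights = {"price": 0.6, "time": 0.4}
utils = {"price": lambda v: v / 10.0, "time": lambda v: v / 10.0}
offer = {"price": 6.0, "time": 4.0}
new = equal_utility_exchange(offer, weights, utils, "price", 2.0, "time",
                             lambda u: u * 10.0)
print(new,
      integrated_utility(offer, weights, utils),
      integrated_utility(new, weights, utils))   # both 0.52
```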

    • Fast Image Search Using Vector Quantization

      2004, 15(5):712-719.

      Abstract:Traditional indexing methods face the difficulty of the "curse of dimensionality" at high dimensionality. Accurate estimation of the data distribution and efficient partitioning of the data space are the key problems in high-dimensional indexing schemes. In this paper, a novel indexing method using vector quantization is proposed. It assumes a Gaussian mixture distribution, which fits real-world image data reasonably well. After estimating this distribution with the EM (expectation-maximization) method, the approach trains optimized vector quantizers to partition the data space, which exploits the dependencies among dimensions and achieves more accurate vector approximation with less quantization distortion. Experiments on a large real-world dataset show a remarkable reduction of the I/O overhead of the vector accesses that dominate the query time in exact NN (nearest neighbor) searches. They also show an improvement in indexing performance compared with existing indexing schemes.
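      A hedged illustration of the filter-and-refine idea behind VQ-based indexing: in the sketch below the paper's EM-fitted Gaussian-mixture quantizer is replaced by a few plain Lloyd/k-means iterations. Vectors are grouped into cells, and at query time whole cells are pruned with a triangle-inequality lower bound, so exact distances are computed for only a fraction of the vectors while the nearest neighbor remains exact.

```python
# VQ cells + triangle-inequality pruning for exact NN search (illustrative only).
import numpy as np

def build_codebook(X, k=16, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    radii = np.array([np.linalg.norm(X[labels == j] - centroids[j], axis=1).max()
                      if np.any(labels == j) else 0.0 for j in range(k)])
    return centroids, labels, radii

def nn_search(q, X, centroids, labels, radii):
    """Exact nearest neighbour; cells are visited in lower-bound order and pruned."""
    d_cent = np.linalg.norm(centroids - q, axis=1)
    lower = np.maximum(d_cent - radii, 0.0)         # d(q, x) >= d(q, c) - radius(c)
    best, best_idx, scanned = np.inf, -1, 0
    for j in np.argsort(lower):
        if lower[j] >= best:
            break                                   # no vector in this cell can win
        idx = np.flatnonzero(labels == j)
        d = np.linalg.norm(X[idx] - q, axis=1)
        scanned += len(idx)
        if d.min() < best:
            best, best_idx = d.min(), idx[d.argmin()]
    return best_idx, best, scanned

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 16))
centroids, labels, radii = build_codebook(X, k=32)
q = rng.standard_normal(16)
idx, dist, scanned = nn_search(q, X, centroids, labels, radii)
print(idx, round(dist, 3), "vectors scanned:", scanned, "of", len(X))
```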

    • Structural Join of XML Based on Range Partitioning

      2004, 15(5):720-729.

      Abstract:Structural join is the core operation in XML query processing and has attracted the research community's attention. Efficient algorithms are the key to efficient query processing. A number of algorithms have been proposed for structural join, and most of them are based on one of the following assumptions: the input element sets either have indexes or are ordered. When these assumptions do not hold, the performance of these algorithms degrades badly due to the cost of sorting the input sets or building indexes on the fly. Motivated by this observation, a structural join algorithm based on range partitioning is proposed in this paper. Based on the idea of task decomposition, this algorithm takes advantage of the region numbering scheme to partition the input sets. The procedure of the algorithm is described in detail, and its I/O complexity is analyzed. The results of extensive experiments show that this algorithm performs well and is superior to the existing sort-merge algorithms when the input data are not sorted or indexed. It thus provides query plans with more choices.
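      A simplified, hedged sketch of a range-partitioned structural join on region numbers (start, end): ancestors are replicated into every partition their region overlaps, descendants go to the single partition holding their start, and each partition is joined independently without any global sort or index. The paper's task-decomposition details and I/O strategy are not reproduced.

```python
# Range-partitioned ancestor/descendant structural join (illustrative only).
from collections import defaultdict

def structural_join(ancestors, descendants, num_parts=4, doc_size=1000):
    """ancestors/descendants: lists of (start, end) regions; returns (anc, desc) pairs."""
    width = (doc_size + num_parts - 1) // num_parts
    part_anc, part_desc = defaultdict(list), defaultdict(list)
    for a in ancestors:                          # replicate across overlapped ranges
        for p in range(a[0] // width, a[1] // width + 1):
            part_anc[p].append(a)
    for d in descendants:                        # each descendant lands in one range
        part_desc[d[0] // width].append(d)
    result = []
    for p, descs in part_desc.items():           # join each partition on its own
        for a in part_anc.get(p, []):
            for d in descs:
                if a[0] < d[0] and d[1] < a[1]:  # region containment = ancestor axis
                    result.append((a, d))
    return result

# Tiny example: element regions of a small document.
anc = [(1, 400), (450, 900)]
desc = [(20, 30), (380, 390), (500, 510), (910, 920)]
print(structural_join(anc, desc))
```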

    • An Algorithm for Creating an Object Normal Form Based on Directed Graph

      2004, 15(5):730-740.

      Abstract:In this paper, the major ideas of the normalization theory for object-oriented data models proposed by Tari et al. are introduced, their methods for creating an object normal form are analyzed, and the problems in these methods are pointed out. In order to develop a new method for creating such an object normal form, the meaning of a vertex of the directed graph is extended so that it is not only a simple vertex but may itself be a directed graph. Based on this extended directed graph, an algorithm for creating the object normal form is proposed, the time complexity of the algorithm is analyzed, and its correctness is proved.
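      A hypothetical data-structure sketch only, mirroring the vertex extension described above (a vertex that may itself carry a nested directed graph); the normalization algorithm itself is not reproduced, and all names are illustrative.

```python
# Extended directed graph: a vertex may hold a nested directed graph.
from dataclasses import dataclass, field

@dataclass
class Vertex:
    name: str
    nested: "Digraph | None" = None        # a vertex can be a whole subgraph

@dataclass
class Digraph:
    vertices: dict = field(default_factory=dict)   # name -> Vertex
    edges: set = field(default_factory=set)        # (from_name, to_name)

    def add_vertex(self, name, nested=None):
        self.vertices[name] = Vertex(name, nested)

    def add_edge(self, u, v):
        self.edges.add((u, v))

    def flatten(self, prefix=""):
        """Enumerate all vertices, descending into nested graphs."""
        for name, v in self.vertices.items():
            yield prefix + name
            if v.nested is not None:
                yield from v.nested.flatten(prefix + name + ".")

inner = Digraph(); inner.add_vertex("street"); inner.add_vertex("city")
inner.add_edge("street", "city")
outer = Digraph(); outer.add_vertex("person"); outer.add_vertex("address", nested=inner)
outer.add_edge("person", "address")
print(list(outer.flatten()))
```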

    • A Cost Model for Data Placement and Access Path Selection Problem in FC-SAN

      2004, 15(5):741-751.

      Abstract:Network storage methods (e.g. FC-SAN) with virtual storage technology are becoming a powerful substitute for DAS (direct-attached storage) in digital libraries and other massive-storage applications. However, the efficiency of FC-SAN virtual storage strongly depends on certain attributes of the stored documents; in some cases, FC-SAN may even perform no better than data sharing over a LAN. This paper illustrates this point through a study of the data placement and access path selection issues in a network storage environment. The paper first presents a linear time-consumption model of data access, derived from an analysis of the virtual storage principle, and then gives a decision-making method for the data placement and access path selection problem. In the development of this method, the concept of the equivalent virtual storage cost is defined to evaluate the data placement cost in the FC-SAN virtual storage environment. Finally, the theoretical analysis, the method, and the assumptions are validated by experimental results on a massive storage subsystem of a digital library prototype.
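      A hedged sketch with made-up parameters: a linear time-cost model of data access, t(size) = startup + size / bandwidth, evaluated per candidate placement/path, with the cheapest option chosen. The paper's equivalent virtual storage cost and its calibrated coefficients are not reproduced; the two printed cases only echo the point that the better path depends on document size.

```python
# Linear access-time model and path selection (hypothetical parameters).
def access_time(size_mb, startup_s, bandwidth_mb_s):
    return startup_s + size_mb / bandwidth_mb_s

def choose_path(size_mb, options):
    """options: {name: (startup_s, bandwidth_mb_s)}; return (best_name, cost, all_costs)."""
    costs = {name: access_time(size_mb, s, b) for name, (s, b) in options.items()}
    best = min(costs, key=costs.get)
    return best, costs[best], costs

# Illustrative (made-up) parameters.
options = {
    "FC-SAN virtual path": (0.050, 180.0),   # higher startup, high bandwidth
    "LAN file sharing":    (0.010, 40.0),    # cheaper setup, slower transfer
}
print(choose_path(2048, options))   # large document: FC-SAN wins
print(choose_path(1, options))      # tiny document: LAN sharing wins
```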

    • Security Flaws in a Confirmer Signature Scheme

      2004, 15(5):752-756.

      Abstract:Confirmer signatures differ from standard signatures in that, without the help and cooperation of a designated confirmer, a verifier cannot determine the validity of a confirmer signature. At the same time, nobody except the signer (not even the confirmer) can generate a valid confirmer signature on the signer's behalf, and the confirmer cannot cheat verifiers once he is involved in the signature verification procedure. Furthermore, if necessary, the confirmer can convert confirmer signatures into standard ones whose validity can be publicly verified. Wang et al. proposed an efficient new confirmer signature scheme based on DSA and RSA and claimed that their scheme is secure. However, several serious security flaws in their scheme are identified in this paper, which show that their construction does not succeed.

    • Distributed Group Rekeying Algorithms for Mobile Ad-Hoc Networks

      2004, 15(5):757-766.

      Abstract:Many emerging mobile wireless applications depend upon secure group communication, in which a secure and efficient group rekeying algorithm is very important. In this paper, a rekeying algorithm named DGR (distributed group rekeying algorithm) is proposed, which is based on DGKMF (distributed group key management framework). The DGR algorithm generates a group key from local secret information and is suitable for mobile ad hoc networks. In order to further reduce the communication complexity, the DGR algorithm is improved by dynamically generating a cluster during the rekeying process, yielding the CDGR (cluster distributed group rekeying algorithm). The security, correctness, and completeness of the two algorithms are discussed in this paper, and their message complexity is evaluated. Simulation results demonstrate that the two algorithms outperform other algorithms and protocols such as CKD, GDH v.2 and BD in group rekeying success ratio and delay, and that CDGR has a lower rekeying delay than DGR because it uses clusters in the rekeying process.

    • Cryptanalysis of a Variant of a Cellular Automata Based Cryptosystem

      2004, 15(5):767-771.

      Abstract:Subhayan Sen et al. proposed a block cipher called CAC (cellular automata based cryptosystem) based on cellular automata theory, but they did not give a detailed description of some of its modules. From the point of view of application, one module of CAC is fixed here, and the resulting variant is called SMCAC (same major-CA CAC). A cryptanalysis of SMCAC is given, and the results show that this variant of the cipher is very insecure under chosen-plaintext attacks. Taking the cryptanalysis of the variant as a reference, attacks on the original cipher itself may be found once some of its design details are known.

    • A QoS-Aware and Scalable Multicast Routing Protocol Based on Bounded Flooding Technique

      2004, 15(5):772-782.

      Abstract:The emergence of distributed multimedia applications such as teleconferencing, remote education and distributed interactive simulation highlights the importance of multicast services. The QoS (quality of service) requirements of these applications drive the development of QoS-aware multicast routing. Recently, many QoS-aware multicast protocols have been proposed to meet these requirements, but few of them achieve high success ratio, high scalability and low control message overhead at the same time. A new QoS-aware multicast routing protocol based on a bounded flooding technique is proposed. It aims at alleviating the memory overhead of routers for setting up multicast trees and at improving the scalability of the protocol. In this scheme, every node keeps a two-level forwarding table that contains information about its immediate neighbors (routers reachable in one hop) and its second-degree neighbors (the neighbors of an immediate one). Using the information about second-degree neighbors, a router can forward Join_Probe messages intelligently instead of flooding them blindly. The protocol also utilizes multi-path searching to increase the probability of finding feasible branches when connecting a new node to the multicast tree. The data structures of the protocol and the algorithm for building a distribution tree are described in detail. Simulations evaluating the protocol in terms of ACAR (average call acceptance ratio) and ACMO (average control message overhead) demonstrate its effectiveness.
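      A hedged sketch of the two-level forwarding table described above, built from a plain adjacency list, and of how it can restrict Join_Probe forwarding to neighbors that actually lead toward a wanted second-degree node instead of flooding blindly. QoS admission tests and the multi-path search of the protocol are not reproduced; names are illustrative.

```python
# Two-level neighbor table and selective Join_Probe forwarding (illustrative).
def two_level_table(adj, node):
    """Return {immediate_neighbor: set_of_its_neighbors} as seen from `node`."""
    return {nbr: set(adj[nbr]) - {node} for nbr in adj[node]}

def forward_targets(table, probe_target):
    """Neighbors worth forwarding a Join_Probe to when searching for probe_target."""
    if probe_target in table:                     # target is an immediate neighbor
        return {probe_target}
    return {nbr for nbr, second in table.items() if probe_target in second}

adj = {
    "r1": ["r2", "r3"],
    "r2": ["r1", "r4"],
    "r3": ["r1", "r4", "r5"],
    "r4": ["r2", "r3"],
    "r5": ["r3"],
}
table = two_level_table(adj, "r1")
print(table)
print(forward_targets(table, "r4"))   # both r2 and r3 reach r4 in two hops
print(forward_targets(table, "r5"))   # only r3 does
```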

    • Fault-Tolerant Routing Based on Safety Path Vectors in Hypercube System

      2004, 15(5):783-790.

      Abstract:Hypercube multicomputer systems perform well in parallel and distributed computation. As the size of a multicomputer network grows, the probability of faults in its computers and links increases, so it becomes very important to seek better fault-tolerant routing strategies that remain effective when many faults occur in the system. Much significant research has been done on fault-tolerant routing for hypercube multicomputers. An innovative fault-tolerant routing algorithm is proposed, in which each node uses a safety path vector (SPV) to record the optimal paths to the other nodes. The safety path vector is an approximate measure of the number and distribution of the faults in the neighborhood; it can be set up or updated through n-1 rounds of information exchange among neighboring nodes while consuming only n bits of storage per node. Compared with previous fault-tolerant routing algorithms such as safety vectors (SVs), extended safety vectors (ESVs), optimal path matrices (OPMs) and extended optimal path matrices (EOPMs), SPVs have a stronger ability to trace optimal paths at equal or lower storage cost. Analysis and simulation are given to show the merit of SPVs.
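      A hedged baseline sketch only, not the SPV algorithm: it shows the hypercube facts the abstract relies on (neighbors are reached by flipping one address bit, the shortest path length equals the Hamming distance) and a naive greedy routing step that forwards toward any fault-free neighbor reducing that distance. The safety-path-vector bookkeeping and its n-1 rounds of neighbor exchange are not reproduced.

```python
# Naive hypercube routing baseline (illustrative; not the SPV scheme).
def neighbors(node, n):
    """All nodes of an n-cube that differ from `node` in exactly one bit."""
    return [node ^ (1 << k) for k in range(n)]

def hamming(a, b):
    return bin(a ^ b).count("1")

def greedy_route(src, dst, n, faulty=frozenset()):
    """Route hop by hop; fail (return None) if every useful neighbor is faulty."""
    path, cur = [src], src
    while cur != dst:
        options = [v for v in neighbors(cur, n)
                   if v not in faulty and hamming(v, dst) < hamming(cur, dst)]
        if not options:
            return None                  # a smarter scheme (e.g. SPV) would detour
        cur = options[0]
        path.append(cur)
    return path

# 4-cube example with two faulty nodes.
print(greedy_route(0b0000, 0b1111, 4, faulty={0b0001, 0b0110}))
```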
