• Volume 18, Issue 2, 2007 Table of Contents
    • A Compression Algorithm for Multi-Streams Based on Wavelets and Coincidence

      2007, 18(2):177-184.

      Abstract (4350) HTML (0) PDF 487.63 K (4883) Comment (0) Favorites

      Abstract:Methods based on Haar wavelets and coincidence characteristics are proposed to compress multi-streams. The main contributions include: (1) The energy conservation law of the Haar wavelet transform is proved and used to compress data streams. (2) The relation between the coincidence measure and the trend of streams is revealed, along with its invariance under parallel shifts and the equivalence law over the coincidence measure, so that data streams can be approximately expressed by the wavelet coefficients of the characteristic stream and its energy. (3) A multi-scale energy decomposition model is proposed to improve compression precision. (4) A multi-scale compression algorithm and an energy-conservation reconstruction algorithm are designed. (5) Extensive experiments show that the compression ratio of the new methods is 2 to 4 times that of traditional methods.
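The energy conservation that contribution (1) relies on is the standard Parseval identity for the orthonormal Haar transform: the squared L2 norm of a stream equals that of its wavelet coefficients, so dropping small coefficients bounds the reconstruction error. A minimal sketch (plain Python, illustrative only; this is not the paper's multi-scale algorithm):

```python
import math

def haar_transform(x):
    """One full orthonormal Haar wavelet transform of a length-2^k signal."""
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        avgs, diffs = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            avgs.append((a + b) / math.sqrt(2))   # scaled pairwise averages
            diffs.append((a - b) / math.sqrt(2))  # scaled pairwise differences
        coeffs = diffs + coeffs                   # finer details go last
        approx = avgs
    return approx + coeffs  # [overall approximation, detail coefficients...]

def energy(v):
    return sum(t * t for t in v)

stream = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
w = haar_transform(stream)
# Parseval: the transform preserves energy up to floating-point error.
print(abs(energy(stream) - energy(w)) < 1e-9)
```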

    • A Multiple Alignment Approach for DNA Sequences Based on the Maximum Weighted Path Algorithms

      2007, 18(2):185-195.

      Abstract (5003) HTML (0) PDF 966.10 K (7019) Comment (0) Favorites

      Abstract:For the multiple sequence alignment problem in molecular biological sequence analysis, many heuristic algorithms have been proposed to improve computation speed and alignment quality when the number of input sequences is very large. An approach called MWPAlign (maximum weighted path alignment) is presented to perform global multiple alignment of DNA sequences. In this method, a de Bruijn graph is used to represent the input sequences, whose information is recorded in the edges of the graph. As a result, the consensus-finding problem can be transformed into a maximum weighted path problem on the graph. MWPAlign runs in almost linear time for the multiple sequence alignment problem. Experimental results show that the proposed algorithm is feasible, and for large numbers of sequences with mutation rates below 5.2%, MWPAlign obtains better alignment results and requires less computation time than CLUSTALW (cluster alignments weight), T-Coffee, and HMMT (hidden Markov model training).
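To illustrate the idea of recording sequence information on de Bruijn graph edges and reducing consensus finding to a heaviest-path search, here is a toy sketch. The greedy heaviest-edge walk and the choice of k are simplifying assumptions for illustration; this is not MWPAlign itself:

```python
from collections import defaultdict

def debruijn_consensus(seqs, k=3):
    """Toy consensus finder: weight (k-1)-mer edges by occurrence count,
    then greedily follow the heaviest outgoing edge."""
    edges = defaultdict(lambda: defaultdict(int))
    starts = defaultdict(int)
    for s in seqs:
        for i in range(len(s) - k + 1):
            u, v = s[i:i + k - 1], s[i + 1:i + k]
            edges[u][v] += 1                    # edge weight = occurrences
        starts[s[:k - 1]] += 1
    node = max(starts, key=starts.get)          # most common start (k-1)-mer
    consensus, seen = node, set()
    while node in edges and node not in seen:   # stop before looping forever
        seen.add(node)
        node = max(edges[node], key=edges[node].get)
        consensus += node[-1]
    return consensus

reads = ["ACGTAC", "ACGTTC", "ACGTAC"]
print(debruijn_consensus(reads))  # majority path recovers "ACGTAC"
```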

    • A Study and Improvement of Minimum Sample Risk Methods for Language Modeling

      2007, 18(2):196-204.

      Abstract (4192) HTML (0) PDF 561.65 K (4984) Comment (0) Favorites

      Abstract:Most existing discriminative training methods adopt smooth loss functions that can be optimized directly. In natural language processing (NLP), however, many applications adopt evaluation metrics that take the form of a step function, such as character error rate (CER). To address this problem, a newly-proposed discriminative training method called minimum sample risk (MSR) is analyzed. Unlike other discriminative methods, MSR directly takes a step function as its loss function. MSR is first analyzed and improved in time/space complexity. Then an improved version, MSR-II, is proposed, which makes the computation of interference in the feature selection step more stable. In addition, experiments on domain adaptation are conducted to investigate the robustness of MSR-II. Evaluations on the task of Japanese text input show that: (1) MSR/MSR-II significantly outperforms a traditional trigram model, reducing CER by 20.9%; (2) MSR/MSR-II is comparable to two other state-of-the-art discriminative algorithms, Boosting and Perceptron; (3) MSR-II outperforms MSR not only in time/space complexity but also in the stability of feature selection; (4) the domain adaptation experiments show the robustness of MSR-II. In all, MSR/MSR-II is a very effective algorithm that, given its step loss function, can be widely applied to many fields of NLP, such as spelling check and machine translation.

    • Convergence Analysis of Mean Shift Algorithm

      2007, 18(2):205-212.

      Abstract (6461) HTML (0) PDF 576.38 K (7735) Comment (0) Favorites

      Abstract:Research on the convergence of the Mean Shift algorithm is the foundation of its application. Comaniciu and Li Xiang-ru have each provided a proof of the convergence of Mean Shift, but both proofs contain a mistake. In this paper, the imprecise proofs in the literature are first pointed out. Then, local convergence is proved in a new way, and a condition for convergence to a local maximum point is given. Finally, geometrical counterexamples concerning the convergence of Mean Shift are provided, and the conclusion is further discussed. The results of this paper contribute to further theoretical study and wider application of the Mean Shift algorithm.
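For reference, the iteration whose convergence is at issue can be sketched in one dimension with a Gaussian kernel (a standard textbook form of Mean Shift; the data and bandwidth below are illustrative, not the paper's counterexamples):

```python
import math

def mean_shift_1d(points, start, bandwidth=1.0, tol=1e-6, max_iter=200):
    """Mean Shift with a Gaussian kernel: repeatedly move to the
    kernel-weighted mean until the shift vanishes (local convergence)."""
    x = start
    for _ in range(max_iter):
        w = [math.exp(-((p - x) / bandwidth) ** 2 / 2) for p in points]
        m = sum(wi * pi for wi, pi in zip(w, points)) / sum(w)
        if abs(m - x) < tol:        # reached a stationary point
            return m
        x = m
    return x

# Two clusters around 0 and 10; starting in either basin, the iteration
# converges to the corresponding density mode.
data = [-0.2, 0.0, 0.1, 9.8, 10.0, 10.3]
print(mean_shift_1d(data, start=1.0))
print(mean_shift_1d(data, start=9.0))
```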

    • A Vertex Refinement Method for Graph Isomorphism

      2007, 18(2):213-219.

      Abstract (4654) HTML (0) PDF 450.09 K (5834) Comment (0) Favorites

      Abstract:In this paper, a vertex refinement method is proposed. The new vertex invariant is defined based on the number of paths of a given length. A comparison between this vertex invariant and other common vertex invariants is made. It is proved that this method is at least as fine as the other methods, and examples are given to show that it is better in some cases. This vertex refinement method can be used in graph isomorphism algorithms to reduce the number of candidate mappings between vertices.
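An invariant of this flavor can be computed from powers of the adjacency matrix, whose row sums count walks of a given length from each vertex (a related but simpler quantity than path counts; the paper's exact definition is not reproduced here). On a 5-vertex path graph the length-2 counts refine the degree partition:

```python
def path_count_invariant(adj, length):
    """Vertex invariant: for each vertex, the number of walks of a given
    length starting there, from powers of the adjacency matrix."""
    n = len(adj)
    power = [row[:] for row in adj]
    for _ in range(length - 1):
        power = [[sum(power[i][k] * adj[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    return [sum(row) for row in power]  # row sums = walk counts

# Path graph 0-1-2-3-4: vertices 1 and 2 share degree 2 (length-1 counts)
# but are distinguished by their length-2 walk counts.
A = [[0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]
print(path_count_invariant(A, 1))  # degrees: [1, 2, 2, 2, 1]
print(path_count_invariant(A, 2))  # finer:   [2, 3, 4, 3, 2]
```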

    • Scheduling with Resource Allocation for System-Level Synthesis

      2007, 18(2):220-228.

      Abstract (4740) HTML (0) PDF 629.02 K (4837) Comment (0) Favorites

      Abstract:In system-level synthesis, the allocation of resources is usually decided by the designer or explored in the outermost loop. In this paper, a heuristic scheduling algorithm is proposed that finds the resource allocation during its execution. It determines the appropriate number of required resource instances based on the system partition during scheduling, and generates the corresponding resource allocation, scheduling, and assignment solution. Such an algorithm can simplify system-level design exploration to a procedure of system partitioning, scheduling, and evaluation, and can improve exploration efficiency. Experimental results show the feasibility and validity of the approach.

    • Exploring Load Balancing of a Parallel Switch with Input Queues

      2007, 18(2):229-235.

      Abstract (4181) HTML (0) PDF 432.12 K (4625) Comment (0) Favorites

      Abstract:The parallel switch is an emerging switch technology by which a high-capacity switching system (such as a terabit or larger switch) can be built from many small switch fabrics. This paper refers to the parallel switch with input queues as the buffered parallel switch (BPS) and addresses the open issue of load balancing for switch fabrics working in parallel and independently. Two classes of definitions that characterize load balancing in different ways are proposed. Conditions for BPS load balancing are then analyzed, and a family of distributed scheduling algorithms is presented. Finally, a simple and efficient scheduling algorithm that satisfies both classes of definitions in a BPS without speedup is developed. Simulation results show the validity and performance of the load-balancing algorithm. Practical implementation of the distributed scheduling algorithms is also discussed.

    • A Metadata Management Method Based on Directory Path

      2007, 18(2):236-245.

      Abstract (5248) HTML (0) PDF 558.98 K (6839) Comment (0) Favorites

      Abstract:A metadata management method that separates the directory path attribute from the directory object is proposed, which extends the present object storage architecture. This method effectively avoids large-scale metadata migration caused by updates to directory attributes, improves cache utilization and hit rate by reducing redundant caching of prefix directories, reduces disk I/O demands by lowering the overhead of traversing the directory path and exploiting directory locality, and avoids overloading a single metadata server through dynamic load balancing. Experimental results demonstrate that this method has obvious advantages in improving throughput, scalability, and the balance of metadata distribution, and in reducing metadata migration.

    • Review Articles
    • Similarity Discovery Techniques in Temporal Data Mining

      2007, 18(2):246-258.

      Abstract (7767) HTML (0) PDF 685.38 K (11297) Comment (0) Favorites

      Abstract:Temporal data mining (TDM) has been attracting more and more interest from a vast range of domains, from engineering to finance. Similarity discovery techniques concentrate on the evolution and development of data, attempting to discover the regularities in the similarity of dynamic data evolution. The most significant techniques developed in recent research to deal with similarity discovery in TDM are analyzed. First, definitions of three categories of temporal data (time series, event sequences, and transaction sequences) are presented; then the current techniques and methods for these sequences, covering similarity measures, representations, searching, and the various mining tasks involved, are classified and discussed. Finally, some future research trends in this area are discussed.
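As a concrete instance of the similarity measures such surveys cover, the sketch below computes Euclidean distance between z-normalized time series, a standard baseline that makes similarity invariant to offset and amplitude (illustrative only; not specific to this survey):

```python
import math

def znorm(series):
    """Z-normalize so similarity ignores offset and scale."""
    mean = sum(series) / len(series)
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series))
    return [(x - mean) / std for x in series]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Same shape at different offset/amplitude -> near-zero distance after
# normalization; a differently shaped series stays far away.
s1 = [1, 2, 3, 4, 5]
s2 = [10, 20, 30, 40, 50]
s3 = [3, 3.1, 2.9, 3, 3.05]
d_same = euclidean(znorm(s1), znorm(s2))
d_diff = euclidean(znorm(s1), znorm(s3))
print(d_same, d_diff)
```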

    • View Maintenance Strategy in Peer Data Management Systems

      2007, 18(2):259-267.

      Abstract (4303) HTML (0) PDF 534.03 K (5018) Comment (0) Favorites

      Abstract:In this paper, a strategy for view maintenance in peer data management systems (PDMSs) is proposed. First, a hybrid peer architecture is presented, in which peer-to-peer organization is preferred over a super-peer based architecture. When a view is reformulated along a semantic path, it can retrieve data from many data sources, so peer views, local views, and global views are introduced. Global view maintenance is reduced to the maintenance of all related local views if join operations are confined to each local PDMS, and a query gains the same result no matter which peer on the same semantic path it is posed over. Furthermore, a view may be a conjunctive query relating to several tables in PDMSs. According to the application, Mork's rules are extended, and based on the new rule system, update data are propagated in the PDMS. A view maintenance algorithm in PDMSs based on the new rule system is then proposed. Finally, simulation experiments are carried out in the SPDMS (schema-mapping-based PDMS). The simulation results show that the proposed view maintenance strategy performs better than Mork's.

    • Scalable Processing of Incremental Continuous k-Nearest Neighbor Queries

      2007, 18(2):268-278.

      Abstract (4320) HTML (0) PDF 701.29 K (5605) Comment (0) Favorites

      Abstract:To continuously evaluate large collections of concurrent CKNN (continuous k-nearest neighbor) queries, a scalable incremental continuous k-nearest neighbor (SI-CNN) framework is proposed, which introduces a searching region to filter the TPR-tree (time-parameterized R-tree) nodes to be visited. The SI-CNN framework exploits an incremental results table to buffer candidate objects and flushes them into query results in bulk, so it efficiently processes large numbers of concurrent CKNN queries and scales well. An incremental SI-CNN query update algorithm is presented, which evaluates incrementally based on the former query answers and supports insertion and deletion in both the query collection and the set of moving objects. Experimental results and analysis show that the SI-CNN algorithm based on the SI-CNN framework supports large sets of concurrent CKNN queries well and is of good practical value.

    • A Probabilistic Model Based Predictive Spatio-Temporal Range Query Processing

      2007, 18(2):279-290.

      Abstract (4175) HTML (0) PDF 923.37 K (5839) Comment (0) Favorites

      Abstract:A probabilistic approach is proposed, which adopts a filter-refinement framework for query processing. First, all objects that possibly satisfy a query are retrieved as candidate results. Then, the probability that each candidate will satisfy the query is evaluated based on a probability model proposed in the paper. Finally, a user-defined minimum probability threshold is used to filter out unqualified candidates to obtain the final predictive result. The future location of a moving object is defined as a random variable in the probability model. Two modes are proposed to describe an object's movement status in the spatio-temporal query range, and corresponding methods are presented to compute the probability that an object will satisfy the query in each mode. A trajectory analysis algorithm is proposed to estimate the probability density functions (PDFs) from historical trajectories, and an index structure is designed to efficiently support storing and accessing the PDFs. The experimental results show that the proposed solution can effectively process predictive spatio-temporal range queries and improve the correctness of the predictive results. It is especially suitable for queries with a small spatial range and a long future time interval.
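The refinement step can be illustrated with a Monte Carlo estimate of the probability that a candidate's future position falls inside the query range. The Gaussian-speed movement model and all numbers below are hypothetical, standing in for the paper's learned PDFs:

```python
import random

def prob_in_range(sample_future, lo, hi, trials=20000):
    """Monte Carlo estimate of P(future position in [lo, hi]), treating
    the future location as a random variable with a given distribution."""
    hits = sum(lo <= sample_future() <= hi for _ in range(trials))
    return hits / trials

random.seed(1)
# Hypothetical object: position 0 now, speed ~ N(1.0, 0.2), query at t = 5,
# so the future position is ~ N(5, 1).
sample = lambda: 5 * random.gauss(1.0, 0.2)
p = prob_in_range(sample, 4.0, 6.0)
# Keep the candidate only if p exceeds a user-defined threshold.
print(p > 0.5)
```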

    • Fast Clustering of Data Streams Using Graphics Processors

      2007, 18(2):291-302.

      Abstract (4937) HTML (0) PDF 739.38 K (6279) Comment (0) Favorites

      Abstract:Clustering data streams requires fast processing speed as well as high-quality clustering results. In this paper, some novel approaches are presented for such clustering tasks using graphics processing units (GPUs), e.g., a K-means-based method, a stream clustering method, and an evolving data stream analysis method. The common characteristic of these methods is that they make use of the strong computational and pipeline power of GPUs. Unlike previous clustering methods with individual frameworks, these methods share the same multi-function framework, which provides a uniform platform for stream clustering. In stream clustering, the core operations are distance computation and comparison, and these two operations can be implemented using the fragment vector processing capabilities of GPUs. Extensive experiments are conducted on a PC with a Pentium IV 3.4G CPU and an NVIDIA GeForce 6800 GT graphics card. A comprehensive performance study is presented to prove the efficiency of the proposed algorithms. It is shown that these algorithms are about 7 times faster than previous CPU-based algorithms, so they well support applications with high-speed data streams.

    • Self-Adaptive Estimation of View Change Frequency in Web Warehouses

      2007, 18(2):303-310.

      Abstract (4512) HTML (0) PDF 708.63 K (4661) Comment (0) Favorites

      Abstract:Refreshing materialized views is a main task of Web warehouse maintenance. As the refreshing scheme depends heavily on the change frequency of the base data, researchers have presented many corresponding algorithms and frequency estimators. Although these estimators work, all of them have limitations: the bias an estimator introduces increases significantly when the estimated value falls outside its applicable range. In this paper, a self-adaptive algorithm based on Poisson process analysis is presented, which can adjust the revisiting pattern and revisiting frequency according to the estimated change frequency. The algorithm can also tune the parameters so that the estimated value falls into the best applicable range of the estimator. According to the experimental results, the proposed estimator is more accurate than those in previous work.
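For context, a standard Poisson-based change-frequency estimator of the kind such work builds on: if a page is seen changed in X of n equally spaced visits, the change rate can be estimated from the detection ratio. The exact formula and bias correction below are common textbook choices, not this paper's algorithm:

```python
import math
import random

def estimate_change_rate(detected, visits, interval):
    """Poisson-based estimator: lambda ~= -ln(1 - X/n) / interval.
    The 0.5 terms are a common correction for the bias near X = n."""
    return -math.log((visits - detected + 0.5) / (visits + 0.5)) / interval

random.seed(7)
true_rate, interval, visits = 0.3, 1.0, 10000
# A Poisson change with rate lambda is detected in an interval of
# length t with probability 1 - exp(-lambda * t).
p_change = 1 - math.exp(-true_rate * interval)
detected = sum(random.random() < p_change for _ in range(visits))
est = estimate_change_rate(detected, visits, interval)
print(round(est, 3))
```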

    • A Data Classification Method Based on Concept Similarity

      2007, 18(2):311-322.

      Abstract (5017) HTML (0) PDF 791.79 K (6512) Comment (0) Favorites

      Abstract:In this paper, a classification method is proposed based on the similarity information of data properties. The new method treats the data properties as basis vectors of an m-dimensional space, and each data item is viewed as the sum of all its property vectors. A novel distance algorithm is suggested to obtain the distance between every pair of properties based on the similarity information of the basis property vectors. A data classification algorithm is also presented based on a correlation formula composed of property vectors and their projections onto each other. The efficiency of the new method is demonstrated by extensive experiments.
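The flavor of classifying by correlations between property vectors can be sketched with a nearest-centroid rule under cosine similarity. This is an illustrative stand-in, not the paper's correlation formula, and the class names and data are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity: normalized projection of one vector on another."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def classify(x, classes):
    """Assign x to the class whose centroid is most similar."""
    return max(classes, key=lambda c: cosine(x, classes[c]))

train = {
    "spam": centroid([[3, 0, 1], [4, 1, 0]]),
    "ham":  centroid([[0, 2, 3], [1, 3, 4]]),
}
print(classify([2, 0, 0], train))  # dominated by the first property
```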

    • A Technique to Generate and Optimize the Materialized Model for XML Data

      2007, 18(2):323-331.

      Abstract (4042) HTML (0) PDF 501.10 K (4912) Comment (0) Favorites

      Abstract:One way to improve the performance of XML (extensible markup language) management systems is to materialize parts of the XML documents and store them in a cache. In this paper, a method is presented to characterize query sets over XML data as a schema graph, which is used to generate a materialized plan based on the distribution of users' queries. Experimental results demonstrate its performance gain in XML cache management.

    • Clustering Objects in a Road Network

      2007, 18(2):332-344.

      Abstract (4435) HTML (0) PDF 500.14 K (6692) Comment (0) Favorites

      Abstract:Most spatial clustering algorithms deal with objects in Euclidean space. In many real applications, however, the accessibility of spatial objects is constrained by spatial networks (e.g. road networks), so it is more realistic to cluster objects in a road network. The distance metric in such a setting is redefined as the network distance, which has to be computed by expensive shortest-path computation over the network, and existing methods are therefore not applicable. By exploiting unique features of road networks, two new clustering algorithms are presented, which use the information of nodes and edges in the network to prune the search space and avoid unnecessary distance computations. The experimental results indicate that the algorithms achieve high efficiency for clustering objects in real road networks.
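The network distance that replaces the Euclidean metric is a shortest-path computation; a minimal Dijkstra sketch over a hypothetical toy road graph (illustrative only; the paper's contribution is pruning these computations, not the search itself):

```python
import heapq

def network_distance(graph, src, dst):
    """Shortest-path (network) distance via Dijkstra's algorithm."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Tiny road network: objects close in Euclidean space can be far apart
# along the roads, which changes cluster membership.
roads = {
    "a": [("b", 1.0), ("c", 5.0)],
    "b": [("a", 1.0), ("c", 1.0)],
    "c": [("a", 5.0), ("b", 1.0)],
}
print(network_distance(roads, "a", "c"))  # 2.0 via b, not the direct 5.0
```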

    • Global Timestamp Serialization in Multi-Level Multi-Version DBMS

      2007, 18(2):345-350.

      Abstract (4454) HTML (0) PDF 370.11 K (5061) Comment (0) Favorites

      Abstract:The concurrency control mechanism in a multi-level DBMS is required to guarantee the serializability of transactions and the multi-level security properties, and to avoid possible covert channels and the starvation of high-level transactions. Multi-level multi-version timestamp ordering mechanisms satisfy these requirements but may cause transactions to read old versions of data, and the scheduler is required to be a trusted process. This paper presents a multi-level multi-version global timestamp ordering (MLS_MVGTO) mechanism and the basic global timestamp generation steps based on the transaction's snapshot. It also presents two improvements based on prior knowledge of read-only transactions. In addition, the mechanism can be implemented as a set of untrusted schedulers, and given prior knowledge of transactions' operations, transactions are able to read more recent versions.

    • A Chaos-Based Framework and Implementation for Software Watermarking Algorithm

      2007, 18(2):351-360.

      Abstract (5533) HTML (0) PDF 607.69 K (5715) Comment (0) Favorites

      Abstract:A chaos-based software watermarking framework that addresses several limitations of existing watermarking algorithms is proposed in this paper, in which anti-reverse-engineering techniques and a chaotic system are combined with the idea of Easter Egg software watermarking. With the chaotic system, global protection for the program is provided by dispersing the watermark over the whole code of the program; with the anti-reverse-engineering techniques, resistance against reverse engineering is improved. The framework can be implemented under various software and hardware platforms. In this paper, the watermarking framework is implemented under the Intel i386 architecture and the Windows operating system, and this implementation is taken as an example to analyze the robustness of the watermarking framework and the performance degradation of the watermarked program. The results indicate that the watermarking can resist various semantics-preserving transformation attacks and tolerates reverse engineering attacks well. The algorithm is highly robust.

    • Semantic Cache Technology for Aggregate Queries

      2007, 18(2):361-371.

      Abstract (4230) HTML (0) PDF 681.85 K (5407) Comment (0) Favorites

      Abstract:To process aggregate queries in massive database applications efficiently, semantic cache technology, which at present is mostly used in small-scale database applications, is extended in this paper. First, a formal semantic cache model for aggregate queries is proposed. Based on this model, a semantic cache system called StarCache is built. The key technologies of StarCache for aggregate query processing, cache replacement, and consistency maintenance are also discussed. StarCache has been integrated into StarTP, a parallel database middleware developed by a team at the National University of Defense Technology, and has been applied in a large national project.

    • A Time Synchronization Mechanism and Algorithm Based on Phase Lock Loop

      2007, 18(2):372-380.

      Abstract (5387) HTML (0) PDF 692.41 K (6976) Comment (0) Favorites

      Abstract:In this paper, an analysis model of the computer clock is discussed, and the characteristics of existing synchronization mechanisms are summarized. Subsequently, a low-power unidirectional reference broadcast synchronization mechanism is developed, which can achieve offset compensation and drift compensation simultaneously. Its implementation algorithm is designed based on the principle of the traditional phase-locked loop (PLL), and to avoid introducing extra hardware, a simple digital PLL is constructed. Finally, validation is performed on the Mica2 experimental platform, and the performance is evaluated and compared with typical algorithms.
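Offset and drift, the two quantities the mechanism compensates, can be illustrated with a least-squares fit of local timestamps against reference broadcasts. This simple regression is a conceptual stand-in, not the paper's digital PLL; the 50 ppm / 2 ms numbers are made up:

```python
def fit_offset_drift(ref_times, local_times):
    """Least-squares fit: local ~= offset + (1 + drift) * reference."""
    n = len(ref_times)
    mx = sum(ref_times) / n
    my = sum(local_times) / n
    sxx = sum((x - mx) ** 2 for x in ref_times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref_times, local_times))
    rate = sxy / sxx                 # estimated clock rate = 1 + drift
    return my - rate * mx, rate - 1.0

ref = [float(t) for t in range(0, 100, 10)]
local = [t * (1 + 50e-6) + 0.002 for t in ref]  # 50 ppm fast, 2 ms offset
offset, drift = fit_offset_drift(ref, local)
print(round(offset, 6), round(drift * 1e6, 1))  # recovers ~0.002 s, ~50 ppm
```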

    • A Generic Approach to Making P2P Overlay Network Topology-Aware

      2007, 18(2):381-390.

      Abstract (5285) HTML (0) PDF 645.23 K (7955) Comment (0) Favorites

      Abstract:With the help of distributed hash tables, structured P2P (peer-to-peer) networks have short routing paths and good extensibility. However, the mismatch between the overlay and the physical network becomes an obstacle to building an effective peer-to-peer system in a large-scale environment. In this paper, a generic, protocol-independent approach is proposed to solve this problem. The method is based on swaps of peers: by discovering and performing the potential swaps that improve the match between the overlay and the physical network, it can reduce the average latency and improve the performance of the system. The experimental results show that the approach can greatly reduce the average latency of overlay networks, while the overhead remains controllable. Moreover, combining this approach with protocol-dependent ones can further improve performance.

    • A Large-Scale Live Video Streaming System Based on P2P Networks

      2007, 18(2):391-399.

      Abstract (5074) HTML (0) PDF 562.83 K (7278) Comment (0) Favorites

      Abstract:A P2P (peer-to-peer) network based large-scale live video streaming system called Gridmedia is presented in this paper. In this system, a gossip-based protocol is adopted to construct an unstructured application-layer overlay. Each peer independently selects its own neighbors and uses a push-pull streaming method to fetch data from them. Compared with the pure pull method of DONet, the push-pull method greatly diminishes the accumulated latency observed at end users and efficiently reduces the control overhead of the streaming system, both of which are evaluated by experiments on PlanetLab. A prototype of Gridmedia was used to broadcast the Spring Festival Gala Evening in 2005 over the global Internet with a 300Kbps video stream, and attracted more than 500,000 users all over the world, with a peak of 15,239 concurrent online users during the event.

    • An Efficient Adaptive Evolvement Protocol for Peer-to-Peer Topologies

      2007, 18(2):400-411.

      Abstract (4213) HTML (0) PDF 750.00 K (5308) Comment (0) Favorites

      Abstract:Current unstructured peer-to-peer (P2P) systems lack fair topology structures and do not account for malicious behavior of peers, mainly because the topologies are not sensitive to peers' trust and do not consider trust computation across different domains. First, a domain-based P2P trust model is presented in this paper. Then, based on the trust model, a peer-level protocol for forming adaptive topologies for unstructured P2P networks is proposed. The protocol aims at the topology evolution of the embodied domains, placing good peers in favorable positions and bad peers in unfavorable ones within their domains, which guarantees the fairness of the topology. The protocol can also restrain malicious behavior of peers effectively, and has incentive capability, encouraging peers to provide more authentic services in order to receive better service in return. Analysis and simulations show that, compared with current topologies, the resulting topologies are more efficient and more robust against security problems.

    • A Coordinated Worm Detection Method Based on Local Nets

      2007, 18(2):412-421.

      Abstract (4529) HTML (0) PDF 815.60 K (6140) Comment (0) Favorites

      Abstract:Several global worm detection methods exist, but they do not apply to local nets. A new cooperative approach to automatic detection of worms using local nets is presented in this paper, called CWDMLN (coordinated worm detection method based on local nets). The approach focuses on the characteristics of scanning worms in local nets and uses different methods to cope with different worm behaviors, including honeypots to deceive worms. CWDMLN coordinates these methods to issue graded alarms signaling worm attacks, where the grades reflect the reliability of the alarms. Experimental results show that this approach is promising, as it can quickly find worm intrusions in local nets and extract signatures of unknown worms, which can be used by an IDS (intrusion detection system) or a firewall to prevent further worm threats. By scaling up, this method can also contribute to global worm alarming.

    • Provable Secure Encrypted Key Exchange Protocol Under Standard Model

      2007, 18(2):422-429.

      Abstract (5280) HTML (0) PDF 528.31 K (5680) Comment (0) Favorites

      Abstract:The goal of an encrypted key exchange protocol is to establish a highly secure key, used for further encryption and authentication, from a low-security password. Most existing encrypted key exchange protocols either lack security proofs or rely on the random oracle model. Compared with protocols based on the random oracle model, provably secure EKE (encrypted key exchange) protocols have a heavier computation burden and more complex descriptions, although they do not need the random oracle model. By introducing a server public key and applying the ElGamal encryption scheme and a pseudorandom function ensemble, a provably secure encrypted key exchange protocol is derived from the protocol proposed by David P. Jablon in the paper "Extended Password Key Exchange Protocols Immune to Dictionary Attacks", and a proof is presented. Compared with the original protocol, this protocol needs only the DDH (decisional Diffie-Hellman) assumption rather than ideal encryption and the random oracle model. Compared with other provably secure encrypted key exchange protocols, because this protocol does not need a CCA2 (chosen ciphertext attack-2) secure public-key encryption scheme, it reduces the number of exponentiations and greatly simplifies the protocol's description. Specifically, it requires 73% fewer exponentiations than the KOY protocol, and 55% fewer than the protocol proposed by Jiang Shao-Quan et al. in the paper "Password Based Key Exchange with Mutual Authentication".

    • Review Articles
    • Infinite Interpolation on Triangles

      2007, 18(2):430-441.

      Abstract (6915) HTML (0) PDF 773.02 K (8061) Comment (0) Favorites

      Abstract:Constructing a triangular surface that interpolates the boundary curves and cross-boundary slopes on a triangle is a basic problem in computer aided geometric design, computer graphics, and related fields. This problem is called infinite interpolation on triangles. In this paper, a survey of the existing methods for constructing infinite interpolation surfaces on triangles is given, and the methods are compared using examples. Open problems in the existing methods of infinite interpolation on triangles are discussed.

    • A Fitting Algorithm of Subdivision Surfaces from Noisy and Dense Triangular Meshes

      2007, 18(2):442-452.

      Abstract (8827) HTML (0) PDF 742.05 K (13353) Comment (0) Favorites

      Abstract:A fitting system is developed to fit subdivision surfaces with sharp features from noisy, dense triangular meshes of arbitrary topology. The system includes an improved mesh denoising method based on bilateral filtering of images, sharp feature extraction, feature-preserving mesh simplification, and topological optimization. An estimation method for Loop subdivision surfaces is introduced to predict how many subdivision iterations are necessary to meet a user-defined tolerance. An adaptive subdivision method is proposed for the fitting process to handle locally detailed surface features. Both experimental results and practical applications in engineering demonstrate that the system can effectively achieve a good-quality fitting subdivision surface with nice details while using few facets in the approximation.

    • Robust Estimator for 3D Meshes Filtering

      2007, 18(2):453-460.

      Abstract (4455) HTML (0) PDF 507.80 K (5053) Comment (0) Favorites

      Abstract:In this paper, the link between the least-squares (LS) estimator and the Laplacian smoothing method is first disclosed from the point of view of the LS method. Furthermore, an M-estimator for mesh filtering is presented. Finally, the M-estimator is extended to a re-weighted M-estimator that removes noise while efficiently preserving surface features. The re-weighted M-estimator is bilateral filtering in nature.

    • Hyperspectral Image Compression Using Three-Dimensional Wavelet Embedded Zeroblock Coding

      2007, 18(2):461-468.

      Abstract (4464) HTML (0) PDF 634.62 K (5512) Comment (0) Favorites

      Abstract:As 3D images, hyperspectral images result in large data sets, and the storage and transmission of large volumes of hyperspectral data have become significant concerns; efficient compression is therefore required. In this paper, a new hyperspectral remote sensing image compression method based on an asymmetric 3D wavelet transform and a 3D set partitioning scheme is proposed. Because most hyperspectral images have asymmetric statistical properties across directions, an efficient asymmetric 3D wavelet transform (3DWT) is used to reduce redundancies in both the spectral and spatial dimensions. Compared with the traditional symmetric 3D wavelet transform, the asymmetric transform removes the correlation between adjacent bands more efficiently. A modified 3DSPECK (3D set partitioning embedded block) algorithm, AT-3DSPECK (asymmetric transform 3DSPECK), is proposed to encode the transformed coefficients. According to the energy distribution of the transformed coefficients, the 3D zeroblock partitioning algorithm and the 3D octave band partitioning scheme are efficiently combined in AT-3DSPECK. To accelerate coding and optimize the rate-distortion performance of the embedded bit stream, a fast algorithm for optimal zeroblock sorting is given. Experimental results show that the proposed algorithm outperforms AT-3DSPIHT (asymmetric transform 3D set partitioning in hierarchical trees) and 3DSPECK by 0.4 dB and 1.4 dB in average PSNR (peak signal to noise ratio), respectively. Compared with popular zerotree approaches, AT-3DSPECK also codes faster.

    • A Wavelet-Based Facial Ageing Synthesis Method

      2007, 18(2):469-476.

      Abstract (5490) HTML (0) PDF 540.50 K (5669) Comment (0) Favorites

      Abstract:A facial ageing synthesis (rendering) method based on the combination of wavelet transform and texture transplant is proposed in this paper. First, a 2D discrete wavelet transform (2D DWT) is performed on ageing templates to extract the high-frequency sub-images and high-pass filtered low-frequency sub-images, which contain the texture characteristics of ageing skin. The corresponding sub-images of the target facial image are then replaced and fused with them, and the transplant of ageing texture onto the target face is achieved through wavelet reconstruction. In addition, the average variation in facial shape between the young and old populations is extracted and imposed onto the target face to enhance the ageing effect. Combined with a color rendering technique, an integrated framework for photorealistic facial ageing rendering is designed and implemented. In the experiments, the proposed framework is applied to oriental faces, western faces, and art painting images, and the results show realistic and impressive effects. Compared with the PCA (principal components analysis) based method, the 3D morphable model, and the ratio image method, this method offers a solution to the trade-off between photorealistic effect and ease of operation in facial ageing rendering.


Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • Serial number: ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 yuan
Copyright: Institute of Software, Chinese Academy of Sciences
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing 100190