• Volume 19, Issue 4, 2008 Table of Contents
    • Review Articles
    • Conceptual Model and Realization Methods of Autonomic Computing

      2008, 19(4):779-802.

      Abstract (9528) HTML (0) PDF 1.01 M (11148) Comment (0) Favorites

      Abstract:Autonomic computing is an emerging research area that aims to hide system management complexity from human users by means of "technologies managing technologies", establishing guidable, state-aware, and self-adaptive computer systems. Research on autonomic computing is still in its infancy, without systematic and mature theories. After clarifying the definition of autonomic computing, this paper proposes a conceptual model that describes the basic working mechanisms and principles of autonomic elements and autonomic computing systems. On the basis of this model, two categories of autonomic computing systems, based on knowledge models and mathematical models respectively, are summarized and analyzed together with their advantages and disadvantages. Finally, future research directions for autonomic computing are proposed.
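Though the conceptual model is described only informally here, the control-loop idea behind an autonomic element can be sketched. The rule, names, and thresholds below are purely illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of an autonomic element's monitor-analyze-plan-execute
# style loop: compare the sensed state against a goal and emit a
# self-adaptation action (a knowledge-model style rule).
# All names and thresholds are illustrative.

def autonomic_step(sensed_load, capacity, target_utilization=0.7):
    """One control-loop iteration of a hypothetical autonomic element."""
    utilization = sensed_load / capacity
    if utilization > target_utilization:
        return "scale_out"      # acquire a managed resource
    if utilization < target_utilization / 2:
        return "scale_in"       # release a managed resource
    return "no_op"              # system already in the desired state

print(autonomic_step(sensed_load=90, capacity=100))  # → scale_out
```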

    • Pseudosphere Filter and Edge Detection

      2008, 19(4):803-816.

      Abstract (5094) HTML (0) PDF 884.93 K (6165) Comment (0) Favorites

      Abstract:An image filter called the pseudosphere filter is presented in this paper. Besides a scale parameter, the pseudosphere filter introduces an edge-preserving parameter, so a better trade-off between image smoothing and edge localization can be obtained. A pseudosphere-based edge detector is built by replacing the Gaussian filter in the classic Canny edge detector with the pseudosphere filter. Experimental results show that, at the same level of smoothing, the pseudosphere-based edge detector locates edges more precisely than the classic Canny edge detector.
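The detector's structure, swapping the smoothing stage of a Canny-style pipeline while keeping the gradient stage, can be sketched as follows. Since the pseudosphere kernel itself is not given in this abstract, a simple box filter stands in for it; everything here is an illustrative assumption:

```python
# Minimal sketch: a Canny-style detector where the Gaussian smoother is
# swapped for another filter. A box mean stands in for the (unspecified)
# pseudosphere filter; only the "replace the smoother" structure matters.

def smooth(img, k=1):
    """Placeholder for the pseudosphere filter: a (2k+1)x(2k+1) box mean
    with clamped borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def gradient_magnitude(img):
    """Central-difference gradient magnitude, as in the Canny pipeline."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: after smoothing, the gradient peaks on the edge.
step = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mag = gradient_magnitude(smooth(step))
```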

    • A New Event Detection Model Based on Term Reweighting

      2008, 19(4):817-828.

      Abstract (4466) HTML (0) PDF 737.02 K (6197) Comment (0) Favorites

      Abstract:New event detection (NED) aims to detect, from one or more streams of news stories, those reporting a new event (i.e., one not reported previously). Preliminary experiments show that terms of different types (e.g., nouns and verbs) have different effects for different classes of stories when determining whether two stories are on the same topic. Unfortunately, conventional approaches usually ignore this fact. This paper proposes a NED model with two term-reweighting approaches to address the problem. In the first approach, statistics on training data are used to learn a model for each class of stories; in the second, term weights are adjusted dynamically based on previous story clusters. Experimental results on two Linguistic Data Consortium (LDC) data sets, TDT2 and TDT3, show that both approaches effectively improve NED performance compared with the baseline and existing methods.
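The core idea of term-type reweighting, scoring story similarity with weights that depend on whether a term is a noun, a verb, and so on, might look like the sketch below. The type weights and toy stories are hypothetical, not the paper's learned values:

```python
# Illustrative sketch: cosine similarity between stories where term
# counts are reweighted by term type before comparison. A story is
# flagged as a new event if its best match falls below a threshold.
# Weights and the tiny "stories" are hypothetical.

import math

def weighted_cosine(story_a, story_b, type_weight):
    """story_*: dict term -> (count, term_type)."""
    def vec(story):
        return {t: c * type_weight.get(tt, 1.0) for t, (c, tt) in story.items()}
    va, vb = vec(story_a), vec(story_b)
    dot = sum(va[t] * vb[t] for t in va if t in vb)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

a = {"earthquake": (2, "noun"), "strike": (1, "verb")}
b = {"earthquake": (1, "noun"), "rebuild": (1, "verb")}
sim = weighted_cosine(a, b, {"noun": 2.0, "verb": 0.5})
is_new_event = sim < 0.5   # high similarity: same topic, not a new event
```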

    • Binary Alpha-Plane Assisted Fast Motion Estimation Scheme of Video Object

      2008, 19(4):829-841.

      Abstract (4445) HTML (0) PDF 778.81 K (5968) Comment (0) Favorites

      Abstract:This paper proposes a fast motion estimation scheme for arbitrarily shaped video objects. The instructive role of the alpha-plane in the motion estimation of a video object is discussed, and a weighted binary alpha-plane matching criterion (WBAMC) is proposed using boundary extension and boundary mask techniques. Based on a priority search strategy, this paper proposes a fast binary alpha-plane assisted motion estimation (BAAME) scheme for video objects. First, the BAAME uses the alpha-plane and the WBAMC criterion to limit the search for boundary macro-blocks (MBs) to small unimodal areas around two starting points, so that a conventional fast motion estimation algorithm can be employed to search the motion vectors (MVs) of boundary MBs. Second, the BAAME predicts the starting points of opaque MBs from the MVs of neighboring boundary MBs and then employs a fast motion estimation algorithm to compute the MVs of opaque MBs. The proposed scheme can be combined with many different spatial-domain and frequency-domain motion estimation algorithms and can be effectively applied to object-based video codecs. The experimental results show that the BAAME scheme consistently reaches high motion estimation accuracy and better subjective quality for standard test video sequences with different characteristics. The proposed scheme achieves 0.1~0.8 dB higher prediction quality on average than DS (diamond search) and PSA (priority search algorithm, BAAS (binary alpha-plane assisted search)+DS), and a slightly lower PSNR (peak signal-to-noise ratio) than FS (full search). Moreover, the BAAME scheme speeds up motion estimation by about 20 times in terms of computational complexity compared with FS.
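One plausible form of a weighted binary alpha-plane matching criterion, sketched under the assumption that only object pixels contribute and that boundary pixels are emphasized (the paper's exact weighting is not reproduced here):

```python
# Hedged sketch of an alpha-plane-weighted block-matching cost: pixels
# outside the object (alpha = 0) are ignored, and pixels on the object
# boundary get a larger weight. The weight value is illustrative.

def wbamc(cur, ref, alpha, weight_boundary=2.0):
    """cur/ref: equal-size blocks; alpha: binary mask of cur. A pixel is
    'boundary' if any 4-neighbour inside the block is outside the object."""
    h, w = len(cur), len(cur[0])
    def is_boundary(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and alpha[ny][nx] == 0:
                return True
        return False
    cost = 0.0
    for y in range(h):
        for x in range(w):
            if alpha[y][x]:
                wgt = weight_boundary if is_boundary(y, x) else 1.0
                cost += wgt * abs(cur[y][x] - ref[y][x])
    return cost

cur = [[10, 10], [10, 10]]
alpha = [[1, 1], [1, 0]]   # bottom-right pixel lies outside the object
print(wbamc(cur, [[10, 20], [10, 10]], alpha))  # → 20.0 (boundary pixel, weight 2)
```

A motion search would evaluate this cost for each candidate block and pick the displacement with the lowest value.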

    • Feature Selection Based on Classes Margin

      2008, 19(4):842-850.

      Abstract (5105) HTML (0) PDF 483.37 K (9779) Comment (0) Favorites

      Abstract:First, a distinguishability condition is proposed for separating features with a linear classification hypersurface. Second, the paper analyzes the properties of the linear feature-distinguishability criterion based on support vector machines (SVMs). Finally, the efficiency rate of each feature is defined by its contribution to the class margin, and a feature selection algorithm based on the feature efficiency rate is put forward. Experimental results, validated on actually measured data and UCI (University of California, Irvine) data sets, show that the new feature selection method clearly improves classification ability and generalization ability compared with the classical Relief method.
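Absent the paper's exact formulas, one hedged reading of "efficiency rate defined by contribution to the class margin" is a per-feature margin proxy, used to rank features and keep the top scorers:

```python
# Illustrative sketch: score each feature by the gap between class means
# relative to the in-class spread along that feature, then keep the
# top-k features. This proxy is an assumption, not the paper's formula.

def margin_score(pos, neg, i):
    """Per-feature margin proxy: |mean gap| / (spread_pos + spread_neg)."""
    mp = sum(x[i] for x in pos) / len(pos)
    mn = sum(x[i] for x in neg) / len(neg)
    sp = max(abs(x[i] - mp) for x in pos)
    sn = max(abs(x[i] - mn) for x in neg)
    return abs(mp - mn) / (sp + sn + 1e-9)

def select_features(pos, neg, k):
    """Rank all features by margin_score and keep the k best."""
    d = len(pos[0])
    scores = [(margin_score(pos, neg, i), i) for i in range(d)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

# Feature 0 separates the classes cleanly; feature 1 is noise.
pos = [[5.0, 0.1], [5.2, 0.9]]
neg = [[1.0, 0.8], [1.2, 0.2]]
print(select_features(pos, neg, 1))  # → [0]
```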

    • Image-Analogies Based Super Resolution

      2008, 19(4):851-860.

      Abstract (5176) HTML (0) PDF 1.41 M (6781) Comment (0) Favorites

      Abstract:Most super resolution methods so far enhance images by adding exterior information extracted from a given training set. However, this is impractical in many cases. From analysis of an ideal edge model and the texture content within images, it is found that many images have similar local structure at different resolutions and preserve it stably in scale space. Based on this property, Image Analogies can be applied to transfer local structure across resolutions and thus achieve resolution enhancement. The original image and its lower-resolution version are used to construct the training set, fitting the problem to Image Analogies, which is then solved by minimizing an energy function on a graph. Experimental results show that this self-analogies algorithm magnifies images much more sharply than traditional interpolation-like methods and, more importantly, runs independently without any external training set.

    • Cluster Analysis Based on Fuzzy Quotient Space

      2008, 19(4):861-868.

      Abstract (4692) HTML (0) PDF 442.03 K (6771) Comment (0) Favorites

      Abstract:In this paper, on the basis of fuzzy quotient space theory, cluster analysis methods based on fuzzy similarity relations and normalized distance are proposed to analyze the data structure of complex systems. Three conclusions are given: (1) a strict theoretical description of cluster analysis, obtained by introducing the hierarchical structures of fuzzy similarity relations and normalized distance; (2) effective and fast clustering algorithms for these hierarchical structures; (3) a sufficient condition for hierarchical structures to be isomorphic. These conclusions apply to the data structure analysis of any complex system based on similarity relations.
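The hierarchy induced by a fuzzy similarity relation can be illustrated concretely: cutting the relation at a threshold λ yields a partition, and decreasing λ yields coarser, nested partitions (the quotient-space hierarchy). The similarity matrix below is illustrative:

```python
# Sketch: the partition at level λ is the set of connected components of
# the graph with edges {(i, j) : sim[i][j] >= λ}; this matches the λ-cut
# of the max-min transitive closure of the fuzzy similarity relation.

def clusters_at(sim, lam):
    """Connected components via union-find over the thresholded relation."""
    n = len(sim)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j] >= lam:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

sim = [[1.0, 0.9, 0.2],
       [0.9, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
print(clusters_at(sim, 0.8))   # → [[0, 1], [2]]
print(clusters_at(sim, 0.25))  # → [[0, 1, 2]]
```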

    • An Answer Set Programming System with Cycle Breaking Heuristic

      2008, 19(4):869-878.

      Abstract (6023) HTML (0) PDF 594.14 K (5841) Comment (0) Favorites

      Abstract:Answer set programming (ASP) is a logic programming paradigm under the answer set semantics, which can be utilized in non-monotonic reasoning, declarative problem solving, and related fields. This paper proposes and implements a cycle breaking heuristic and a bottom-restricted look-ahead procedure for ASP; the resulting system is called LPS. The experimental results show that, relative to other state-of-the-art ASP systems, LPS efficiently solves logic programs in phase-transition hard regions, which are generally considered difficult to compute. In addition, by applying the so-called dynamic variable filtering (DVF) technique, LPS greatly reduces the search tree size during computation.

    • Review Articles
    • Security Analysis on Node Localization Systems of Wireless Sensor Networks

      2008, 19(4):879-887.

      Abstract (7781) HTML (0) PDF 498.92 K (10376) Comment (0) Favorites

      Abstract:Correct node position information is the prerequisite and foundation of many sensor network modules, such as network building and maintenance, monitored-event localization, and target tracking. The node localization process is vulnerable to diverse attacks. In resource-constrained sensor networks, securely and effectively determining sensors' coordinates is one of the most challenging security problems. This paper presents various attacks against different node localization systems and analyzes the principles, characteristics, and limitations of recent representative secure localization countermeasures. Finally, future research directions are summarized.

    • A Survey of Proximity Graphs in Wireless Networks

      2008, 19(4):888-911.

      Abstract (9058) HTML (0) PDF 2.09 M (12665) Comment (0) Favorites

      Abstract:A network topology can be represented by a proximity graph, defined as a graph with a vertex set V and an edge set E such that a directed edge (u,v) belongs to E if and only if the point v is in the neighborhood of point u induced by some predefined proximity measure. This paper reviews some important proximity graphs obtained so far, concentrating on five aspects: their definitions or conceptions, construction algorithms, illustrations, topological relationships, and related parameters. This paper also outlines several further research directions.
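The definition above becomes concrete with one simple proximity measure, the disk neighborhood: a directed edge (u,v) exists iff v lies within radius r of u. The other proximity graphs surveyed (Gabriel graph, relative neighborhood graph, and so on) change only the neighborhood test. A minimal sketch:

```python
# Sketch: build a disk proximity graph over 2-D points. Swapping the
# distance test for another neighborhood predicate yields other members
# of the proximity-graph family.

import math

def disk_graph(points, r):
    """Directed edges (u, v) with dist(u, v) <= r and u != v."""
    edges = []
    for u, pu in enumerate(points):
        for v, pv in enumerate(points):
            if u != v and math.dist(pu, pv) <= r:
                edges.append((u, v))
    return edges

pts = [(0, 0), (1, 0), (5, 0)]
print(disk_graph(pts, 1.5))  # → [(0, 1), (1, 0)]
```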

    • Research on Blog

      2008, 19(4):912-924.

      Abstract (8458) HTML (0) PDF 613.84 K (10775) Comment (0) Favorites

      Abstract:The population of bloggers and the amount of information in the blogosphere are growing fast. Through frequent links and information interaction, blogs have formed a dynamic and tightly connected social network, and have become an important source of information about the real world. Most research on blogs concentrates on blog definition and identification, content mining, community discovery, importance analysis, blog search, and spam blog identification. Methods and technologies from link analysis and natural language processing are used in most of this work, and some blog-specific methods have been proposed. This paper analyzes and compares these researches on the blogosphere; problems of the current topics are discussed, and future directions are proposed.

    • Scalable and Reliable Group Membership Management with Bi-Directional Tokens

      2008, 19(4):925-945.

      Abstract (3881) HTML (0) PDF 1.26 M (5675) Comment (0) Favorites

      Abstract:Group communications have been studied in the wired Internet for many years and remain a very hot research topic, especially on extending the existing achievements into mobile and wireless network environment. This paper identifies some interesting problems in mobile group communications regarding dynamic group membership due to member joining and leaving, dynamic locations due to node mobility, and dynamic networks due to node/link failures. These problems are solved by proposing a ring-based hierarchy of proxies with bi-directional tokens. The proposed hierarchy is a combination of logical rings and logical trees, which takes advantage of the simplicity of logical rings and the scalability of logical trees. More importantly, such a combination makes the proposed protocol based on this hierarchy more reliable than the existing tree-based protocols. Theoretical analysis and simulation studies of the proposed protocol show that: (1) It scales very well when the size of a group becomes large; (2) It is strongly resilient to failures that occur in the network. It is particularly suitable for those service providers and network operators which have deployed their machines in a hierarchical setting, where each machine can be locally configured to know the information about its sibling and parent machines.

    • A Distributed Trust Mechanism Directly Evaluating Reputation of Nodes

      2008, 19(4):946-955.

      Abstract (4669) HTML (0) PDF 587.37 K (6201) Comment (0) Favorites

      Abstract:Reputation-based trust mechanisms can efficiently solve the problems of virus flooding and malicious behavior in P2P networks. Most existing trust mechanisms use a single reputation value to describe a node's reliability, which cannot prevent malicious nodes from concealing dishonest selling behavior with honest buying behavior and cannot distinguish new nodes from malicious ones. This paper provides a new distributed trust mechanism, which iteratively calculates for each node a global seller reputation value and a global buyer reputation value based on transaction history; whether a node is trustworthy can be identified from these values. Comparison experiments and performance analysis against EigenTrust show that the mechanism can rapidly reduce the global reputation values of malicious nodes, restrain collusion attacks, and decrease the probability of malicious transactions.
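The mechanism's key idea, separate and iteratively computed seller and buyer reputations so that honest buying cannot mask dishonest selling, can be sketched as below. The update rule and transaction data are illustrative, not the paper's:

```python
# Hedged sketch: each node gets a seller score and a buyer score. A
# seller rating is weighted by the rater's buyer reputation (and vice
# versa), then normalized, EigenTrust-style iteration with two values.

def dual_reputation(sales, purchases, n, iters=20):
    """sales[i]: list of (rater j, score in [0,1]) for node i as seller;
    purchases[i]: same for node i as buyer."""
    seller = [1.0 / n] * n
    buyer = [1.0 / n] * n
    for _ in range(iters):
        new_s = [sum(buyer[j] * s for j, s in sales[i]) for i in range(n)]
        new_b = [sum(seller[j] * s for j, s in purchases[i]) for i in range(n)]
        zs, zb = sum(new_s) or 1.0, sum(new_b) or 1.0
        seller = [x / zs for x in new_s]
        buyer = [x / zb for x in new_b]
    return seller, buyer

# Node 2 buys honestly but sells dishonestly (low seller ratings):
sales = [[(1, 1.0), (2, 0.9)], [(0, 1.0), (2, 0.8)], [(0, 0.1), (1, 0.0)]]
purchases = [[(1, 0.9)], [(0, 1.0), (2, 0.9)], [(0, 1.0), (1, 1.0)]]
seller, buyer = dual_reputation(sales, purchases, 3)
assert seller[2] < min(seller[0], seller[1])   # dishonest selling exposed
```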

    • Optimal Placement and Replacement Scheme in En-Route Transcoding Caching Systems

      2008, 19(4):956-966.

      Abstract (3978) HTML (0) PDF 630.23 K (5965) Comment (0) Favorites

      Abstract:This paper investigates cache routing and cache management problems for en-route transcoding caching systems. An active cache routing algorithm called CCRA (cost-aware cache routing algorithm) is designed, which, using a controllable probing load, can find the potential cache objects with minimal access cost. Then an analyzable model for en-route transcoding caching is established, with which the cooperative cache placement and replacement problem is formulated as an optimization problem, and the optimal locations to cache the object are obtained with a dynamic programming algorithm. Simulation results show that the proposed scheme outperforms existing placement algorithms on the metric of cost save ratio (CSR).
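The placement side of the problem can be illustrated on a toy en-route path. The sketch below brute-forces the copy locations for clarity, whereas the paper obtains the optimum with dynamic programming over a richer cost model (including transcoded versions); the rates and costs here are made up:

```python
# Illustrative sketch: choose where along the server→client path to
# cache copies so that total access cost plus per-copy caching cost is
# minimized. Exhaustive search on a short path; the paper uses DP.

from itertools import combinations

def total_cost(rates, copies, cache_cost):
    """copies: cache locations (the server at index 0 is an implicit
    source). Each node fetches from its nearest upstream copy, paying
    rate * hop distance, plus a fixed cost per cached copy."""
    sources = [0] + list(copies)
    cost = cache_cost * len(copies)
    for i, r in enumerate(rates):
        nearest = max(s for s in sources if s <= i)
        cost += r * (i - nearest)
    return cost

def optimal_placement(rates, cache_cost):
    """Try every subset of candidate locations and keep the cheapest."""
    n = len(rates)
    best = (total_cost(rates, (), cache_cost), ())
    for k in range(1, n):
        for c in combinations(range(1, n), k):
            cost = total_cost(rates, c, cache_cost)
            if cost < best[0]:
                best = (cost, c)
    return best

# Heavy demand at the far end of the path pulls the copy toward it.
print(optimal_placement([0, 1, 1, 5], 3))  # → (6, (3,))
```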

    • Composing Semantic Web Services with Description Logics

      2008, 19(4):967-980.

      Abstract (5043) HTML (0) PDF 779.52 K (8224) Comment (0) Favorites

      Abstract:This paper addresses critical issues in the function-oriented composition of semantic Web services, such as the incapability of classic AI (artificial intelligence) planning methods to handle individuals dynamically created during Web services' executions, and the inadequacy of service-matchmaking-based methods in fully exploiting the semantic connections between the types of services' I/O parameters. Following a comparative study of description logics and dynamic logics, Web services' IOPRs (inputs, outputs, preconditions and results) are encoded as axioms of description logics, and an AI planning method based on dynamic logic is extended so that a service composition problem can be transformed into a reasoning task of description logics. This finally overcomes the difficulties of the classic AI planning method and the defects of the service-matchmaking-based methods.

    • Measuring and Characterizing Topologies of P2P Networks

      2008, 19(4):981-992.

      Abstract (4907) HTML (0) PDF 675.82 K (6688) Comment (0) Favorites

      Abstract:Measuring and characterizing the topological properties of peer-to-peer networks benefits the optimization and management of P2P systems. It seems infeasible to capture a complete and precise snapshot of P2P overlay networks, due to the variety of P2P protocols and the dynamics of the servents. Studying the details of P2P protocols and analyzing specific P2P overlay network instances therefore becomes an alternative route to this goal. In this paper, the measured Gnutella network topology is taken as an example. The architecture and performance of a distributed crawling system with a feedback mechanism (called the D-Crawler system) are presented. The degree-rank distributions and degree-frequency distributions of the measured topology graphs are analyzed in detail, and the small-world characteristics of the Gnutella network are discussed. The results show that the topological characteristics of a P2P overlay network are closely related to the P2P protocols and clients' behaviors. The Gnutella network shows different characteristics in each tier. The top-level graphs fit a power law in the degree-rank distribution but follow a Gaussian function in the degree-frequency distribution. The bottom-level graphs show a weak power law in the degree-rank distribution but a power law in the degree-frequency distribution. Fitting results indicate that a power law fits the degree-rank and degree-frequency distributions of the bottom-level graphs better, while a Gaussian describes the degree-frequency distribution of the top-level graphs. The Gnutella overlay network has the small-world property but is not a scale-free network; its topology may have developed over time following a different set of growth processes from those of the BA (Barabási-Albert) model.
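The power-law fits mentioned above reduce to linear regression in log-log space: a power law degree ≈ C·rank^α appears as a straight line log(degree) = log C + α·log(rank). A sketch on a synthetic (non-Gnutella) degree sequence:

```python
# Sketch: estimate the power-law exponent of a degree-rank distribution
# as the least-squares slope in log-log coordinates. The degree sequence
# below is synthetic, generated from an exact power law for checking.

import math

def loglog_slope(degrees):
    """Least-squares slope of log(degree) vs. log(rank), rank = 1..n."""
    xs = [math.log(r) for r in range(1, len(degrees) + 1)]
    ys = [math.log(d) for d in degrees]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# An exact power law with exponent -0.5 recovers slope ≈ -0.5.
degrees = [100.0 / math.sqrt(r) for r in range(1, 51)]
print(round(loglog_slope(degrees), 3))  # → -0.5
```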

    • Large-Scale Network Intrusion Detection Algorithm Based on Distributed Learning

      2008, 19(4):993-1003.

      Abstract (5323) HTML (0) PDF 598.33 K (7282) Comment (0) Favorites

      Abstract:As Internet bandwidth increases at an exponential rate, it is impossible to keep up with the speed of networks just by increasing the speed of processors. In addition, complex intrusion detection methods further add to the pressure on network intrusion detection system (NIDS) platforms, so the continuously increasing speed and throughput of networks pose new challenges to NIDS. To make NIDS effective in Gigabit Ethernet, the ideal policy is to use a load balancer to split the traffic and forward it to different detection sensors, which can analyze the split data in parallel. If the load balancer is required to make each slice contain all the evidence necessary to detect a specific attack, it has to be designed in a complicated way and becomes a new bottleneck of the NIDS. To simplify the load balancer, this paper puts forward a distributed neural network learning algorithm, with which a large data set can be split randomly and each slice handled by an independent neural network in parallel. The first experiment tests the algorithm's learning ability on the circle-in-the-square benchmark and compares it with ARTMAP (adaptive resonance theory supervised predictive mapping) and BP (back propagation) neural networks; the second experiment is performed on the KDD'99 data set, a standard intrusion detection benchmark. Comparisons with other approaches on the same benchmark show that the algorithm can perform detection at a high detection speed and a low false alarm rate.
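The load-balancing idea can be sketched: because each slice is handled by an independent learner, the splitter may distribute records randomly, and detection combines the learners' votes. The threshold "learners" below are trivial stand-ins for the paper's neural networks, and the data are synthetic:

```python
# Illustrative sketch: random traffic splitting across independent
# learners plus majority-vote detection. Each learner here is just a
# threshold on one feature; real sensors would train neural networks.

import random

def train_threshold(samples):
    """samples: list of (feature_value, is_attack). Midpoint between the
    two class means: a trivial stand-in for one sensor's model."""
    attack = [v for v, y in samples if y]
    normal = [v for v, y in samples if not y]
    return (sum(attack) / len(attack) + sum(normal) / len(normal)) / 2

def distributed_fit(samples, n_sensors, seed=0):
    """Random split: the load balancer needs no per-attack bookkeeping."""
    random.seed(seed)
    slices = [[] for _ in range(n_sensors)]
    for s in samples:
        slices[random.randrange(n_sensors)].append(s)
    return [train_threshold(sl) for sl in slices]

def predict(thresholds, value):
    """Majority vote over the independently trained sensors."""
    votes = sum(value > t for t in thresholds)
    return votes * 2 > len(thresholds)

random.seed(1)
normal = [(random.uniform(0.0, 1.0), False) for _ in range(200)]
attacks = [(random.uniform(2.0, 3.0), True) for _ in range(200)]
thresholds = distributed_fit(normal + attacks, n_sensors=4)
print(predict(thresholds, 2.5), predict(thresholds, 0.5))  # → True False
```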

    • Real-Time Global Illumination Rendering with Dynamic Materials

      2008, 19(4):1004-1015.

      Abstract (6746) HTML (0) PDF 709.40 K (6250) Comment (0) Favorites

      Abstract:All previous algorithms for real-time global illumination rendering based on precomputation assume that the materials are invariant, which makes the transfer from lighting to outgoing radiance a linear transformation. By precomputing this linear transformation, real-time global illumination rendering can be achieved under dynamic lighting. The linearity does not hold when materials change, and so far there are no algorithms applicable to scenes with dynamic materials. This paper introduces a real-time rendering method for scenes with editable materials under direct and indirect illumination. The outgoing radiance is separated into parts according to the number of reflections and the corresponding materials. Each part of the radiance depends linearly on the product of the materials along its path of reflections, so the nonlinear problem is converted into a linear one. All available materials are represented as linear combinations of a basis; the basis is applied to the scene to obtain a number of different material distributions, and for each material distribution all parts of the radiance are precomputed. The precomputed data are then linearly combined, with coefficients given by the materials' representation in the basis, to achieve global illumination effects in real time. The method applies to scenes with fixed geometry, fixed lighting, and a fixed view direction, and materials are represented with bidirectional reflectance distribution functions (BRDFs), which do not take refraction or translucency into account. The implementation simulates up to two bounces of light at real-time frame rates, and some interesting global illumination phenomena, such as color bleeding and caustics, can be achieved.
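The core trick, outgoing radiance as a linear combination of precomputed basis responses with coefficients that are products of the edited material's basis weights, can be sketched with scalar stand-ins for the per-pixel precomputed data (all numbers illustrative):

```python
# Hedged sketch: if every material is a linear combination of basis
# materials, one-bounce radiance is linear in one coefficient and
# two-bounce radiance in a product of two, so editing the material is
# just a re-weighting of precomputed responses, no re-render needed.

def radiance(coeffs, basis_1bounce, basis_2bounce):
    """coeffs: the edited material's weights on the basis materials.
    basis_1bounce[i] / basis_2bounce[i][j]: precomputed responses."""
    n = len(coeffs)
    one = sum(coeffs[i] * basis_1bounce[i] for i in range(n))
    two = sum(coeffs[i] * coeffs[j] * basis_2bounce[i][j]
              for i in range(n) for j in range(n))
    return one + two

# Two basis materials; single numbers stand in for per-pixel data.
b1 = [0.8, 0.3]
b2 = [[0.2, 0.05], [0.05, 0.02]]
print(radiance([1.0, 0.0], b1, b2))  # → 1.0  (pure basis material 0)
```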

    • Variable-Code-Mode-Based Connectivity Compression for Triangular Meshes

      2008, 19(4):1016-1025.

      Abstract (4505) HTML (0) PDF 695.98 K (7087) Comment (0) Favorites

      Abstract:This paper presents an efficient algorithm for encoding the connectivity information of triangular meshes. Previous algorithms directly apply Huffman or arithmetic coding to the operator series; in contrast, this method improves the compression ratio of connectivity information by correctly predicting the operator currently being encoded. In this method, all triangles are first traversed to obtain the operator series. Then an arithmetic coder based on variable code-modes is applied to encode the series. According to the last encoded operator, the properties of the triangular mesh, and the mesh traversal method, a code-mode is calculated for each operator currently being encoded, in which operators with higher predicted probability are given shorter binary strings. Each operator's binary string is then obtained according to its code-mode, and every bit of the string is encoded with an adaptive arithmetic coder. The algorithm is a face-based, single-resolution, lossless compression method for manifold triangular meshes. Testing results show that the compression ratio of the algorithm is very high, even higher than that of the TG algorithm, which is commonly regarded as one of the best in terms of compression ratio.

    • NP-Hardness of Some Polyhedral Mesh Decomposition Problems

      2008, 19(4):1026-1035.

      Abstract (4548) HTML (0) PDF 524.97 K (6395) Comment (0) Favorites

      Abstract:This paper considers the problem of decomposing a polyhedral surface or a polyhedron into simpler components: Monotone patches or terrain polyhedra. It is shown to be NP-complete to decide if a polyhedral surface can be decomposed into k monotone patches, by constructing a geometric model to make a reduction from SAT (satisfiability) problem. And the corresponding optimization problem is shown to be NP-hard. Then, the method is extended to the problems of decomposing a polyhedron with or without holes into the minimum number of terrain polyhedra, both of which are also shown to be NP-hard.

    • Design and Implementation of High Performance Simulation Platform for Switching and Scheduling

      2008, 19(4):1036-1050.

      Abstract (8595) HTML (0) PDF 1009.20 K (6800) Comment (0) Favorites

      Abstract:Simulation has become a significant way to evaluate performance in switching and scheduling; however, existing simulation software has limitations in inheritability and extensibility. Based on the current crossbar switching fabric, and employing system-level design methods and object-oriented technology, a simulation platform called SPES (switching performance evaluation system) for the development of switching fabrics and scheduling policies is designed and implemented. Input queuing, output queuing, combined input-output queuing, combined input-crosspoint queuing, and the corresponding scheduling policies are integrated. Inheritability and extensibility are obtained by designing traffic sources, switching fabrics, and scheduling policies separately, and the platform performs well in supporting simulations of multiple fabrics, variable packet sizes, and QoS models. By configuring the platform through a uniform view, users can set up their concrete simulation environments; besides, the platform can carry out end-to-end performance evaluations with little modification. Finally, this paper presents a simulation case based on a combined input-crosspoint queuing switch, displaying the good performance of SPES.

    • Review Articles
    • Computer Architecture Software-Based Simulation

      2008, 19(4):1051-1068.

      Abstract (8866) HTML (0) PDF 693.29 K (13785) Comment (0) Favorites

      Abstract:Computer architecture simulation is an integral part of modern computer design process. Simulation can help architects reduce the time and cost of computer architecture design dramatically. However, difficulties of constructing simulators, long simulation time, and poor accuracy limit the effectiveness of simulation. Many researchers have proposed a wide range of approaches to resolving the three main problems, which are still open. Meanwhile, new challenges start to emerge for future computer architecture simulation. This paper investigates the history of computer architecture simulation, classifies and compares the existing methodologies and technologies, and analyzes the challenges to help architects select, develop, or research on simulators. A brief description of the simulator SimIPF design is provided as well.

    • A User-Level Simulator for Tiled Chip Multiprocessor

      2008, 19(4):1069-1080.

      Abstract (4585) HTML (0) PDF 570.70 K (6808) Comment (0) Favorites

      Abstract:As transistor resources and the delay of interconnect wires increase, the tiled multi-core processor has become a new direction for multi-core processors. In order to thoroughly study this new type of processor and explore its design space, this paper designs and implements a user-level performance simulator for the tiled CMP (chip multiprocessor) architecture. The simulator adopts a directory-based cache coherence protocol and a store-and-forward network-on-chip architecture, with the Godson-2 CPU as the processing core model, and models out-of-order requests and responses, request conflicts, and their timing characteristics in detail. The simulator can be used to evaluate the important performance features of the tiled CMP architecture by running various sequential or parallel workloads, and thus provides a fast, flexible, and efficient platform for the architectural design of multi-core processors.


Contact Information
  • 《Journal of Software》
  • Sponsored by: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic Price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing 100190; Postal Code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
Technical Support:Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063