• Volume 20, Issue zk, 2009 Table of Contents
    • Register Mapping and Register Function Cutting out Implementation in Binary Translation

      2009, 20(zk):1-7. CSTR:

      Abstract (4947) HTML (0) PDF 526.03 K (6699) Comment (0) Favorites

      Abstract: Migrating binary code efficiently between different ISAs is a difficult problem in binary translation. This paper analyzes the problem from the perspective of register mapping and presents an innovative register mapping method that combines segment mapping with cutting out the functions of special-purpose registers. The method has been implemented in trend, a dynamic binary translation system that translates and executes PowerPC binary code on Alpha. Test results on NPB-serial and SPEC2000 show that the method simplifies instruction translation, reduces code expansion, and markedly improves the execution efficiency of the translated binary code.

    • Parallel Scalability Analysis for Multigrid Solvers in HYPRE

      2009, 20(zk):8-14. CSTR:

      Abstract (4958) HTML (0) PDF 609.89 K (8913) Comment (0) Favorites

      Abstract: HYPRE is a high-performance preconditioner library for solving large sparse linear systems on parallel computers. This paper analyzes the scalability of SMG and BoomerAMG, two multigrid solvers in HYPRE, on a massively parallel computer with thousands of processors. Based on the presented results, conclusions are drawn for designing scalable algorithms and their parallel implementations in real numerical applications.

    • Hardware Thread Accelerating Method Based on CPU/FPGA Hybrid Architecture

      2009, 20(zk):15-22. CSTR:

      Abstract (5316) HTML (0) PDF 591.92 K (6113) Comment (0) Favorites

      Abstract: The CPU/FPGA hybrid architecture is a popular reconfigurable computing architecture. To ease the use of the FPGA, a hardware thread approach is proposed, and a hardware thread execution mechanism is designed to make use of the reconfigurable resources. Software threads and hardware threads can execute in parallel, with computation-intensive tasks assigned to hardware threads and control-intensive tasks assigned to software threads. The Simics simulator is adopted to simulate a hybrid architecture platform, on which software/hardware multithreaded DES, MD5SUM, and MergeSort algorithms are evaluated. The results show an average speedup of 2.30, demonstrating that the approach exploits the performance of the CPU/FPGA hybrid architecture efficiently.

    • A Parallel Optimization Model for Massive Data Stream Application

      2009, 20(zk):23-33. CSTR:

      Abstract (4822) HTML (0) PDF 1.08 M (6278) Comment (0) Favorites

      Abstract: Computing is entering a new phase in which CPU improvements are driven by the addition of multiple cores on a single chip rather than by higher frequencies. Parallel processing on these systems is still in a primitive stage and requires explicit use and knowledge of the underlying thread architecture. Based on the features of massive data stream applications, this paper proposes a three-level pipelined programming model for multithreaded systems, which realizes a new synchronization mechanism with no contention on shared structures and is able to provide differentiated service for data streams. The paper then applies the new model to a remote sensing information processing system and a backbone network intrusion detection system, and evaluates the improved systems on several multicore platforms. The performance analysis evaluates the optimization of the backbone network intrusion detection system in several respects: throughput scalability on both SPARC T1 and x86 platforms, the impact of different thread mapping methods on throughput, and a comparison of response time and service quality before and after optimization. The experimental results show that system throughput scales well on both platforms, response times are greatly improved, and prioritized streams achieve better response time under the differentiated service mechanism.
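
      As background, a minimal sketch of the pipelining idea the abstract describes, using Python's standard threading and queue modules (illustrative only, not the authors' system): each stage owns its input queue, so stages exchange data without contending on a shared structure.

        # Illustrative three-stage pipeline: every stage owns its input queue,
        # so no shared structure is contended; a None item shuts the pipeline down.
        import threading, queue

        def stage(work, src, dst):
            while True:
                item = src.get()
                if item is None:
                    if dst is not None:
                        dst.put(None)
                    break
                out = work(item)
                if dst is not None:
                    dst.put(out)

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        results = []
        workers = [
            threading.Thread(target=stage, args=(str.strip, q1, q2)),                   # stage 1: capture/clean
            threading.Thread(target=stage, args=(str.upper, q2, q3)),                   # stage 2: analyze
            threading.Thread(target=stage, args=(lambda x: results.append(x), q3, None)),  # stage 3: report
        ]
        for t in workers:
            t.start()
        for packet in [" alpha ", " beta "]:
            q1.put(packet)
        q1.put(None)
        for t in workers:
            t.join()
        print(results)   # ['ALPHA', 'BETA']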

    • Feedback Directed Prefetching Optimization for Linked Data Structure

      2009, 20(zk):34-39. CSTR:

      Abstract (3884) HTML (0) PDF 454.90 K (5987) Comment (0) Favorites

      Abstract: Traditional data prefetching based on static compiler analysis mainly targets array accesses. Pointer-based linked data structures now abound in applications, and they are difficult to prefetch with traditional techniques. Feedback-directed data prefetching has become one of the more advanced compiler optimization techniques and handles linked data well. This paper studies profile-guided optimization in the ORC compiler and improves the prefetching algorithm according to features of the Alpha architecture. SPEC2000 performance tests show a 4.1% speedup.

    • SmartShadow: A Model of Pervasive Computing

      2009, 20(zk):40-50. CSTR:

      Abstract (5345) HTML (0) PDF 764.16 K (7166) Comment (0) Favorites

      Abstract: Pervasive computing is an emerging paradigm, yet there is currently little work addressing models of pervasive computing. The goal of this paper is to model a pervasive computing environment as a user-centric “SmartShadow” using the BDP (Belief-Desire-Plan) user model, which maps pervasive computing environments into a dynamic virtual user space. In the BDP model, a user’s desires are inferred from his belief set, and plans are made to satisfy each desire. Pervasive services describe the computing capabilities of the cyberspace and can be organized by the user’s BDP to accomplish his or her desires. A composition process casts pervasive services into a user’s SmartShadow. The SmartShadow then follows the user to provide pervasive services, like one’s shadow in the physical world. The proposed model is logically natural and can flexibly handle the dynamics of pervasive computing spaces. The paper also implements a simulation system to evaluate the SmartShadow model.

    • A Method for Animation Video Abstract

      2009, 20(zk):51-58. CSTR:

      Abstract (3998) HTML (0) PDF 799.22 K (6030) Comment (0) Favorites

      Abstract: In recent years, animation has developed rapidly, and the processing and abstraction of animation video have become a research hotspot. Video abstraction is an important step in animation video research. Based on the distinct properties of animation video, which differ from those of other videos such as news and sports video, a video abstraction method suited to animation video is proposed. The paper analyzes the structure of animation video to obtain its visual characteristics and clear structure, and defines the granularity of video scenes. Based on a content evaluation model of scene significance and scene granularity, the important scenes of the video are identified. Then, in temporal order, a video abstract including a storyboard and a video skim is produced. Experimental results indicate that the proposed method extracts animation video abstracts efficiently, and that the two forms of abstract produced by the method can generalize and condense the animation video effectively.

    • Intelligent File Recommendation Based on Time Access Tracking

      2009, 20(zk):59-65. CSTR:

      Abstract (4092) HTML (0) PDF 647.56 K (5771) Comment (0) Favorites

      Abstract: As the amount of information on today’s computers grows, helping users locate required files in file systems has become an important topic in intelligent interaction research. Past research has mostly concentrated on PIM (personal information management), re-organizing file hierarchies in a way that is more understandable for individual users. However, due to the many extra operations required and the long period needed to re-organize users’ knowledge systems, such applications are rarely adopted by users. Considering that a user usually accesses files with a certain topic or purpose in mind (a user may view several files related to the same topic during the same period), this paper proposes file recommendation based on tracking the user’s file operations. An intelligent file recommendation desktop toolkit (IFRDT) is implemented, which tracks the user’s file access history and recommends the files most related to the file currently being accessed, reducing the time cost of finding desired information. Experimental results show that IFRDT saves more search effort than a plain history list, and users can find over 50% of desired files in IFRDT and open them directly without searching directories.
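
      A minimal sketch of the underlying idea (hypothetical names, not the IFRDT implementation): files opened within the same time window are counted as co-accessed, and the files most often co-accessed with the currently open file are recommended.

        # Count pairs of files accessed within the same time window, then recommend
        # the files most frequently co-accessed with the file currently being viewed.
        from collections import defaultdict

        def co_access_counts(access_log, window=300):
            """access_log: list of (timestamp_in_seconds, path), sorted by time."""
            counts = defaultdict(int)
            recent = []
            for ts, path in access_log:
                recent = [(t, p) for t, p in recent if ts - t <= window]
                for _, other in recent:
                    if other != path:
                        counts[frozenset((path, other))] += 1
                recent.append((ts, path))
            return counts

        def recommend(current_file, counts, k=3):
            scored = [(next(iter(pair - {current_file})), n)
                      for pair, n in counts.items() if current_file in pair]
            return [f for f, _ in sorted(scored, key=lambda x: -x[1])[:k]]

        log = [(0, "a.doc"), (60, "b.xls"), (120, "a.doc"), (400, "c.pdf"), (420, "a.doc")]
        print(recommend("a.doc", co_access_counts(log)))   # ['b.xls', 'c.pdf']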

    • Research on Knowledge Reuse Dynamic Evolvement Model Based on Multi-Agent System and Component

      2009, 20(zk):66-75. CSTR:

      Abstract (4128) HTML (0) PDF 750.90 K (5518) Comment (0) Favorites

      Abstract: Aiming at quick and proper reuse of existing knowledge in enterprises, a new idea is advanced that separates the business logic of knowledge management from the transactions of the knowledge process. Furthermore, an agile knowledge reuse model based on multi-agent technology and knowledge services is constructed, and an agile knowledge reuse system is defined. To realize the agile transfer of knowledge services, a rule-coordination mode based on multiple agents is established through study of the rule model of business logic and the activity-action model of the software agent. In this way, the dynamic reuse of knowledge and the dynamic regrouping of the knowledge reuse process can be supported effectively, and the process-distribution capability and extensibility of the knowledge management system can be increased. The agents perform retrieval requests from users via collaboration in a distributed component repository system. Each agent has its own knowledge base and needs the capability to learn new information and update that knowledge base to keep retrieval results effective. Finally, a case study illustrates the application of the model.

    • Perceptual-Control-Based Agent Architecture Model

      2009, 20(zk):76-83. CSTR:

      Abstract (3622) HTML (0) PDF 636.55 K (5371) Comment (0) Favorites

      Abstract: This paper presents a perceptual-control-based agent architecture that enables the construction of interactive software systems with high usability. With this architecture, usability requirements can be modeled by adding new architecture levels, so that different usability features remain orthogonal. The architecture adopts perceptual control theory to match the nonlinear relationship between the user interface and the application core. Compared with other architectures, systems adopting this architecture can present user-interface elements at the user-task level and permit users to operate the system at the user-task level; users can therefore complete their tasks and reach their goals naturally. Furthermore, systems adopting the architecture can easily extend their usability dynamically.

    • Adaptive Group Navigation in Ubiquitous Computing Environments

      2009, 20(zk):94-94. CSTR:

      Abstract (4437) HTML (0) PDF 840.12 K (6128) Comment (0) Favorites

      Abstract: This paper explores adaptive group navigation in ubiquitous computing environments. First, it proposes the classification and definition of four categories of user groups and introduces the notions of group navigation and group experience. Second, it analyzes the quantitative evaluation of group experience and establishes a uniform evaluation model suitable for all kinds of user groups. Third, it puts forward an adaptive group navigation technique aimed at enhancing group experience. In particular, the paper deals in depth with two basic techniques of adaptive group navigation: group modeling, which contributes to the direct group experience, and context-aware intra-group interaction, which contributes to the indirect group experience. Finally, the paper evaluates the effectiveness of the proposed group navigation technique.

    • Similarity Based Ranging Method in Wireless Sensor Networks

      2009, 20(zk):95-103. CSTR:

      Abstract (5047) HTML (0) PDF 788.10 K (6373) Comment (0) Favorites

      Abstract: Sensor localization is used by many position-dependent applications in wireless sensor networks (WSNs), and ranging from sensor nodes to beacon nodes plays a fundamental role in it. Most state-of-the-art ranging methods rely on many assumptions about deployment and measurement; however, these assumptions do not hold in practice, so existing methods introduce ranging errors too large for real applications. To obtain more accurate distance estimation, this paper proposes a new metric, round-route node correlation, to describe the bending of paths in a WSN, and then proposes a method to identify turning nodes along paths. By comparing the similarities between paths, adjustment algorithms based on these similarities are proposed. Simulation results show that the proposed method outperforms PDM and DV-distance, especially when beacon nodes are not deployed uniformly.

    • Temporal Data Visualization Technique and Tool

      2009, 20(zk):104-112. CSTR:

      Abstract (4501) HTML (0) PDF 742.69 K (9022) Comment (0) Favorites

      Abstract: Temporal data are widely used in many fields. One of the prominent time-series analysis techniques is visualization, which can improve users’ ability to recognize and analyze information. However, users tend to fail when analyzing a long time series with existing approaches. This paper presents an approach named FisheyeLines. The visualization technique provides good overviews of large-scale information and details of focus objects; meanwhile, correlations between complex information and object properties are easy to grasp. The paper also presents a tool named FisheyeLinesVis for developing temporal data visualization applications. A user study shows that the approach is efficient and easy to use.

    • An Adaptive Software Architecture Style for Pervasive Computing

      2009, 20(zk):113-122. CSTR:

      Abstract (4508) HTML (0) PDF 872.37 K (6577) Comment (0) Favorites

      Abstract: Pervasive computing software has to adapt itself to dynamically changing execution environments and user requirements. This feature complicates software implementation significantly, which makes it necessary to adopt design-level software reuse, such as a software architecture style, in its development. Based on an adaptive abstract model of the pervasive computing space, this paper proposes a software architecture style for pervasive computing, UbiArch, and details its concept view, runtime view, and development view. UbiArch supports a novel behavior pattern of software entities: dynamically joining applications according to user requirements and actively adapting to the execution environment. As a result, architecture-level reuse of software adaptability can be achieved. Moreover, this architecture style is based on mature software techniques, such as component technology, which ensures its practicability. A software platform supporting this architecture, as well as several UbiArch-based applications, has been developed to validate the effectiveness and generality of UbiArch.

    • Hybrid Graph Model with Two Layers for Personalized Recommendation

      2009, 20(zk):123-130. CSTR:

      Abstract (4889) HTML (0) PDF 530.05 K (6212) Comment (0) Favorites

      Abstract: A hybrid graph model for personalized recommendation, based on a small-world network and a Bayesian network, is presented. Small-world networks have good clustering properties, and Bayesian networks are well suited to probabilistic inference. The hybrid graph model consists of two layers: a user layer representing users or customers, and a merchandise layer representing goods or products. The small-world network describes the relationships among user nodes in the lower layer, while the implications among merchandise nodes in the higher layer are represented by the Bayesian network. Directed arcs denote the tendencies between nodes of the user layer and the merchandise layer. The paper also introduces algorithms for clustering based on the small-world network, for structure learning and parameter learning of the Bayesian network, and for recommendation based on the model. Experiments show that the model can represent user-to-user, merchandise-to-merchandise, and user-to-merchandise relationships, and that the hybrid graph model performs well in personalized recommendation.

    • QFEC: A Real-Time Scheduling Algorithm Based on Stream Media

      2009, 20(zk):131-137. CSTR:

      Abstract (4222) HTML (0) PDF 490.71 K (5398) Comment (0) Favorites

      Abstract: With the development of the network, video applications on the Internet are growing rapidly and real-time video applications are becoming popular. Because of the complexity of the network and of real-time streaming video on demand, the scheduling algorithm has a great influence on QoS. This paper proposes a scheduling algorithm for real-time streaming video, QFEC (QoS based on FEC), which combines FEC with Kalman filter theory. According to the status of the receiver, the sending rate is adapted automatically by a Kalman filter. The state of the scheduling algorithm is analyzed, and the algorithm maintains the continuity of real-time video transmission. Simulation results indicate that the scheduling algorithm can provide good video service.

    • Generalized Decision Logic Based and End-User Oriented Service Composition Formal Model

      2009, 20(zk):138-143. CSTR:

      Abstract (4133) HTML (0) PDF 480.12 K (5250) Comment (0) Favorites

      Abstract: Little work has dealt with the contradiction between the vague, uncertain requirements of end users and the precise, deterministic processes of service composition. This paper proposes a multi-grain formal model for service composition. The model considers customer requirements in service composition from the end-user view, and a formal specification mapping Web service descriptions to the generalized decision logic language (GDL) is presented to construct multi-grain service composition views. GDL is a formal logic language proposed in the granular computing research community as a specification for defining granular models. It can be used to define a multi-grain model for service composition so that users and the service composition agent work at different information granule levels separately. The proposed model is expected to provide a more understandable view for end users than traditional service composition models and to conform to the human cognition mode.

    • Minimal Spanning Tree Based Graph Indexing Algorithm

      2009, 20(zk):144-153. CSTR:

      Abstract (5194) HTML (0) PDF 857.12 K (6902) Comment (0) Favorites

      Abstract: Graphs have become popular for modeling structured data, and graph indexing techniques therefore play an essential role in query processing. This paper investigates the issues of indexing graphs and proposes an approximate solution. The proposed approach, called MSTA, uses the minimal spanning tree as its basic indexing feature. Via the containment relation of edge lists and a graph distance based on the maximal common subgraph, the minimal spanning trees are organized into an indexing structure named the MST tree. The MST tree can support many kinds of queries efficiently, such as subgraph queries. The performance study shows that the index size and construction time of traditional methods are tens or even a hundred times larger than those of MSTA.
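
      A minimal sketch of the basic indexing feature (a plain Kruskal MST over weighted edge lists; MSTA's actual index additionally uses labels, the maximal-common-subgraph distance, and the MST-tree organization): the MST of each graph is extracted once, and containment of MST edge lists serves as a cheap first filter for candidate graphs.

        # Extract a minimal spanning tree (Kruskal) as a graph's indexing feature and
        # use containment of MST edge lists as a cheap filtering test for candidates.
        def mst_edges(n, edges):
            """edges: list of (weight, u, v); returns the MST/forest as a sorted tuple."""
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            tree = []
            for w, u, v in sorted(edges):
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    tree.append((w, u, v))
            return tuple(sorted(tree))

        def may_contain(data_feature, query_feature):
            """Filtering heuristic: keep the data graph only if the query's MST edges appear in its MST."""
            return set(query_feature) <= set(data_feature)

        g = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (1, 2, 3)]
        q = [(1, 0, 1), (1, 2, 3)]
        print(may_contain(mst_edges(4, g), mst_edges(4, q)))   # True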

    • Data Integrity Authentication Approach Based on Validating Queries in DAS Paradigm

      2009, 20(zk):154-164. CSTR:

      Abstract (3838) HTML (0) PDF 895.13 K (5841) Comment (0) Favorites

      Abstract: In the database-as-a-service (DAS) paradigm, data owners delegate their data to a third party, the database service provider (DSP). Compared with a traditional DBMS, DAS provides Web-based data access that relieves owners of heavy database management routines. To guarantee the quality of database service, most previous work has focused on data privacy and data integrity. This paper focuses on the authentication of data integrity. All previous approaches to data integrity authentication require the DSP either to provide extra information or to store extra data; in dynamic scenarios the authentication data must be updated correspondingly, which is inefficient to deploy in real life. This paper proposes a data integrity auditing approach based on validating queries. In this approach, validating queries are generated from previous queries sent by the user. According to the results of the validating queries and the relationship between previous queries and validating queries, the client can perform probabilistic integrity auditing effectively and efficiently. Experimental results confirm the effectiveness of the approach.

    • Distributed Aggregate Functions Enabled Parallel Main-Memory OLAP Query Processing Technique

      2009, 20(zk):165-175. CSTR:

      Abstract (4858) HTML (0) PDF 1.06 M (6474) Comment (0) Favorites

      Abstract: A multi-node parallel main-memory OLAP system is proposed in this paper, designed around the characteristics of OLAP queries and the performance of main-memory database systems. In this system, multi-dimensional OLAP queries with aggregate functions are distributed to each computing node to obtain partial aggregate results, and the final result is produced by merging the aggregate results from the computing nodes. Compared with other solutions, the system uses a horizontal distribution policy to distribute massive data across nodes, constrained only by the memory capacity of each computing node. By exploiting the properties of distributed aggregate functions, the system improves parallel processing capacity through lazy result merging, which reduces the volume of messages between nodes and improves the overall performance of parallel query processing. The system is easy to deploy and is practical, with good scalability and performance for enterprise-scale massive data processing.
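
      A minimal sketch of the distributive-aggregate idea (hypothetical names, not the system's actual API): each computing node returns a partial (sum, count) per group from its memory-resident partition, and the coordinator merges the partials lazily at the end, so an AVG is answered without shipping raw rows between nodes.

        # Each node returns partial (sum, count) per group key; the coordinator merges
        # partials lazily and derives AVG only after all nodes have reported.
        from collections import defaultdict

        def node_partial(rows):
            """rows: list of (group_key, value) held in this node's memory."""
            part = defaultdict(lambda: [0.0, 0])
            for key, val in rows:
                part[key][0] += val
                part[key][1] += 1
            return part

        def merge_avg(partials):
            total = defaultdict(lambda: [0.0, 0])
            for part in partials:
                for key, (s, c) in part.items():
                    total[key][0] += s
                    total[key][1] += c
            return {key: s / c for key, (s, c) in total.items()}

        node1 = node_partial([("east", 10), ("west", 4)])
        node2 = node_partial([("east", 20), ("east", 30)])
        print(merge_avg([node1, node2]))   # {'east': 20.0, 'west': 4.0}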

    • An Efficient Valid Page Crawling Approach for Websites with Dynamic Scripts

      2009, 20(zk):176-183. CSTR:

      Abstract (4136) HTML (0) PDF 949.53 K (6773) Comment (0) Favorites
    • IC: Incremental Algorithm for Community Identification in Dynamic Social Networks

      2009, 20(zk):184-192. CSTR:

      Abstract (5109) HTML (0) PDF 790.74 K (7634) Comment (0) Favorites

      Abstract: Community structure in social networks (SNS) provides interesting information, such as patterns of social activity between individuals and trends of social development. Traditional methods that identify communities on static social networks miss the interesting laws governing how social networks change. The few methods for modeling and analyzing community structure in dynamic social networks, which have attracted increasing attention recently, fail to handle large networks in acceptable time. This paper proposes a new incremental method to identify community structure in dynamic social networks. Exploiting the temporal locality that adjacent network snapshots change little, the paper analyzes social networks incrementally and avoids repeatedly partitioning the whole network. Experiments demonstrate that this approach offers orders-of-magnitude performance improvement over state-of-the-art approaches on large-scale networks (10^5 nodes) and produces good community structures that reflect the essence of the networks.
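
      As a generic illustration of the incremental idea (not the paper's algorithm): after an edge change between snapshots, only the affected nodes re-choose the community most common among their neighbors, instead of re-partitioning the whole network.

        # After an edge update, only the endpoints of changed edges are re-assigned,
        # each joining the community that is most common among its neighbors.
        from collections import Counter

        def incremental_update(adj, community, changed_edges):
            affected = {u for edge in changed_edges for u in edge}
            for u in affected:
                if adj[u]:
                    community[u] = Counter(community[v] for v in adj[u]).most_common(1)[0][0]
            return community

        adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
        community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
        print(incremental_update(adj, community, [(2, 3)]))   # only nodes 2 and 3 are re-evaluated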

    • Real-Time Synthesis of Large Textures

      2009, 20(zk):193-201. CSTR:

      Abstract (4371) HTML (0) PDF 1005.41 K (5772) Comment (0) Favorites

      Abstract: This paper presents a novel technique for synthesizing large, high-quality textures in real time. By analyzing texture periodicity, patches of an optimized size are generated to represent the variation of exemplar features well. During synthesis, the paper first distributes patches on the output texture, leaving a vacant region between every pair of neighboring patches in each row and column, where each vacant region has the same size as a patch. Thereafter, patches are selected to fill the vacant regions of the output texture. Both patch distribution and vacant-region filling can be executed in parallel. To accelerate the process, the paper constructs, for each patch, a set of matching patches that can be merged efficiently with it; the computation of patch selection for vacant regions is thereby reduced to set intersection. Moreover, patch distribution and set intersection are performed on the CPU while patch stitching is executed on the GPU, to take advantage of both. Experimental results show that the presented method can generate a high-quality 1024*1024 texture at over 45 frames per second, which is hard to achieve with existing techniques.
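
      A minimal sketch of the set-intersection step (hypothetical patch names and matching sets): the candidate patches for a vacant region are exactly those that appear in the precomputed matching set of every already-placed neighboring patch.

        # Selecting a patch for a vacant region reduces to intersecting the
        # precomputed matching sets of its neighboring patches.
        matching = {                      # hypothetical precomputed matching sets
            "p0": {"p2", "p3", "p5"},
            "p1": {"p3", "p4", "p5"},
        }

        def candidates_for_vacancy(neighbor_patches):
            sets = [matching[p] for p in neighbor_patches]
            result = set(sets[0])
            for s in sets[1:]:
                result &= s
            return result

        print(candidates_for_vacancy(["p0", "p1"]))   # {'p3', 'p5'}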

    • Sketch-Based Method for Interactive Hairstyling

      2009, 20(zk):202-212. CSTR:

      Abstract (4663) HTML (0) PDF 781.63 K (5484) Comment (0) Favorites

      Abstract: To support hairstyling effectively, it is crucial for hairstyling tools to find a trade-off between realism and interactivity. This paper presents an interactive hairstyling method that handles both the global shape and the local details of a hairstyle. A vector-field-based method generates the global shape of the hairstyle. The paper defines three kinds of hairstyle curves that serve as boundary constraints of the vector field, so that users can control hairstyle generation interactively. A hair wisp model is also used to represent hairstyles, and several wisp-based editing operations are defined so that users can control the local details of the hairstyle. Experimental results show that the proposed method provides better interaction and computational efficiency without loss of hair realism.

    • Music Retrieval Feature Database Construction Methods

      2009, 20(zk):213-220. CSTR:

      Abstract (4029) HTML (0) PDF 610.98 K (8514) Comment (0) Favorites

      Abstract: Constructing a music feature database properly is significant for accurate music retrieval. On the foundation of a content-based music retrieval framework, this paper divides database construction methods into pitch extraction, score information, and MIDI analysis methods, and proposes a pitch extraction method with post-processing and a MIDI analysis method to implement database construction. Experimental results show that the two methods are correct and effective, and can construct a music database accurately and rapidly.

    • Bi-Directional Video Segmentation Algorithm Based on Reliability

      2009, 20(zk):221-230. CSTR:

      Abstract (3487) HTML (0) PDF 821.91 K (5583) Comment (0) Favorites

      Abstract: This paper proposes an algorithm to segment the foreground object from a video clip. A reliability model, computed from local color pattern information, is presented. First, all frames of the video clip are pre-processed with the watershed algorithm, and graph cut is applied to the key frames. Second, the reliability model is computed by a bi-directional procedure: reliabilities are set in the forward pass, and a small portion of them is then corrected in a reverse pass assisted by an optical flow algorithm. Finally, the video is segmented according to the reliability of each frame. When dealing with video whose foreground and background colors are similar, the reliability model yields good segmentation results, and the bi-directional procedure improves the segmentation of occluded objects in the video.

    • Sampling Subspaces Based on Wavelet Frames

      2009, 20(zk):231-238. CSTR:

      Abstract (4529) HTML (0) PDF 552.31 K (5345) Comment (0) Favorites

      Abstract: Sampling theory is one of the most powerful results in modern information theory and technology: a digital signal with the sampling property can be perfectly reconstructed from its samples. Walter and Zhou extended the Shannon sampling theorem to wavelet subspaces. This paper improves the classical sampling theorems based on wavelet frames and addresses a basic problem of information theory: whether a given digital signal has a sampling series form. The digital signals with sampling properties are characterized based on wavelet frames, and for a given sampling subspace, the analytic form of the signals in it is proposed; in particular, some new kinds of sampling subspaces are presented. As an application, examples show that the new theorems improve some known related results, which is effective for the sampling and reconstruction of digital signals.
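
      For background (a standard statement, not this paper's new result), the classical Shannon sampling series that Walter and Zhou generalized to wavelet subspaces can be written as:

        % Classical Shannon sampling series: a signal bandlimited to [-W, W]
        % is perfectly reconstructed from its uniform samples f(n/2W).
        f(t) = \sum_{n \in \mathbb{Z}} f\!\left(\tfrac{n}{2W}\right)\,
               \operatorname{sinc}(2Wt - n),
        \qquad \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.

      In the wavelet-subspace setting the sinc kernel is replaced, under suitable conditions on the scaling function, by a sampling function S(t) with f(t) = \sum_{n} f(n) S(t - n) for every f in V_0.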

    • False Data Filtering Mechanism Based on Cloud-Built Authentication Model in Wireless Sensor Networks

      2009, 20(zk):239-249. CSTR:

      Abstract (3852) HTML (0) PDF 1002.48 K (5476) Comment (0) Favorites

      Abstract: Most recent false data filtering mechanisms in WSNs attach t MACs (message authentication codes) to data packets; such mechanisms are usually restricted by the t-threshold security limit and do not support dynamic routing. Based on the idea of a virtual witness cluster and adopting perturbation-based polynomial technology, this paper proposes an authentication algorithm in which a number of nodes within a cloud cooperate to generate a certification polynomial, increasing the difficulty of an attack. On this basis, the proposed false data filtering mechanism can verify the validity of data immediately and supports dynamic routing. Theoretical analysis and simulation experiments show that the new method is not limited by the t-threshold and saves more energy as the number of transmission hops increases. Compared with other mechanisms, the method enhances the ability to resist node capture, making it more suitable for networks with low credibility and long-distance transmission applications.

    • Energy and Node Degree Synthesized Clustering Algorithm for Wireless Sensor Networks

      2009, 20(zk):250-256. CSTR:

      Abstract (4311) HTML (0) PDF 620.38 K (6293) Comment (0) Favorites

      Abstract: In clustering algorithms for wireless sensor networks, to solve the problem of excessive energy consumption at cluster heads, a residual-energy and node-degree synthesized clustering algorithm named ENCA (energy and node degree synthesized clustering algorithm) is proposed in this paper. In the cluster-head election phase of every round, the algorithm considers the residual energy and the average energy of all nodes in each cluster, and an optimal cluster head is elected in each cluster according to node degree. While the algorithm runs, the connectivity of the network is guaranteed and, at the same time, the selection of low-energy nodes as cluster heads is avoided. Simulation results show that, in comparison with LEACH and ACE, ENCA balances node energy consumption and efficiently prolongs the network lifetime.
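
      A hedged sketch of the election idea (the exact ENCA scoring is not reproduced here): within a cluster, only nodes whose residual energy is at least the cluster average are considered eligible, and among them the node with the highest degree becomes cluster head.

        # Hedged sketch: eligibility by residual energy vs. cluster average,
        # then the highest-degree eligible node is elected (assumed scoring, not ENCA's exact rule).
        def elect_cluster_head(nodes):
            """nodes: list of dicts with 'id', 'energy', 'degree' for one cluster."""
            avg_energy = sum(n["energy"] for n in nodes) / len(nodes)
            eligible = [n for n in nodes if n["energy"] >= avg_energy] or nodes
            return max(eligible, key=lambda n: n["degree"])["id"]

        cluster = [
            {"id": 1, "energy": 0.9, "degree": 3},
            {"id": 2, "energy": 0.4, "degree": 6},   # high degree but low residual energy
            {"id": 3, "energy": 0.8, "degree": 5},
        ]
        print(elect_cluster_head(cluster))   # 3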

    • Proximity List: RSSI-Assisted Relative Localization in One Dimensional Wireless Sensor Networks

      2009, 20(zk):257-265. CSTR:

      Abstract (5844) HTML (0) PDF 835.38 K (6199) Comment (0) Favorites

      Abstract: In wireless sensor networks, the relative localization problem is to infer relative locations instead of absolute locations. This paper considers the one-dimensional relative localization problem in wireless sensor networks and proposes PLO (Proximity List algOrithm), which characterizes each node by its proximity list, a list of all node IDs ordered by distance. A trace-driven analysis verifies that differences in distance are mostly reflected by differences in received RSSI (received signal strength indicator) values; therefore, proximity lists can be obtained by comparing RSSI values. Finally, relative locations are obtained by locating the end nodes of the line topology. The paper shows that the algorithm is feasible in practical situations.
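
      A minimal sketch of how a proximity list can be formed from RSSI readings (hypothetical node IDs; stronger RSSI is taken to mean a closer neighbor):

        # A node's proximity list: neighbor IDs ordered from nearest to farthest,
        # approximated by sorting received RSSI values in decreasing order.
        def proximity_list(rssi_readings):
            """rssi_readings: dict of neighbor_id -> averaged RSSI in dBm."""
            return [nid for nid, _ in sorted(rssi_readings.items(),
                                             key=lambda kv: kv[1], reverse=True)]

        readings = {"n7": -41.2, "n3": -63.8, "n9": -52.5}
        print(proximity_list(readings))   # ['n7', 'n9', 'n3']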

    • Timing-Sync Protocol for Linear Sensor Networks

      2009, 20(zk):266-277. CSTR:

      Abstract (4094) HTML (0) PDF 861.53 K (6190) Comment (0) Favorites

      Abstract: Due to its high synchronization error and high power consumption, the classic synchronization scheme of two-way packet exchange is unfit for some applications in wireless sensor networks, especially for networks with a multi-hop linear topology. This paper proposes a time synchronization protocol named the Timing-sync Protocol for Linear Sensor Networks (TPLSN). An enhanced two-way packet exchange scheme and a clock skew compensation scheme are the keys to TPLSN. The accumulation of its synchronization error over hop count is also investigated. TPLSN is evaluated on a Mica2-compatible test bed: its synchronization error is less than 20μs for a node 9 hops away from the time beacon node, the increase in synchronization error per hop is less than 1μs, and the increase in synchronization error per second of resynchronization cycle is 0.017μs. Furthermore, to synchronize all nodes in an n-hop linear wireless sensor network, only 2n packets are needed, the minimum for any protocol based on two-way packet exchange. Theoretical analysis shows that three factors, the approximation accuracy, the asymmetry of the two-way packet exchange, and the clock skew, have great influence on the time offset between two adjacent nodes. Moreover, the order of clock frequencies along the linear network is found to be vital to the accumulation of synchronization error over hop count.
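
      For reference, the classic two-way packet exchange that TPLSN enhances estimates offset and delay from four timestamps: node A stamps T1 when it sends, node B stamps T2 on receipt and T3 on reply, and A stamps T4 when the reply arrives (a standard derivation, not TPLSN's enhanced scheme):

        # Classic two-way packet exchange: offset and delay from four timestamps.
        def offset_and_delay(t1, t2, t3, t4):
            offset = ((t2 - t1) - (t4 - t3)) / 2.0   # B's clock offset relative to A
            delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way propagation/processing delay
            return offset, delay

        # Example: B runs 100 time units ahead of A, one-way delay is 5 units.
        print(offset_and_delay(t1=1000, t2=1105, t3=1110, t4=1015))   # (100.0, 5.0)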

    • A Localization Algorithm in Wireless Sensor Networks with Mobile Anchor Nodes

      2009, 20(zk):278-285. CSTR:

      Abstract (5281) HTML (0) PDF 746.41 K (8705) Comment (0) Favorites

      Abstract: In the localization process in WSNs, the overhead of the whole network increases as more anchor nodes are added, which causes considerable waste. To achieve more precise localization with fewer anchor nodes, this paper proposes a range-free localization algorithm based on three mobile anchor nodes. The algorithm guarantees that each unknown node can choose anchor nodes that are not far from itself for localization, and it constructs an optimization model to maximize the area of the doubly covered regions in the network. Simulation results show that the algorithm improves localization precision.

    • Method of Relevance Feedback in Keyword Search over Relational Databases

      2009, 20(zk):286-297. CSTR:

      Abstract (3768) HTML (0) PDF 864.48 K (5965) Comment (0) Favorites

      Abstract: In keyword search over relational databases (KSORD), the results of a user’s initial query are often unsatisfactory; the user has to reformulate the query and execute the new one, which costs much time and effort. This paper introduces a method for automatically reformulating user queries by relevance feedback. The method adopts a ranking method based on the vector space model to rank KSORD results. Based on the results of user feedback or pseudo feedback, it computes expansion terms probabilistically and reformulates the new query using query expansion. Experimental results verify that, after a KSORD system executes the new query, more relevant results are presented to the user.
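
      A simple sketch of feedback-driven query expansion (a plain frequency heuristic, not the probabilistic weighting used in the paper): terms that are frequent in results the user marked relevant, and not already in the query, are appended to the reformulated query.

        # Expand a keyword query with the most frequent terms found in the
        # results marked relevant by the user (illustrative heuristic only).
        from collections import Counter

        def expand_query(query_terms, relevant_results, k=2):
            counts = Counter(t for doc in relevant_results for t in doc.split()
                             if t not in query_terms)
            return list(query_terms) + [t for t, _ in counts.most_common(k)]

        results = ["database keyword search ranking", "keyword search relational ranking"]
        print(expand_query(["keyword", "search"], results))   # ['keyword', 'search', 'ranking', ...]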

    • Research on Adaptive Sketch User Interface for Virtual Education Application

      2009, 20(zk):298-305. CSTR:

      Abstract (4426) HTML (0) PDF 725.53 K (6288) Comment (0) Favorites

      Abstract: Combining a sketching user interface and adaptive techniques with virtual reality (VR) techniques is of great help to virtual education applications. It improves the intelligence and friendliness of applications by enhancing the interaction between users and machines, and at the same time improves the effectiveness of the virtual classroom environment. This paper builds an adaptive sketch-based user interface for a virtual education environment and puts it into use to meet the needs of specific applications. The paper focuses in detail on the context processing mechanism of the sketch user interface in the virtual education environment. A virtual education prototype is then built, and the experimental results verify the feasibility and effectiveness of the prototype.

    • Common Service Access and Integration Scheme

      2009, 20(zk):306-313. CSTR:

      Abstract (3668) HTML (0) PDF 661.84 K (5548) Comment (0) Favorites

      Abstract: Because of the rapid growth of the Internet, informatization in enterprises of almost all fields is advancing quickly, and efficiently developing high-quality, easily maintained enterprise business information systems has become the trend. Because different business information systems share similar functional requirements, developers, in order to provide efficient and quality-guaranteed services, use common services provided by third parties to build loosely coupled systems; in this way they can devote more energy to their core business logic. However, facing a large number of third-party enterprises and their services, how to perform adaptation and how to guarantee service quality are issues that must be addressed. This paper proposes a common service access and integration (CSAI) method for business information systems. CSAI integrates the common services provided by third-party enterprises and provides a customizable integrated common-service infrastructure with a uniform, interoperable, quality-guaranteed interface, offering guaranteed services and service operation support for different enterprises and for enterprises in different fields. The functions and structure of CSAI are explained in detail, a common-service integration flow abstract description language that can be used for system description and development is proposed, and practical cases are used to support the analysis.

    • Privacy Preservation for Attribute Order Sensitive Workload in Medical Data Publishing

      2009, 20(zk):314-320. CSTR:

      Abstract (4926) HTML (0) PDF 578.14 K (6898) Comment (0) Favorites

      Abstract: Privacy is an increasingly serious concern in applications involving microdata, such as medical data publishing and medical data mining. Anonymization methods based on global recoding, local recoding, or clustering provide privacy protection by guaranteeing that each released record is indistinguishable from those of some other individuals. However, such methods may not achieve effective anonymization with respect to the analysis workload that uses the anonymized data, and the utility of attributes has not been well considered in previous methods. This paper studies utility-based anonymization for attribute-order-sensitive workloads, where the order of the attributes is important to the analysis workload. Based on the concept of multidimensional anonymization, a method for attribute-order-sensitive utility-based anonymization is discussed. A performance study using public data sets shows that efficiency is not affected by the attribute-order processing.

    • Certificateless-Based Two-Party Authenticated Key Agreement Protocol

      2009, 20(zk):321-329. CSTR:

      Abstract (5587) HTML (0) PDF 663.43 K (7593) Comment (0) Favorites

      Abstract: Two-party authenticated key agreement protocols have mainly been constructed on traditional public key cryptography or identity-based public key cryptography. Certificateless authenticated key agreement protocols avoid the complexity of certificate management in traditional certificate-based schemes as well as the key escrow problem inherent in identity-based schemes. In 2007, Park et al. proposed a certificateless public key encryption scheme that is provably secure against chosen plaintext attacks in the selective-ID security model (IND-sID-CPA). Inspired by that scheme, this paper presents a two-party certificateless authenticated key agreement scheme and compares it with other comparable schemes in terms of security and efficiency. The new scheme achieves almost all of the desired security attributes, especially perfect forward secrecy, PKG forward secrecy, known session-specific temporary information secrecy, and freedom from key escrow, while maintaining good efficiency.

    • Clustering Method for Incomplete Text System Based on Set Pair Analysis

      2009, 20(zk):330-335. CSTR:

      Abstract (4135) HTML (0) PDF 520.39 K (5802) Comment (0) Favorites

      Abstract: This paper presents a novel clustering approach for incomplete text systems. It is based on a hypergraph model built with set pair analysis, in which the identity, discrepancy, and contrary connection degrees of set pairs and the set-pair similarity value are used. After the hypergraph model is set up, a hypergraph partitioning algorithm is used to find clusters. The new method eliminates disadvantageous factors, greatly reduces the dimensionality of the incomplete text data, and improves both the speed and the precision of text clustering. The experimental results show that the algorithm is feasible and efficient.

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • Journal No.: ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: RMB 70