WEN Yan-Hua , TANG Da-Guo , QI Feng-Bin
2009, 20(zk):1-7.
Abstract:How to migrate binary code between different ISAs efficiently is a difficult problem in binary translation. This paper analyzes this problem from the perspective of register mapping and presents an innovative register mapping method that combines segment mapping with cutting out the functions of special-purpose registers. The method has been implemented on trend, a dynamic binary translation system that translates and executes PowerPC binary code on Alpha. Test results on NPB-serial and SPEC2000 show that the method simplifies instruction translation, reduces code expansion, and noticeably improves the execution efficiency of the translated binary code.
XU Xiao-Wen , MO Ze-Yao , CAO Xiao-Lin
2009, 20(zk):8-14.
Abstract:HYPRE is a high-performance preconditioner library for solving large sparse linear systems on parallel computers. This paper analyzes the scalability of SMG and BoomerAMG, both multigrid solvers in HYPRE, on a massively parallel computer with thousands of processors. Based on the presented results, conclusions are drawn for designing scalable algorithms and their parallel implementations in real numerical applications.
CHEN Tian-Zhou , YAN Li-Ke , HU Wei , MA Ji-Jun
2009, 20(zk):15-22.
Abstract:The CPU/FPGA hybrid architecture is a popular reconfigurable computing architecture. In order to ease the use of the FPGA, a hardware thread approach is proposed, and a hardware thread execution mechanism is designed to make use of the reconfigurable resources. Software threads and hardware threads can execute in parallel, with computation-intensive tasks assigned to hardware threads and control-intensive tasks assigned to software threads. The Simics simulator is adopted to simulate a hybrid architecture platform, on which software and hardware multithreaded DES, MD5SUM and MergeSort algorithms are evaluated. The results show an average speedup of 2.30, demonstrating that the approach effectively exploits the performance of the CPU/FPGA hybrid architecture.
SUN Xiao-Juan , SUN Ning-Hui , LEI Bin
2009, 20(zk):23-33.
Abstract:Computing is entering a new phase in which CPU improvements are driven by the addition of multiple cores on a single chip rather than by higher frequencies. Parallel processing on these systems is still in a primitive stage and requires explicit use and knowledge of the underlying thread architecture. Based on the features of massive data stream applications, this paper proposes a three-level pipelining programming model for multithreading systems, which realizes a new synchronization mechanism with no contention on shared structures and can provide differentiated service for data streams. The paper then applies the new model to a remote sensing information processing system and a backbone network intrusion detection system, and evaluates the improved systems on several multicore platforms. The performance analysis evaluates the optimization effects on the backbone network intrusion detection system in several aspects: throughput scalability on both SPARC T1 and x86 platforms, the impact of different multithreading mapping methods on throughput, and the comparison of response time and service quality before and after optimization. The experimental results show that the system throughput scales well on both platforms, response times are greatly improved, and prioritized streams achieve better response time with the differentiated service mechanism.
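A minimal sketch of the pipelining idea described above (not the authors' system; stage names and data are hypothetical): each stage owns its input queue, so stages never contend on a shared structure, and the main thread acts as the first (read) stage of the pipeline.

```python
# Hypothetical three-stage pipeline: main thread reads, t1 processes, t2 writes.
import queue
import threading

def stage(in_q, out_q, work):
    while True:
        item = in_q.get()
        if item is None:                 # end-of-stream marker
            if out_q is not None:
                out_q.put(None)          # propagate shutdown downstream
            break
        result = work(item)
        if out_q is not None:
            out_q.put(result)

q1, q2 = queue.Queue(), queue.Queue()
results = []

t1 = threading.Thread(target=stage, args=(q1, q2, str.strip))
t2 = threading.Thread(target=stage, args=(q2, None, results.append))
t1.start(); t2.start()

for packet in ["  pkt1 ", "  pkt2 "]:    # main thread acts as the read stage
    q1.put(packet)
q1.put(None)
t1.join(); t2.join()
print(results)                           # ['pkt1', 'pkt2']
```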
QI Feng-Bin , WANG Fei , LI Zhong-Sheng
2009, 20(zk):34-39.
Abstract:Traditional data prefetching based on static compiler analysis mainly targets array accesses. Linked data structures built with pointers now abound in applications and are difficult to prefetch with traditional techniques. Feedback-directed data prefetching has become one of the advanced compiler optimization techniques and works well for linked data structures. This paper studies profile-guided optimization in the ORC compiler and improves the prefetching algorithm according to the features of the Alpha architecture. SPEC2000 performance tests show a 4.1% speedup.
PAN Gang , ZHANG Li , LI Shi-Jian , WU Zhao-Hui
2009, 20(zk):40-50.
Abstract:Pervasive computing is becoming an emerging paradigm of computing; however, there is currently little work addressing pervasive computing models. The goal of this paper is to model a pervasive computing environment as a user-centric “SmartShadow” using the BDP (Belief-Desire-Plan) user model, which maps pervasive computing environments into a dynamic virtual user space. In the BDP model, a user's desires are inferred from his belief set, and plans are made to satisfy each desire. Pervasive services describe the computing capabilities of the cyberspace and can be organized by the user's BDP to accomplish his/her desires. A composition process casts pervasive services into a user's SmartShadow. The SmartShadow follows a user to provide pervasive services, like one's shadow in the physical world. The proposed model is logically natural and can flexibly deal with the dynamics of pervasive computing spaces. This paper also implements a simulation system to evaluate the SmartShadow model.
YUAN Zhi-Min , WU Ling-Da , CHEN Dan-Wen , TAN Jie , ZHOU Wen
2009, 20(zk):51-58.
Abstract:In recent years, animation has been developing rapidly, and the processing and summarization of animation video have become a research hotspot. Video abstraction is an important step in animation video research. Based on the distinct properties of animation video, which differ from those of other videos such as news and sports video, a video abstraction method suitable for animation video is proposed. The paper first analyzes the structure of animation video to obtain its visual characteristics and clear structure, and defines the granularity of video scenes. Based on a content evaluation model of scene significance and the scene granularity, the important scenes of the video are found. Then, in temporal order, a video abstract including a storyboard and a video skim is produced. Experimental results indicate that the proposed method extracts animation video abstracts efficiently, and the two forms of video abstract produced by this method can generalize and condense the animation video effectively.
2009, 20(zk):59-65.
Abstract:As the amount of information on today's computers grows, helping computer users locate the required files in file systems has become an important topic in today's intelligent interaction model research. Past research has mostly concentrated on PIM (personal information management), re-organizing file hierarchies in a way more understandable to individual users. However, due to the numerous extra operations and the long period needed for re-organizing users' knowledge systems, such applications can hardly be adopted by users. Considering that there is usually a certain topic or purpose when a user accesses files (a user often views several files related to the same topic during the same period), this paper proposes file recommendation based on tracking the user's file operations. An intelligent file recommendation desktop toolkit (IFRDT) is implemented, which tracks the user's file access history and recommends the files most related to the one currently being accessed, to reduce the time cost of finding desired information. Experimental results show that IFRDT saves more file-searching effort than a plain history list, and users can find over 50% of desired files in IFRDT and open them directly without searching through directories.
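A minimal sketch of the co-access idea behind this kind of recommendation (not the IFRDT implementation; the file names and the scoring rule are illustrative assumptions): files opened in the same tracked session are counted as related, and the files most related to the currently open file are recommended.

```python
# Hypothetical co-access recommender: count files opened in the same session.
from collections import defaultdict
from itertools import combinations

co_access = defaultdict(int)

def record_session(files_opened_together):
    """Update co-access counts from one tracked session."""
    for a, b in combinations(sorted(set(files_opened_together)), 2):
        co_access[(a, b)] += 1

def recommend(current_file, top_k=3):
    """Return the files most often co-accessed with the current file."""
    scores = defaultdict(int)
    for (a, b), n in co_access.items():
        if a == current_file:
            scores[b] += n
        elif b == current_file:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

record_session(["paper.tex", "refs.bib", "fig1.svg"])
record_session(["paper.tex", "refs.bib"])
print(recommend("paper.tex"))   # ['refs.bib', 'fig1.svg']
```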
2009, 20(zk):66-75.
Abstract:Aiming at quick and proper reuse of existing knowledge in enterprises, a new idea is advanced that separates the business logic of knowledge management from the transactions of knowledge processing. Furthermore, an agile knowledge reuse model based on multi-agent technology and knowledge services is constructed, and an agile knowledge reuse system is defined. To realize the agile transfer of knowledge services, a rule-coordination mode based on multiple agents is established through study of the rule model of business logic and the activity-action model of the software agent. In this way, the dynamic reuse of knowledge and the dynamic regrouping of the knowledge-using process can be supported effectively, and the process-distribution capability and extensibility of the knowledge management system can also be increased. The agents perform users' retrieval requests via collaboration in a distributed component repository system. The agents have their own knowledge bases and need the capability to learn new information and update their knowledge bases to keep the retrieval results effective. Finally, a case study is given to illustrate the application of this model.
HUA Qing-Yi , LIU Qing-Fang , YU Di , WANG Xiao-Wen
2009, 20(zk):76-83.
Abstract:This paper presents a perceptual-control-based agent architecture that enables the construction of interactive software systems with high usability. With this architecture, usability requirements can be modeled by adding new architecture levels, realizing an orthogonal relationship between different usability features. The architecture adopts perceptual control theory to match the nonlinear relationship between the user interface and the application core. Compared with other architectures, systems adopting this architecture can present users with user-interface elements at the user-task level and permit users to operate the system at the user-task level. Users can therefore complete their tasks and reach their goals naturally. Furthermore, systems adopting the architecture can easily extend usability dynamically.
WANG Zhu , ZHOU Xing-She , WANG Hai-Peng , NI Hong-Bo , WU Rui-Juan
2009, 20(zk):84-94.
Abstract:This paper explores the issue of adaptive group navigation in ubiquitous computing environments. First, it proposes the classification and definition of four categories of user groups and introduces the notions of group navigation and group experience. Second, it analyzes the quantitative evaluation of group experience and establishes a uniform evaluation model suitable for all kinds of user groups. Third, it puts forward an adaptive group navigation technique that aims to enhance group experience. In particular, the paper deals in depth with two basic techniques of adaptive group navigation: group modeling, which contributes to the direct group experience, and context-aware intra-group interaction, which contributes to the indirect group experience. Finally, the paper evaluates the effectiveness of the proposed group navigation technique.
2009, 20(zk):95-103.
Abstract:Sensor localization is used by many position-dependent applications in wireless sensor networks (WSNs), where ranging from sensor nodes to beacon nodes plays a fundamental role. Most state-of-the-art ranging methods rely on many assumptions about deployment and measurement. However, these assumptions do not hold in practice, so existing methods introduce ranging errors too large to be feasible for real applications. In order to obtain more accurate distance estimation, this paper proposes a new metric, round-route node correlation, to describe the bending of paths in the WSN, and then proposes a method to identify turning nodes along paths. By comparing the similarities between paths, further similarity-based adjustment algorithms are proposed. Simulation results show that the proposed method outperforms PDM and DV-distance, especially when beacon nodes are not deployed uniformly.
LUO Xiong-Fei , WANG Hong-An , TIAN Feng , DAI Guo-Zhong , TENG Dong-Xing
2009, 20(zk):104-112.
Abstract:Temporal data are widely used in many fields. One of the prominent time series analysis techniques is visualization, which may improve users' ability to recognize and analyze information. However, users tend to fail when analyzing long time series with existing approaches. This paper presents an approach named FisheyeLines. This visualization technique provides good overviews of large-scale information and details for focus objects; meanwhile, the correlations between complex information and the properties of objects can be easily perceived. The paper also presents a tool named FisheyeLinesVis for developing temporal data visualization applications. A user study shows that this approach is efficient and easy to use.
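For intuition, a classic one-dimensional graphical fisheye transform of the kind such timeline views rely on (illustrative only; the exact FisheyeLines distortion function is an assumption here, following the Sarkar-Brown formulation):

```python
# Illustrative 1D fisheye distortion for a timeline position in [0, 1].
def fisheye(x, focus, d=3.0):
    """Map a normalized position x so that the neighborhood of `focus` is
    magnified; d controls the distortion strength (d=0 means no distortion)."""
    dx = x - focus
    span = (1.0 - focus) if dx >= 0 else focus
    if span == 0:
        return x
    t = dx / span                                   # signed distance in [-1, 1]
    g = (d + 1) * t / (d * abs(t) + 1)              # Sarkar-Brown style distortion
    return focus + g * span

focus = 0.5
for x in (0.0, 0.4, 0.45, 0.5, 0.55, 0.6, 1.0):
    print(f"{x:.2f} -> {fisheye(x, focus):.2f}")    # points near the focus spread out
```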
DING Bo , WANG Huai-Min , SHI Dian-Xi
2009, 20(zk):113-122.
Abstract:Pervasive computing software has to adapt itself to dynamically changing execution environments and user requirements. This feature complicates software implementation significantly, which makes it necessary to adopt design-level software reuse means, such as software architecture styles, in its development. Based on an adaptive abstract model of the pervasive computing space, this paper proposes a software architecture style for pervasive computing, UbiArch, and details its concept view, runtime view and development view. UbiArch supports a novel behavior pattern of software entities, i.e., dynamically joining applications according to user requirements and actively adapting themselves to the execution environment. As a result, architecture-level reuse of software adaptability can be achieved. Besides, this architecture style is built on mature software techniques, such as component technology, which ensures its practicability. A software platform supporting this architecture, as well as several UbiArch-based applications, has been developed to validate the effectiveness and generality of UbiArch.
ZHANG Shao-Zhong , CHEN De-Ren
2009, 20(zk):123-130.
Abstract:A hybrid graph model for personalized recommendation, based on a small-world network and a Bayesian network, is presented. Small-world networks have good clustering properties, and Bayesian networks are well suited to probabilistic inference. The hybrid graph model consists of two layers: a user layer representing users or customers, and a merchandise layer representing goods or products. The small-world network describes the relationships among user nodes in the lower layer, while the implications among merchandise nodes in the higher layer are represented by the Bayesian network. Directed arcs denote the tendencies between nodes of the user layer and the merchandise layer. This paper also introduces algorithms for clustering based on the small-world network, for structure learning and parameter learning of the Bayesian network, and a recommendation algorithm based on this model. Experiments show that the model can represent the relationships from user to user, merchandise to merchandise, and user to merchandise, and the experimental results show that the hybrid graph model performs well in personalized recommendation.
TAN Yu-Bo , XIA Bin , TAO Yang
2009, 20(zk):131-137.
Abstract:With the development of the network, video applications based on the Internet are growing rapidly, and real-time video applications are becoming popular. Because of the complexity of the network and of real-time streaming video on demand, the scheduling algorithm has great influence on QoS. This paper proposes a scheduling algorithm for real-time streaming video, the QFEC (QoS based on FEC) algorithm, which combines FEC with Kalman filtering. According to the status of the receiver, the sending rate is adapted automatically by the Kalman filter. The state of the scheduling algorithm is analyzed, and the algorithm can maintain the continuity of real-time video transmission. Simulation results are given, which indicate that the scheduling algorithm can provide good video service.
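A minimal sketch of the rate-adaptation idea (not the QFEC algorithm itself; the noise parameters and the rate-adjustment rule are assumptions): a scalar Kalman filter smooths the receiver's reported loss, and the sender adapts its rate from the filtered estimate.

```python
# Hypothetical sender-side adaptation driven by a one-dimensional Kalman filter.
class ScalarKalman:
    """Scalar Kalman filter with a constant-state model."""
    def __init__(self, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
        self.q, self.r = q, r        # process / measurement noise variances
        self.x, self.p = x0, p0      # state estimate and its variance

    def update(self, z):
        self.p += self.q                       # predict
        k = self.p / (self.p + self.r)         # Kalman gain
        self.x += k * (z - self.x)             # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
rate_kbps = 800.0
for reported_loss in (0.02, 0.05, 0.12, 0.15, 0.10):     # receiver feedback
    loss = kf.update(reported_loss)
    rate_kbps *= 0.8 if loss > 0.08 else 1.05             # simple adaptation rule
    print(f"filtered loss={loss:.3f}  send rate={rate_kbps:.0f} kbps")
```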
LUO Yuan-Sheng , QI Yong , HOU Di , SHI Yi , CHEN Ying , SHEN Lin-Feng
2009, 20(zk):138-143.
Abstract:Little work has addressed the contradiction between the vague and uncertain requirements of end-users and the precise and deterministic process of service composition. A multi-grain formal model for service composition is proposed in this paper. This model considers the customer's requirements in service composition from the end-user's view, and a formal specification for mapping Web service descriptions to the generalized decision logic language (GDL) is presented to construct multi-grain service composition views. GDL is a formal logic language proposed in the granular computing research community as a specification for defining granular models. It can be used to define a multi-grain model for service composition and let users and the service composition agent work at different information granule levels separately. The proposed model is expected to provide a more understandable view for end-users than traditional service composition models and conforms to the human cognition mode.
LI Nan , GAO Hong , LI Jian-Zhong
2009, 20(zk):144-153.
Abstract:Graphs have become popular for modeling structured data. As a result, graph indexing techniques have come to play an essential role in query processing. This paper investigates the issues of indexing graphs and proposes an approximate solution. The proposed approach, called MSTA, uses the minimum spanning tree as its basic indexing feature. Through the containment relation of edge lists and a maximal-common-subgraph-based graph distance, the minimum spanning trees are organized into an indexing structure named the MST tree. The MST tree can support many kinds of queries efficiently, such as subgraph queries. The performance study shows that the index size and construction time of traditional methods are tens or even a hundred times larger than those of MSTA.
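A rough sketch of the indexing idea only (not MSTA's MST tree; the label-containment test below is a loose pruning heuristic, and the graphs are toy examples): each graph is summarized by the edge labels of its minimum spanning tree, and candidate graphs whose feature lists cannot contain the query's are pruned before exact subgraph verification.

```python
# Illustrative MST-feature filter; survivors still need exact subgraph matching.
import networkx as nx

def mst_edge_labels(g):
    """Sorted edge-label list of the graph's minimum spanning tree."""
    mst = nx.minimum_spanning_tree(g, weight="weight")
    return sorted(g.edges[u, v].get("label", "") for u, v in mst.edges)

def may_contain(query_labels, data_labels):
    """Loose pruning test: every query label must be available in the data
    graph's feature list."""
    pool = list(data_labels)
    for lab in query_labels:
        if lab not in pool:
            return False
        pool.remove(lab)
    return True

g = nx.Graph()
g.add_edge("a", "b", weight=1, label="C-C")
g.add_edge("b", "c", weight=2, label="C-O")
q = nx.Graph()
q.add_edge("x", "y", weight=1, label="C-C")
print(may_contain(mst_edge_labels(q), mst_edge_labels(g)))   # True -> verify exactly
```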
YAN Qiao-Zhi , WANG Jie-Ping , DU Xiao-Yong
2009, 20(zk):154-164.
Abstract:In the database-as-a-service (DAS) paradigm, data owners delegate their data to a third party: the database service provider (DSP). Compared with traditional DBMSs, DAS provides Web-based data access to relieve heavy database management routines. To guarantee the quality of database service, most previous work has focused on data privacy and data integrity. This paper focuses on authentication of data integrity. All previous approaches for data integrity authentication require the DSP either to provide extra information or to store extra data; in dynamic scenarios, the authentication data must be updated correspondingly, which is inefficient to deploy in real life. This paper proposes a data integrity auditing approach based on validating queries. In this approach, validating queries are generated from previous queries sent by the user. According to the results of the validating queries and the relationship between previous queries and validating queries, the client can audit integrity effectively and efficiently with probabilistic guarantees. Experimental results confirm the effectiveness of the approach.
ZHANG Yan-Song , ZHANG Yu , HUANG Wei , WANG Shan , CHEN Hong
2009, 20(zk):165-175.
Abstract:A multi-node parallel main-memory OLAP system is proposed in this paper, designed in consideration of the characteristics of OLAP queries and the performance of main-memory database systems. In this system, multi-dimensional OLAP queries with aggregate functions are distributed to each computing node to obtain partial aggregate results, and the final result is produced by merging the aggregate results from all computing nodes. Compared with other solutions, this system uses a horizontal distribution policy to distribute massive data across nodes, considering only the memory capacity of each computing node. According to the features of distributive aggregate functions, the system improves parallel processing capacity by lazy result merging, which reduces the message volume between nodes, so the overall performance of parallel query processing is improved. The system is easy to deploy and is practical, with good scalability and performance for enterprise massive data processing requirements.
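A minimal sketch of distributive aggregation with lazy merging (hypothetical data and grouping, not the system's code): each node aggregates its horizontal partition into small (SUM, COUNT) pairs, and only these partial aggregates are shipped and merged to produce the final AVG per group.

```python
# Illustrative per-node aggregation plus lazy merge at the coordinator.
from collections import defaultdict

def local_aggregate(rows):
    """rows: iterable of (group_key, value) from one node's horizontal partition.
    Returns small partial aggregates {group_key: (sum, count)}."""
    acc = defaultdict(lambda: [0.0, 0])
    for key, value in rows:
        acc[key][0] += value
        acc[key][1] += 1
    return {k: tuple(v) for k, v in acc.items()}

def lazy_merge(partials):
    """Merge partial (sum, count) pairs from all nodes, then finish AVG."""
    total = defaultdict(lambda: [0.0, 0])
    for part in partials:
        for key, (s, c) in part.items():
            total[key][0] += s
            total[key][1] += c
    return {k: s / c for k, (s, c) in total.items()}

node1 = local_aggregate([("east", 10), ("west", 4), ("east", 6)])
node2 = local_aggregate([("east", 8), ("west", 2)])
print(lazy_merge([node1, node2]))   # {'east': 8.0, 'west': 3.0}
```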
XIA Bing , GAO Jun , WANG Teng-Jiao , YANG Dong-Qing
2009, 20(zk):176-183.
Abstract:In the era of Web 2.0, more and more websites adopt dynamic scripts for user interaction; switches between pages are no longer all based on <a> tags, and the URL is no longer the unique identification of a Web page. Traditional Web crawlers cannot deal with Web pages containing dynamic scripts; as a result, search engines such as Google give up these Web pages. Research on crawling websites with dynamic scripts is still at an early stage. This paper proposes an efficient approach for crawling valid pages from websites with dynamic scripts. First, through training, the approach learns the events, and the Web elements that trigger them, which lead users to the desired Web pages. It then generates XPath patterns for these elements and records the events that need to be triggered. During crawling, only these event and element combinations are considered, which accelerates the crawling. Additionally, the paper demonstrates the efficiency and effectiveness of the approach through extensive experimental evaluation.
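A minimal sketch of deriving a reusable XPath pattern for a trained clickable element (hypothetical page and pattern rule, not the paper's exact generation algorithm), using lxml:

```python
# Hypothetical XPath-pattern derivation from a trained element's tag and class.
from lxml import html

def xpath_pattern(element):
    """Build a reusable pattern from the element's tag and class attribute."""
    cls = element.get("class")
    return f'//{element.tag}[@class="{cls}"]' if cls else f"//{element.tag}"

page = html.fromstring(
    '<div><a class="next-page" onclick="load(2)">next</a>'
    '<a class="next-page" onclick="load(3)">next</a></div>'
)
trained = page.xpath("//a")[0]            # element identified during training
pattern = xpath_pattern(trained)          # '//a[@class="next-page"]'
print(pattern, len(page.xpath(pattern)))  # the pattern re-locates both links
```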
SHAN Bo , JIANG Shou-Xu , ZHANG Shuo , GAO Hong , LI Jian-Zhong
2009, 20(zk):184-192.
Abstract:Community structure in social networks (SNS) can provide interesting information, such as the patterns of social activities between individuals and the trends of social development. Traditional methods that identify communities on static social networks miss interesting laws governing how SNS change. The few methods for modeling and analyzing community structures in dynamic social networks, which have been attracting more and more attention recently, fail to handle large networks in acceptable time. This paper proposes a new incremental method to identify community structure in dynamic social networks. Utilizing the time locality that there is little change between adjacent network snapshots, the method incrementally analyzes social networks to avoid repeatedly partitioning the whole network. Experiments demonstrate that this approach offers orders-of-magnitude performance improvement over state-of-the-art approaches on large-scale networks (10^5 nodes) and produces good community structures that reflect the essence of the SNS.
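A minimal sketch of the time-locality idea (not the paper's algorithm; the majority-label update rule is an assumed stand-in for a real community objective): only nodes incident to edges that changed between snapshots are re-examined, and label changes are allowed to propagate locally.

```python
# Illustrative incremental re-labeling restricted to nodes near changed edges.
from collections import Counter

def incremental_update(adj, community, changed_edges, passes=2):
    """adj: {node: set of neighbors}; community: {node: label}."""
    touched = {u for edge in changed_edges for u in edge}
    for _ in range(passes):
        for node in list(touched):
            labels = Counter(community[nb] for nb in adj.get(node, ()))
            if not labels:
                continue
            best = labels.most_common(1)[0][0]     # majority label of neighbors
            if best != community[node]:
                community[node] = best
                touched |= adj.get(node, set())    # the change may propagate
    return community

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
community = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B"}
print(incremental_update(adj, community, changed_edges=[(3, 4)]))
```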
2009, 20(zk):193-201.
Abstract:This paper presents a novel technique for synthesizing large, high-quality textures in real time. By analyzing the texture periodicity, patches of an optimized size are generated to well represent the variation of exemplar features. During synthesis, patches are first distributed on the output texture with a vacant region left between any pair of neighboring patches in every row and every column, where each vacant region is the same size as a patch. Thereafter, patches are selected to fill the vacant regions in the output texture. Obviously, both patch distribution and vacant-region filling can be executed in parallel. To accelerate the process, a set of matching patches that can be efficiently merged with each patch is constructed, so the computation of patch selection for vacant regions can be simplified into set intersection. Moreover, patch distribution and set intersection are performed on the CPU while patch stitching is executed on the GPU, taking advantage of both. Experimental results show that the presented method can generate a high-quality 1024×1024 texture at over 45 frames per second, which is hard to achieve with existing techniques.
WANG Bin , SUN Zheng-Xing , ZHANG Yan
2009, 20(zk):202-212.
Abstract:To support hairstyling effectively, it is crucial for hairstyling tools to find a tradeoff between realism and interaction. This paper presents an interactive hairstyling method that can handle both the global shape and the local details of a hairstyle. A vector-field-based method is used to generate the global shape of the hairstyle; the paper defines three kinds of hairstyle curves that serve as boundary constraints of the vector field, so that the user can control hairstyle generation interactively. A hair wisp model is also used to represent hairstyles, and several wisp-based editing operations are defined so that users can control the local details of the hairstyle. Experimental results show that the proposed method provides greater interaction and computation efficiency without loss of hair realism.
LI Peng , ZHOU Ming-Quan , LI Juan , LI Nan-Shan
2009, 20(zk):213-220.
Abstract:Constructing a music feature database properly is significant for accurate music retrieval. Within a content-based music retrieval framework, this paper divides database construction methods into pitch extraction, score information, and MIDI analysis methods. It proposes pitch extraction with post-processing and MIDI analysis methods to implement the database construction. Experimental results show that the two methods are correct and effective, and can construct a music database accurately and rapidly.
YAN Chao , MA Li-Zhuang , SHEN Yang
2009, 20(zk):221-230.
Abstract:In this paper, an algorithm to segment the foreground object from a video clip is proposed. A reliability model, calculated from local color pattern information, is presented. First, all frames of the video clip are pre-processed with the watershed algorithm, and graph cut is applied on the key frames. The reliability model is then computed by a bi-directional procedure: through the forward procedure the reliabilities are set, and a small portion of them is corrected through a reverse procedure assisted by an optical flow algorithm. Finally, the video is segmented according to the reliability of each frame. When dealing with videos whose foreground and background colors are similar, the reliability model yields good segmentation results, and the bi-directional procedure provides a way to improve the segmentation of occluded objects in the video.
YANG De-Yun , LI Xiu-Zhen , SANG Sheng-Jü , HOU Ying-Kun , LIU Ming-Xia
2009, 20(zk):231-238.
Abstract:Sampling theory is one of the most powerful results in modern information theory and technology. A digital signal with the sampling property can be perfectly reconstructed from its samples. Walter and Zhou extended the Shannon sampling theorem to wavelet subspaces. This paper improves the classical sampling theorems based on wavelet frames. A basic problem in information theory is introduced here: whether a given digital signal has a sampling series form. In this paper, digital signals with sampling properties are characterized based on wavelet frames, and for a given sampling subspace, the analytic form of the signals in it is given. In particular, some new kinds of sampling subspaces are offered. As an application, examples show that the new theorems improve some known related results, which is effective for digital signal sampling and reconstruction.
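For reference, the classical Shannon sampling series that Walter and Zhou generalized to wavelet subspaces can be written as follows (standard form for a signal band-limited to [-Ω, Ω]):

```latex
% Classical Shannon sampling series for a signal f band-limited to [-\Omega, \Omega].
f(t) \;=\; \sum_{n \in \mathbb{Z}} f\!\left(\frac{n\pi}{\Omega}\right)
           \operatorname{sinc}\!\left(\frac{\Omega t}{\pi} - n\right),
\qquad
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.
```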
PENG Ge , LIN Ya-Ping , YI Ye-Qing
2009, 20(zk):239-249.
Abstract:Most recent false data filtering mechanisms in WSNs attach t MACs (message authentication codes) to data packets; these mechanisms are usually restricted to a t-threshold safety limit and do not support dynamic routing. Based on the idea of a virtual witness cluster and adopting perturbation-based polynomial technology, this paper proposes an authentication algorithm that makes a number of nodes within a cloud cooperate to generate the certification polynomial, increasing the difficulty of an attack. On this basis, the proposed false data filtering mechanism can verify the validity of data immediately and supports dynamic routing. Theoretical analysis and simulation experiments show that the new method is not limited by the t-threshold and saves more energy as the number of transmission hops increases. Compared with other mechanisms, the method enhances the anti-capture ability, so it is more suitable for networks with low credibility and long-distance transmission applications.
LIU Zhi-Xin , ZHENG Qing-Chao , XUE Liang , GUAN Xin-Ping
2009, 20(zk):250-256.
Abstract:In clustering algorithms for wireless sensor networks, to solve the problem of excessive energy consumption at cluster heads, a residual-energy and node-degree synthesized clustering algorithm named ENCA (energy and node degree synthesized clustering algorithm) is proposed in this paper. In the cluster-head election phase of every round, the algorithm considers the residual energy and the average energy of all nodes in each cluster, and an optimal cluster head is elected in each cluster according to node degree. While the algorithm runs, the connectivity of the network is guaranteed and nodes with low energy are avoided as cluster heads. Simulation results show that, in comparison with LEACH and ACE, the ENCA algorithm balances node energy consumption and efficiently prolongs the network lifetime.
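A minimal sketch of an election rule in the spirit of ENCA (the exact weighting is an assumption, not the paper's formula): nodes with residual energy at or above the cluster average are candidates, and among them the best-connected node (highest degree) is chosen.

```python
# Hypothetical cluster-head election combining residual energy and node degree.
def elect_cluster_head(nodes):
    """nodes: list of dicts with 'id', 'energy' (residual) and 'degree'."""
    avg_energy = sum(n["energy"] for n in nodes) / len(nodes)
    candidates = [n for n in nodes if n["energy"] >= avg_energy] or nodes
    return max(candidates, key=lambda n: (n["degree"], n["energy"]))["id"]

cluster = [
    {"id": 1, "energy": 0.9, "degree": 3},
    {"id": 2, "energy": 0.4, "degree": 6},   # well connected but low energy
    {"id": 3, "energy": 0.8, "degree": 5},
]
print(elect_cluster_head(cluster))   # 3: highest degree among high-energy nodes
```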
ZHU Xiao-Jun , LUO Di-Jun , CHEN Gui-Hai
2009, 20(zk):257-265.
Abstract:In wireless sensor networks, the relative localization problem is to infer relative locations instead of absolute locations. This paper considers the one-dimensional relative localization problem in wireless sensor networks and proposes PLO (Proximity List algOrithm), which characterizes each node by its proximity list, a list of all node IDs ordered by distance. A trace-driven analysis verifies that differences in distance are mostly reflected by differences in received RSSI (received signal strength indicator) values; therefore, proximity lists can be obtained by comparing RSSI values. Finally, relative locations are obtained by locating the end nodes of the line topology. This paper shows that the algorithm is feasible in practical situations.
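A minimal sketch of building a proximity list from RSSI readings (hypothetical values; PLO's subsequent list-comparison and end-node location steps are not shown): a stronger received signal is taken as nearer, so sorting neighbor IDs by RSSI yields the ordered list.

```python
# Illustrative proximity list: neighbors ranked by RSSI (higher = nearer).
def proximity_list(rssi_readings):
    """rssi_readings: {neighbor_id: RSSI in dBm}."""
    return [nid for nid, _ in
            sorted(rssi_readings.items(), key=lambda kv: kv[1], reverse=True)]

# Hypothetical readings taken at node 2 of a line topology 1 - 2 - 3 - 4
readings_at_node_2 = {1: -48.0, 3: -50.0, 4: -72.0}
print(proximity_list(readings_at_node_2))   # [1, 3, 4]
```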
XU Chao-Nong , XU Yong-Jun , DENG Zhi-Dong
2009, 20(zk):266-277.
Abstract:Due to its high synchronization error and high power consumption, the classic synchronization scheme of two-way packet exchange is unfit for some applications in wireless sensor networks, especially for networks with a multi-hop linear topology. This paper proposes a time synchronization protocol named Timing-sync Protocol for Linear Sensor Networks (TPLSN). An enhanced two-way packet exchange scheme and a clock skew compensation scheme are the keys to TPLSN. The accumulation of its synchronization error over hop count is also investigated. TPLSN is evaluated on a Mica2-compatible test bed. Its synchronization error is less than 20μs for a node 9 hops away from the time beacon node, the increase of synchronization error with hop count is less than 1μs per hop, and the increase of synchronization error with the resynchronization cycle is 0.017μs per second. Furthermore, to synchronize all nodes in an n-hop linear wireless sensor network, only 2n packets are needed, which is the minimum for any synchronization protocol based on two-way packet exchange. Theoretical analysis shows that three factors, namely the approximation accuracy, the asymmetry of the two-way packet exchange, and the clock skew, greatly influence the time offset between two adjacent nodes. The clock frequency order of the linear network is also found to be vital to the accumulation of synchronization error over hop count.
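For reference, the classic two-way packet-exchange estimate that protocols of this family build on (generic formulas and hypothetical timestamps; TPLSN's enhancements and skew compensation are not shown):

```python
# Classic two-way exchange: A sends at t1, B receives at t2 and replies at t3
# (B's clock), A receives the reply at t4. Link delays are assumed symmetric.
def two_way_exchange(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # B's clock minus A's clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way delay
    return offset, delay

# Hypothetical timestamps in microseconds
offset, delay = two_way_exchange(t1=1000, t2=1530, t3=1580, t4=1120)
print(f"offset = {offset} us, one-way delay = {delay} us")   # 495.0 us, 35.0 us
```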
SHI Ting-Jun , SANG Xia , XU Li-Jie , YIN Xin-Chun
2009, 20(zk):278-285.
Abstract:In the localization process of a WSN, the overhead of the whole network increases as more anchor nodes are added, which brings considerable waste. Thus, in order to achieve more precise localization with fewer anchor nodes, this paper proposes a range-free localization algorithm based on three mobile anchor nodes. The algorithm guarantees that each unknown node can choose anchor nodes that are not far away from itself to perform localization, and constructs an optimization model to maximize the area of doubly covered regions in the network. Simulation results show that this algorithm improves localization precision.
PENG Zhao-Hui , CUI Li-Zhen , WANG Shan , ZHANG Jun , WANG Chang-Liang
2009, 20(zk):286-297.
Abstract:In keyword search over relational databases (KSORD), the retrieval results of a user's initial query are often unsatisfactory. The user has to reformulate the query and execute the new query, which costs much time and effort. In this paper, a method for automatically reformulating user queries by relevance feedback is introduced. The method adopts a vector-space-model-based ranking method to rank KSORD results. Based on the results of user feedback or pseudo feedback, it computes expansion terms probabilistically and reformulates the new query using query expansion. Experimental results verify that after the KSORD system executes the new query, more relevant results are presented to the user.
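A minimal sketch of probabilistic query expansion from feedback results (illustrative weighting, not the paper's exact formula): terms that are frequent in results the user marked relevant but rare in the whole result set are appended to the original keyword query.

```python
# Hypothetical expansion-term scoring from relevance (or pseudo) feedback.
from collections import Counter
import math

def expansion_terms(relevant_docs, all_docs, k=2):
    """Score a term by its frequency among relevant results times an
    inverse-frequency weight over the whole result set; return top-k terms."""
    rel = Counter(w for d in relevant_docs for w in set(d.split()))
    col = Counter(w for d in all_docs for w in set(d.split()))
    score = {w: (rel[w] / len(relevant_docs)) * math.log(len(all_docs) / col[w])
             for w in rel}
    return [w for w, _ in sorted(score.items(), key=lambda kv: kv[1],
                                 reverse=True)[:k]]

results = ["database keyword search ranking",
           "keyword search over relational databases",
           "image search engine"]
marked_relevant = results[:2]             # user (or pseudo) feedback
print("expansion terms:", expansion_terms(marked_relevant, results))
```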
TENG Dong-Xing , DU Yi , MA Cui-Xia , WANG Hong-An , DAI Guo-Zhong
2009, 20(zk):298-305.
Abstract:Combining sketching user interfaces and adaptive techniques with virtual reality (VR) techniques can be of great help to virtual education applications. It can improve the intelligence and friendliness of applications by enhancing the interaction between users and machines, and the effectiveness of the virtual classroom environment is improved at the same time. This paper builds an adaptive sketch-based user interface in a virtual education environment, which is put into use to meet the needs of specific applications. The paper focuses in detail on the context processing mechanism of the sketch user interface for the virtual education environment. A virtual education prototype is then built, and the experimental results verify the feasibility and effectiveness of this application prototype.
SONG Mei-Na , SONG Jun-De , WANG Qian
2009, 20(zk):306-313.
Abstract:Because of the rapid growth of the Internet, informatization in enterprises of almost all fields is growing fast, and efficiently developing high-quality, easy-to-maintain enterprise business information systems has become the trend. Due to the similarities in the functional requirements of different business information systems, and in order to provide efficient and quality-guaranteed services, developers of information systems choose to use common services provided by third parties to develop loosely coupled systems. In this way, developers can devote more energy to building their kernel business logic. However, facing the large number of third-party enterprises and their services, how to perform adaptation and how to guarantee the quality of the services are issues that must be addressed. In this paper, a common service access and integration (CSAI) method for business information systems is proposed. CSAI integrates the common services provided by third-party enterprises and provides a customizable integrated common-service infrastructure. The infrastructure has a uniform, interoperable, quality-guaranteed interface and provides guaranteed services and service operation support for different enterprises and for enterprises in different fields. A detailed explanation of the functions and structure of CSAI is provided, and a common service integration flow abstract description language that can be used for system description and development is proposed. Practical instances are used to assist the analysis.
2009, 20(zk):314-320.
Abstract:Privacy has become a serious concern in applications involving microdata, such as medical data publishing or medical data mining. Anonymization methods based on global recoding, local recoding or clustering provide privacy protection by guaranteeing that each released record is indistinguishable from some other individuals. However, such methods may not always achieve effective anonymization in terms of the analysis workload using the anonymized data, and the utility of attributes has not been well considered in previous methods. This paper studies the problem of utility-based anonymization, concentrating on attribute-order-sensitive workloads, where the order of the attributes is important to the analysis workload. Based on the multidimensional anonymization concept, a method for attribute-order-sensitive utility-based anonymization is discussed. A performance study using public data sets shows that efficiency is not affected by the attribute-order processing.
HOU Meng-Bo , XU Qiu-Liang , GUO Shan-Qing
2009, 20(zk):321-329.
Abstract:Two-party authenticated key agreement protocols are constructed mainly on the basis of traditional public key cryptography and identity-based public key cryptography. Certificateless authenticated key agreement protocols avoid the complexity of identity management in traditional certificate-based schemes as well as the key escrow issue inherent in identity-based schemes. In 2007, Park et al. proposed a certificateless public key encryption scheme that is provably secure against chosen plaintext attacks in the selective-ID security model (IND-sID-CPA). Inspired by this scheme, this paper presents a two-party certificateless authenticated key agreement scheme and compares it with other comparable schemes in terms of security and efficiency. The newly proposed scheme achieves almost all of the desired security attributes, especially perfect forward secrecy, PKG forward secrecy, known session-specific temporary information secrecy, and freedom from key escrow, while keeping good efficiency.
2009, 20(zk):330-335.
Abstract:This paper presents a novel clustering approach for incomplete text data, which is based on a hypergraph model built using set-pair analysis, in which the identical, different and contrary connections of set pairs and the set-pair similarity value are used. After the hypergraph model is set up, a hypergraph partitioning algorithm is used to find clusters. This new method can eliminate disadvantageous factors, decrease the dimensionality of the incomplete text data, and greatly improve the speed and precision of text clustering. The experimental results show that the algorithm is feasible and efficient.