• Volume 19, Issue 10, 2008 Table of Contents
    • LCS and MSC Reasoning of Hybrid Terminological Cycles in Description Logic EL

      2008, 19(10):2483-2497.

      Abstract:This paper analyzes the current research progress on, and the remaining problems of, terminological cycles in description logics. Building on the work of F. Baader, the LCS (least common subsumer) and MSC (most specific concept) reasoning of hybrid terminological cycles in the description logic EL is studied further. The syntax and semantics of hybrid terminological cycles in EL are given. To support LCS and MSC reasoning, a TBox-completion is presented and the description graph is redefined. Using the TBox-completion and the description graph, LCS and MSC reasoning algorithms for hybrid terminological cycles in EL w.r.t. greatest fixpoint semantics are presented. The correctness of the reasoning algorithms is proved, and it is also proved that LCS and MSC reasoning w.r.t. greatest fixpoint semantics can be computed in polynomial time. These algorithms provide a theoretical foundation for LCS and MSC reasoning over hybrid terminological cycles in EL.
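
      The product-graph idea that underlies LCS computation for cyclic EL TBoxes can be sketched briefly. The sketch below is a minimal illustration under assumed inputs: the toy description graph, the node names, and the helper product_graph are not from the paper, and the paper's TBox-completion step is omitted.

```python
# Illustrative sketch: the product of two EL description graphs. Under gfp-semantics,
# the LCS of concepts A and B can be read off the node (A, B) of the product graph;
# this is a simplified, assumption-based model, not the paper's algorithm.

from itertools import product

# A description graph: node -> (set of primitive concept labels,
#                               set of (role, successor-node) edges)
GRAPH = {
    "Tiger": ({"Animal", "Striped"}, {("eats", "Meat")}),
    "Lion":  ({"Animal", "Maned"},   {("eats", "Meat")}),
    "Meat":  ({"Food"},              set()),
}

def product_graph(g):
    """Build the product graph: labels are intersected, edges require a common role."""
    pg = {}
    for u, v in product(g, repeat=2):
        labels = g[u][0] & g[v][0]
        edges = {(r1, (s1, s2))
                 for (r1, s1) in g[u][1]
                 for (r2, s2) in g[v][1] if r1 == r2}
        pg[(u, v)] = (labels, edges)
    return pg

pg = product_graph(GRAPH)
# The node ("Tiger", "Lion") describes the LCS: common labels plus shared role edges.
print(pg[("Tiger", "Lion")])   # ({'Animal'}, {('eats', ('Meat', 'Meat'))})
```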

    • Extended Fuzzy Description Logics with Comparisons Between Fuzzy Membership Degrees

      2008, 19(10):2498-2507.

      Abstract:The representation and application of fuzzy knowledge on the semantic Web often involve comparisons between fuzzy membership degrees. However, current fuzzy extensions of description logics do not support the expression of such comparisons. This paper proposes an extended fuzzy description logic that supports comparisons of fuzzy membership degrees together with the concept constructors of the description logic ALCN (attributive concept description language with complements and number restrictions), written FCALCN (fuzzy comparable ALCN). FCALCN introduces new forms of atomic concepts in order to support comparisons between fuzzy membership degrees. A reasoning algorithm for FCALCN is proposed, and the reasoning problems of FCALCN with an empty TBox are proved to be PSPACE-complete. FCALCN can represent expressive fuzzy knowledge involving comparisons of fuzzy membership degrees on the semantic Web and enables reasoning over it.

    • Two Values Passing CPS Transformation for Call-by-Name Calculus with Constants

      2008, 19(10):2508-2516.

      Abstract:In this paper, a new CPS (continuation-passing style) transformation for Plotkin's call-by-name λ-calculus with constants is proposed. It is based on evaluation-context transformation and the feature that two values, instead of one, are passed to the continuation each time. With suitable encodings, a CPS language is introduced. Then, Plotkin's simulation theorem is proved by establishing a one-to-one correspondence between the source language and the CPS language. Compared with Plotkin's work, a reduction-closed CPS language is defined in which all continuations are explicitly expressed as functional encodings, and it is simpler to prove both the soundness and completeness directions of the simulation theorem.
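
      For readers unfamiliar with CPS, the sketch below shows a standard Plotkin-style call-by-name CPS transform over a small term representation. It is an assumption-based illustration of ordinary single-value CPS, not the paper's two-value-passing variant, and the term encoding is made up.

```python
# Illustrative sketch of a standard Plotkin-style call-by-name CPS transform
# (the paper's two-value-passing variant is not reproduced here).
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a) | ('const', c)

import itertools
_fresh = itertools.count()

def fresh(prefix):
    return f"{prefix}{next(_fresh)}"

def cps(term):
    kind = term[0]
    if kind == "var":                      # variables are already bound to CPS computations (thunks)
        return term
    k = fresh("k")
    if kind == "const":                    # [c] = \k. k c
        return ("lam", k, ("app", ("var", k), term))
    if kind == "lam":                      # [\x.M] = \k. k (\x. [M])
        _, x, body = term
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(body))))
    if kind == "app":                      # [M N] = \k. [M] (\m. m [N] k)
        _, f, a = term
        m = fresh("m")
        return ("lam", k,
                ("app", cps(f),
                 ("lam", m, ("app", ("app", ("var", m), cps(a)), ("var", k)))))
    raise ValueError(kind)

# CPS-transform the term (\x. x) 42
print(cps(("app", ("lam", "x", ("var", "x")), ("const", 42))))
```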

    • Dynamics Analysis Method of Cellular Automata for Complex Networking Storage System

      2008, 19(10):2517-2526.

      Abstract:Inherent dynamic rules are concealed in large-scale network storage systems on account of the complexity of data-transmission behaviors. This paper studies object-based storage systems and proposes two storage cellular-automata models, SNCA and OSDCA, which address the macro and micro levels respectively; these models can be used to analyze the behaviors and rules of complex, dynamic networked storage by capturing the intelligent and active properties of storage objects. In the SNCA model, the lifetime attribute of a storage object is used to analyze the data-flow rules in the storage network and to ascertain the degree of congestion at the macro level, based on a special lattice network topology; simulation results show that the data-object flow is globally correlated with the phase transition of the data flow. In the OSDCA model, data migration and replication mechanisms are combined to analyze the hotspot-migration rule based on the load distribution among storage nodes; simulation results show that the data distribution in the OBS system exhibits self-organizing characteristics.

    • An Approach to Solve Deadlock Problem of Component Connection

      2008, 19(10):2527-2538.

      Abstract:Procedure-call-based component connections that are latent in hard code not only restrict the flexibility of software but also threaten its reliability because of potential deadlock connection loops. To solve this problem, first, a formal semantic model called the call-based connector is built, which explicitly separates connection from components. Second, mapping rules used to convert call-based connectors into the component connection directed graph are proposed. Then, two algorithms are provided: TPDCC (two-phase deadlock connection check), which finds all existing deadlock connection loops, and DCEMRF (deadlock connection elimination based on maximum reuse frequency), which finds the locations with the least number of connections that must be removed to eliminate the loops. Finally, an application and experimental results show that the presented approach is feasible and effective; it can be used to enhance software reliability and, because component connections are described and stored separately from component semantics, it can also serve as a basis for the further design and implementation of adaptive connectors.
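
      Deadlock connection loops correspond to cycles in the component connection directed graph. The sketch below is a generic DFS cycle search under an assumed adjacency-list encoding; it is not the paper's two-phase TPDCC algorithm.

```python
# Illustrative sketch: deadlock connection loops are cycles in the component connection
# directed graph; a simple DFS enumerates them. (Rotations of the same loop may be
# reported more than once; the paper's TPDCC algorithm is not reproduced here.)

def find_connection_loops(graph):
    """graph: component -> list of components it calls. Returns cycles found by DFS."""
    loops, path, on_path = [], [], set()

    def dfs(node):
        path.append(node)
        on_path.add(node)
        for callee in graph.get(node, []):
            if callee in on_path:                       # back edge closes a loop
                loops.append(path[path.index(callee):] + [callee])
            else:
                dfs(callee)
        on_path.discard(path.pop())

    for component in graph:
        dfs(component)
    return loops

calls = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
print(find_connection_loops(calls))   # e.g. [['A', 'B', 'C', 'A'], ...]
```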

    • Design Calculus Based Approach to Modeling Use Case

      2008, 19(10):2539-2549.

      Abstract:This paper proposes a formal approach to modeling use cases that captures requirements from multiple views: class diagrams, use case sequence diagrams, use case state diagrams, specification mappings, and the system invariant. By defining the formal semantics of these views, each aspect of the requirements is given an exact formal description. As a result, an integrated specification of a method can be built by integrating the formal descriptions of its interaction specification and its functional specification. At the same time, properties of use case models can be specified and analyzed through proofs in the design calculus. As an application, rules for checking the consistency of use case models are studied. An example is given to illustrate the feasibility of the proposed method.

    • Value Equality Analysis in C Program API Conformance Validation

      2008, 19(10):2550-2561.

      Abstract:In the attempt to apply static analysis to API (application programming interface) conformance validation, a gap is noticed between the numeric-value-based temporal properties in API specifications and the variable-symbol-based predicates extracted through static analysis. Therefore, this paper first investigates value-equality relations between variable symbols in C programs, and then designs an ECS (equality class space) based value-equality analysis method. This flow-sensitive method maintains the correspondence between the variable-symbol domain and the numeric-value domain during API conformance validation and, in addition, effectively supports the optimization of subsequent analyses by hiding all information irrelevant to value-equality relations.
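
      The abstract does not spell out the ECS construction; a minimal union-find sketch, with illustrative variable names, shows how variable symbols can be grouped into value-equality classes.

```python
# Illustrative sketch: grouping C variable symbols into value-equality classes with
# union-find, as a stand-in for the paper's ECS (equality class space) idea.

class EqualityClasses:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:                     # path-halving find
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):                             # record "a == b" (e.g. after an assignment a = b;)
        self.parent[self.find(a)] = self.find(b)

    def same_value(self, a, b):
        return self.find(a) == self.find(b)

ecs = EqualityClasses()
ecs.union("fd", "ret")        # e.g. from:  fd = open(...); ret = fd;
print(ecs.same_value("fd", "ret"))   # True -> an API property stated over 'fd' also covers 'ret'
```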

    • FJ Extended Calculus for Multi-Version Class Dynamic Update

      2008, 19(10):2562-2572.

      Abstract:Aiming at the problem of type safety in dynamically updating object-oriented (O-O) software, a simple formal system, the MCUFJ calculus (multi-version class dynamic update calculus based on FJ (Featherweight Java)), is established with the goal of understanding the underlying foundations of updating classes dynamically. MCUFJ is formulated as an extension of the core Featherweight Java calculus with an update operator. Multi-version classes allow objects of different versions to coexist. This study also discusses which kinds of change are type-safe, such as adding, deleting, or modifying methods and fields, or changing their types, and derives restrictions for type-safe updating. The paper also proves these results formally. The calculus can serve as a foundation for dynamic update in Java and other O-O languages.

    • Main Memory Database TPC-H Workload Characterization on Modern Processor

      2008, 19(10):2573-2584.

      Abstract:In 1999, Ailamaki et al. analyzed the execution-time breakdown of database systems on modern computer platforms. The primary motivation of those studies was to improve the performance of disk-resident databases (DRDBs), which remain the mainstream of database systems, and the typical benchmark used was TPC-C. However, continuing hardware advances have "moved up" the memory hierarchy: on-chip and off-chip caches keep growing, RAM space increases steadily, and huge flash memory (solid-state disks) is commercially available on top of regular disks. To reflect this trend, the target of workload-characterization research also moves up the memory hierarchy. This paper focuses on main-memory databases (MMDBs) and the TPC-H benchmark. Unlike DRDB performance, which is I/O bound and may be optimized by high-level mechanisms such as indexing, MMDB performance is basically CPU and memory bound. The paper first compares the execution-time breakdown of a DRDB and an MMDB and proposes a strategy to optimize memory-resident aggregation. Then it explores the difference between column-oriented and row-oriented storage models in CPU and cache utilization. Furthermore, it measures MMDB performance on different generations of CPUs. In addition, it analyzes the influence of indexes and gives a strategy for main-memory database index optimization. Finally, it analyzes each query of the full TPC-H benchmark in detail and obtains systematic results, which help design micro-benchmarks for further analysis of CPU cache stalls. The results of this study are expected to benefit the performance optimization of MMDBs and the architectural design of next-generation memory-oriented databases.
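
      As a rough illustration of the row-versus-column storage point, the sketch below runs the same aggregate over a row layout and a column layout. The data and attribute names are made up, and the point is only that the columnar scan touches contiguous per-attribute arrays, which is what makes memory-resident aggregation largely CPU and cache bound.

```python
# Illustrative sketch: the same aggregate over a row store vs a column store.

from array import array

# Row-oriented: list of (orderkey, quantity, price) tuples.
rows = [(i, i % 50, float(i % 100)) for i in range(100_000)]

# Column-oriented: one contiguous array per attribute.
quantity = array("l", (r[1] for r in rows))
price    = array("d", (r[2] for r in rows))

def agg_row_store():
    return sum(r[2] for r in rows if r[1] > 24)                  # scans whole tuples

def agg_column_store():
    return sum(p for q, p in zip(quantity, price) if q > 24)     # scans two tight arrays

assert agg_row_store() == agg_column_store()
```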

    • Mining the Frequent Patterns in an Arbitrary Sliding Window over Online Data Streams

      2008, 19(10):2585-2596.

      Abstract:Because of the fluidity and continuity of data streams, the knowledge embedded in stream data is likely to change as time goes by. Thus, in most data-stream applications, people are more interested in information about recent transactions than about old ones. This paper proposes a method for mining the frequent patterns in an arbitrary sliding window over a data stream. As the stream flows, its contents are captured in a compact prefix-tree by scanning the stream only once, and obsolete or infrequent items are deleted by periodically pruning the tree. To differentiate the patterns of recently generated transactions from those of historic transactions, a time-decaying model is also applied. Extensive simulations are conducted, and the experimental results show that the proposed method is efficient and scalable, and superior to other analogous algorithms.
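
      The time-decaying model can be illustrated with a flat-counter sketch (the paper itself maintains a compact prefix-tree of patterns); the decay factor and pruning threshold below are assumed example values.

```python
# Illustrative sketch of the time-decaying count model: older transactions contribute
# exponentially less, and items below the support threshold are pruned.
# (The paper keeps patterns in a compact prefix-tree; a flat counter is used here.)

DECAY = 0.98          # assumed per-transaction decay factor
MIN_SUPPORT = 0.5     # assumed pruning threshold on decayed counts

counts = {}

def process(transaction):
    for item in counts:                  # age every existing count
        counts[item] *= DECAY
    for item in transaction:             # then add the new transaction
        counts[item] = counts.get(item, 0.0) + 1.0
    for item in [i for i, c in counts.items() if c < MIN_SUPPORT]:
        del counts[item]                 # periodic pruning of infrequent items

for t in [{"a", "b"}, {"a", "c"}, {"a"}, {"b"}]:
    process(t)
print(counts)   # 'a' dominates; rarely-seen items decay away over time
```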

    • Credible Association Rule and Its Mining Algorithm Based on Maximum Clique

      2008, 19(10):2597-2610.

      Abstract:Existing association-rule mining algorithms mainly rely on the support-based pruning strategy to prune the combinatorial search space. This strategy is not very effective for mining potentially interesting low-support patterns. To solve this problem, the paper presents a new kind of association pattern called the credible association rule (CAR), in which every item has the same support level, so that confidence directly reflects the credibility of the rule instead of the traditional support. The paper also proposes a MaxCliqueMining algorithm, which creates 2-item credible sets via an adjacency matrix and then generates all rules based on maximum cliques. Several propositions are proved that show the properties of CARs and the feasibility and validity of the algorithm. Experimental results on an alarm dataset and the Pumsb dataset demonstrate the effectiveness and accuracy of this method for finding CARs.
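
      A minimal sketch of the adjacency-then-clique idea: item pairs whose mutual confidence passes a threshold form edges, and maximal cliques of that graph are candidate credible rule sets. The Bron-Kerbosch enumeration, the threshold, and the toy transactions below are stand-in assumptions, not the paper's MaxCliqueMining details.

```python
# Illustrative sketch: build an adjacency relation from 2-item credible pairs, then
# enumerate maximal cliques with a standard Bron-Kerbosch search.

from itertools import combinations

TRANSACTIONS = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
MIN_CONF = 0.6

def confidence(x, y):
    with_x = [t for t in TRANSACTIONS if x in t]
    return sum(y in t for t in with_x) / len(with_x)

items = sorted(set().union(*TRANSACTIONS))
adj = {i: set() for i in items}
for x, y in combinations(items, 2):                    # 2-item credible pairs
    if confidence(x, y) >= MIN_CONF and confidence(y, x) >= MIN_CONF:
        adj[x].add(y); adj[y].add(x)

def bron_kerbosch(r, p, x):                            # maximal cliques of the pair graph
    if not p and not x:
        yield r
    for v in list(p):
        yield from bron_kerbosch(r | {v}, p & adj[v], x & adj[v])
        p.remove(v); x.add(v)

print([sorted(c) for c in bron_kerbosch(set(), set(items), set())])   # [['a', 'b', 'c']]
```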

    • Structural Query Expansion Based on Weighted Query Term for XML Documents

      2008, 19(10):2611-2619.

      Abstract:The main reason for low precision in information retrieval (IR) is that it is difficult for users to submit a precise query expression for their query intentions. Furthermore, XML documents have characteristics not only in their content but also in their structure, which makes it even more difficult to submit precise query expressions. To solve this problem, this paper puts forward a new query expansion method based on relevance feedback that helps users construct a content-and-structure query expression satisfying their intentions. The method includes two steps. The first step expands keywords to find the weighted keywords that represent the user's intentions. The second step performs structural expansion based on the weighted keywords. Finally, a full-fledged content-and-structure query is formed. Experimental results show that the method obtains better retrieval results: the average precision at prec@10 and prec@20 is 30% higher than with the original query.

    • Service Selection Approach Considering the Trustworthiness of QoS Data

      2008, 19(10):2620-2627.

      Abstract:As the number of web services with similar functions increases, QoS-based service selection at runtime has become an important research topic. Existing QoS-based service selection approaches usually assume that the QoS data coming from service providers and users are valid and trustworthy, which is often not the case in reality. This paper proposes a service selection approach that considers the trustworthiness of QoS data and classifies and computes the QoS attributes according to the source of the data. For QoS attributes whose data come from service providers, statistics of past runtime data are used to revise the providers' QoS claims. For QoS attributes whose data come from users, feedback similarity is used to weight users' QoS data. Furthermore, an implementation framework and a set of experiments are given, which show that this approach can effectively weaken the influence of untrustworthy QoS data on service selection and thus strengthen the accuracy of the selection.
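
      A minimal sketch of the feedback-similarity idea: user-reported QoS values are weighted by how similar each reporter's past feedback is to the current user's. The cosine measure and the toy numbers are assumptions, not the paper's formulas.

```python
# Illustrative sketch of feedback-similarity weighting for user-reported QoS data.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Past feedback vectors (e.g. ratings of common services) and reported response times.
current_user = [4, 5, 3]
reports = [
    {"feedback": [4, 5, 2], "response_time": 120},     # similar user, weighted more
    {"feedback": [1, 1, 5], "response_time": 800},     # dissimilar user, weighted less
]

weights = [cosine(current_user, r["feedback"]) for r in reports]
estimate = sum(w * r["response_time"] for w, r in zip(weights, reports)) / sum(weights)
print(round(estimate, 1))   # similarity-weighted estimate of the service's response time
```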

    • ServLoc: Secure Location Service for Wireless Sensor and Actuator Network

      2008, 19(10):2628-2637.

      Abstract:To solve the problem of secure location service, this paper proposes a range-free secure localization protocol, the ServLoc localization protocol. By authenticating messages, having hidden actors passively receive localization requests, and filtering false location reports, the ServLoc localization protocol can defend against location attacks, preserve location privacy, and locate sensor nodes in a distributed way. In addition, this paper proposes a voting-based location verification scheme, the ServLoc verification protocol, together with ways to defend against actor attacks. The analysis illustrates that the ServLoc verification protocol can balance security and effective localization in WSANs (wireless sensor and actuator networks). The ServLoc scheme can provide secure location service effectively even when the WSAN is under location attacks.

    • A Method for Multipath Routing Based on Network Coding in Wireless Sensor Network

      2008, 19(10):2638-2647.

      Abstract:Reliability is crucial in many wireless sensor network (WSN) applications. Most existing approaches are redundancy-based, employing multi-path or retransmission schemes; however, such designs often waste energy and thus shorten the network lifetime. To address this issue, this paper proposes an energy-aware method that employs a network-coding scheme on top of multi-path routing. By encoding a group of data into independent new packets and transmitting them along multiple paths, the method offsets the effect of link failures with little extra overhead. Another strength of the design is that it only needs small-scale linear operations. An approximate method to estimate the number of paths needed is also employed. Comprehensive simulations verify the theoretical results of the paper.
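
      The coding step can be sketched over GF(2): k source packets are combined with random 0/1 coefficients and sent along different paths, and a receiver can decode once it holds k linearly independent combinations. The field choice, packet sizes, and path counts below are assumptions (practical schemes often work over GF(2^8)), not the paper's construction.

```python
# Illustrative sketch of network coding over GF(2): source packets are XOR-combined with
# random 0/1 coefficients; any k linearly independent coded packets suffice to decode.

import random

def encode(packets):
    """Combine equal-length source packets with random GF(2) coefficients."""
    coeffs = [random.randint(0, 1) for _ in packets]
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def rank_gf2(matrix):
    """Rank over GF(2) of the coefficient rows; decoding needs rank == k."""
    rows = [row[:] for row in matrix]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

source = [b"DATA-A__", b"DATA-B__", b"DATA-C__"]       # k = 3 equal-length packets
coded = [encode(source) for _ in range(5)]             # send 5 coded packets over 5 paths
received = coded[:4]                                   # suppose one path fails
decodable = rank_gf2([c for c, _ in received]) == len(source)
print("group recoverable despite a lost path:", decodable)
```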

    • Multicast Fast Handover Algorithm Based on Neighbor Information Exchange

      2008, 19(10):2648-2658.

      Abstract:Handover latency and packet loss rate are important criteria that determine whether a mobile multicast algorithm can support real-time multicast applications. This paper proposes a multicast fast handover algorithm based on neighbor information exchange (M-FMIPv6/NIE). Before the L2 trigger event occurs, M-FMIPv6/NIE can configure the new care-of address (nCoA) and request the new access router (AR) to join the multicast tree. Performance analysis and simulation results indicate that the multicast service disruption time of M-FMIPv6/NIE is shorter than that of existing multicast fast handover algorithms, and that it performs well in terms of buffer size and packet loss rate.

    • Rectangle and Boomerang Attacks on DES

      2008, 19(10):2659-2666.

      Abstract:In spite of being replaced by the AES (advanced encryption standard), DES (data encryption standard) still plays an important role as an encryption standard; DES and triple DES are still widely used in many areas, especially in the financial sector. Recently, some new cryptanalytic techniques have been introduced, of which the Rectangle attack and the Boomerang attack have proved to be very powerful. It is therefore necessary to re-evaluate the effects these techniques may have on DES. This paper examines the strength of DES against the Rectangle attack and the Boomerang attack. Using the best differential characteristics of DES, the paper obtains an attack on up to 12-round DES with the Rectangle attack and an attack on 11-round DES with the Boomerang attack. The Rectangle attack on 12-round DES requires 2^62 chosen plaintexts with a time complexity equivalent to 2^42 12-round encryptions, while the Boomerang attack on 11-round DES requires 2^58 adaptively chosen plaintexts and ciphertexts with a time complexity equivalent to 2^38 11-round encryptions. Because the differential characteristics used in the attacks are the best ones, these are believed to be the best results that the Rectangle attack and the Boomerang attack can achieve on DES.

    • An Integrated Indexing Structure for Large-Scale Cross-Media Retrieval

      2008, 19(10):2667-2680.

      Abstract:This paper proposes a novel integrated indexing structure for large-scale cross-media retrieval. First, a cross reference graph (CRG) is generated by hyperlink analysis of webpages, which supports cross-media retrieval; the CRG is then refined through users' relevance feedback. Retrieval proceeds as follows: when the user submits a query media object, candidate media objects are quickly identified by searching the cross reference graph, and distance computation over the candidate vectors then yields the answer set. Analysis and experimental results show that the performance of the algorithm is superior to that of sequential scan.

    • A Conceptual Framework for Developing Adaptive Pen-Based User Interface

      2008, 19(10):2681-2693.

      Abstract:This paper presents CFAPUI, a conceptual framework for developing adaptive pen-based user interfaces. CFAPUI proposes an architecture for adaptive pen-based applications and a set of guidelines to assist their design process. To demonstrate CFAPUI's validity, the development process of an adaptive pen-based form application is illustrated.

    • U-Tree Based Indexing Method for Uncertain Moving Objects

      2008, 19(10):2696-2705.

      Abstract:This paper proposes a novel U-tree based indexing technique that addresses the problem of managing uncertain data in a constantly evolving environment. The technique, called the TPU-tree, is capable of indexing moving objects with uncertainty in multi-dimensional spaces. Along with data models capturing temporal and spatial uncertainty, a modified p-bound based range query algorithm (MP_BBRQ) for probabilistic queries is also developed. Experimental evaluations demonstrate that the TPU-tree supports queries on uncertain moving objects efficiently, yields good update performance even under frequent updates, and has practical value.

    • Review Articles
    • P2P Web Search Technology

      2008, 19(10):2706-2719.

      Abstract:The Web search engine has become a very important tool for finding information efficiently in the massive Web data. With the explosive growth of Web data, traditional centralized search engines find it increasingly hard to keep up with people's growing information needs. With the rapid development of peer-to-peer (P2P) technology, the notion of P2P Web search has been proposed and has quickly become a research focus. The goal of this paper is to give a brief summary of current P2P Web search technologies in order to facilitate future research. First, the main challenges for P2P Web search are presented. Then, key techniques for building a feasible and efficient P2P Web search engine are reviewed, including system topology, data placement, query routing, index partitioning, collection selection, relevance ranking, and Web crawling. Finally, three recently proposed P2P Web search prototypes are introduced.

    • A Method for Code Safety Proof Construction

      2008, 19(10):2720-2727.

      Abstract:This paper proposes a new method for proof construction whose basic idea is to construct proofs with auxiliary recursive functions in the foundational logic. In this way, the workload of proof construction and the size of the constructed proof can be reduced while the trusted computing base stays the same. The paper also illustrates how to adapt this method to a type-based FPCC (foundational proof-carrying code) system, where the safety proof can be constructed automatically. All of this work is implemented in the proof assistant Coq.

    • A High Performance XML Parser OnceXMLParser

      2008, 19(10):2728-2738.

      Abstract:An XML (extensible markup language) parser is fundamental software for analyzing and processing XML documents. This paper studies the implementation of a high-performance, fully validating XML parser and develops OnceXMLParser, which supports three kinds of parsing models and passes rigorous XML conformance testing and API (application programming interface) conformance testing. OnceXMLParser adopts a lightweight architecture and is optimized in many aspects, including efficient lexical analysis, a statistical automaton implementation, reasonable resource allocation strategies, and fine tuning at the language level. Performance testing results show that OnceXMLParser has outstanding parsing efficiency.

    • An Identity (ID)-Based Threshold Multi-Secret Sharing Scheme

      2008, 19(10):2739-2745.

      Abstract:In order to avoid the flaws of the secret-shadow distribution methods in existing secret sharing schemes, a secret-shadow distribution method that integrates ID-based public key technology is proposed, which uses each participant's private key as his master shadow. Firstly, a security analysis is made of Zheng's signcryption scheme, which shows that the scheme does not offer forward secrecy. Then, an improvement of Zheng's signcryption scheme is made and a new scheme is proposed. Based on the proposed signcryption scheme and the ID-based public key cryptosystem, a new threshold multi-secret sharing scheme is proposed. The problem of secret-shadow distribution is well resolved, and no information exchange is needed between the secret dealer and the participants in advance; the secret shadows can be distributed during secret distribution. At the same time, the proposed scheme offers forward secrecy: even if the private key of the secret dealer is exposed, the security of the shared secrets is not threatened. Therefore, the proposed ID-based secret sharing scheme is more secure and efficient than others and more widely applicable.
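
      The threshold-reconstruction idea that such schemes build on can be illustrated with standard Shamir (t, n) sharing over a prime field; the sketch below is not the paper's ID-based construction, and the modulus and parameters are assumed example values.

```python
# Illustrative sketch of standard Shamir (t, n) threshold sharing over a prime field
# (NOT the paper's ID-based multi-secret scheme; prime and parameters are assumptions).

import random

P = 2**127 - 1                     # a Mersenne prime used as the field modulus

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):                      # evaluate the degree-(t-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):           # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares recover the secret
```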

    • An Intrusion Response Method Based on Attack Graph

      2008, 19(10):2746-2753.

      Abstract:System incentives and changes in the attacker's strategy are not taken into full consideration in current intrusion response research. An intrusion response model based on the attack graph (IRAG, intrusion response based on attack graph) is proposed to solve this problem. The IRAG model deals well with the attacker's intent and strategy changes, and comprehensively takes into account the incentives of both the system and the attacker. Experimental results show that the IRAG model effectively improves the accuracy and effectiveness of alert response.

    • Negotiation-Based Dynamic Role Transition in Data Access across Multi-VOs

      2008, 19(10):2754-2761.

      Abstract:This paper proposes an approach to data access across multiple VOs (virtual organizations) that uses dynamic role transition under a role-based access control model in the data grid. A negotiation mechanism is introduced to establish data access based on the mutual trust of the VOs, which also sets up the role transition relationship between them. Once the negotiation succeeds, the dynamic role transition is controlled by several local policies according to the role held by the user.

    • Performance Analysis Based on Packet Arrival Rate for the IEEE 802.11 DCF

      2008, 19(10):2762-2769.

      Abstract:A performance analysis model based on the packet arrival rate is proposed for the IEEE 802.11 DCF (distributed coordination function) in infrastructure-based WLANs (wireless local area networks). The model not only takes into account the number of terminals, the traffic load, and the binary exponential backoff mechanism, but also analyzes the impact of the finite queue at the MAC (medium access control) layer. Each terminal is modeled as an M/M/1/K queue, and a virtual time slot is introduced to discretize the state of the queue system. On this basis, a three-dimensional discrete-time Markov chain is used to model the system, from which the normalized throughput, packet delay, and packet loss ratio are obtained. Simulation results show that the proposed model can validly predict the performance of the DCF under various packet arrival rates.
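
      The per-terminal M/M/1/K queue has a closed-form stationary distribution, from which the blocking probability (packets dropped at a full MAC queue) follows directly. The arrival rate, service rate, and queue size below are assumed example values, not the paper's settings.

```python
# Illustrative sketch of the per-terminal M/M/1/K queue: stationary probabilities
# p_n = (1 - rho) * rho**n / (1 - rho**(K + 1)) and the blocking probability p_K,
# which corresponds to packets dropped when the finite MAC queue is full.

def mm1k_distribution(lambda_, mu, K):
    rho = lambda_ / mu
    if rho == 1.0:                              # degenerate case: uniform over 0..K
        return [1.0 / (K + 1)] * (K + 1)
    norm = 1 - rho ** (K + 1)
    return [(1 - rho) * rho ** n / norm for n in range(K + 1)]

probs = mm1k_distribution(lambda_=40.0, mu=50.0, K=10)   # assumed arrival/service rates, queue size
print("blocking (loss) probability:", round(probs[-1], 4))
print("mean queue length:", round(sum(n * p for n, p in enumerate(probs)), 3))
```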

    • Cross-Entropy Method Based on Markov Decision Process for Optimal Software Testing

      2008, 19(10):2770-2779.

      Abstract:This paper demonstrates an approach to optimizing software testing by minimizing the expected cost for given software parameters of concern. Taking the software testing process as a Markov decision process, a Markov decision model of software testing is proposed, and by using a learning strategy based on the cross-entropy method to optimize testing, the optimal testing profile is obtained. Simulation results show that the testing profile found by the learning strategy performs significantly better than random testing with respect to the expected cost. Moreover, the learning strategy is practical and significantly reduces the number of test cases required to detect and remove a given number of software defects in comparison with random testing.
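
      A generic cross-entropy learning loop over a testing profile (a probability distribution over test-action classes) looks roughly as follows; the toy cost function stands in for the paper's Markov-decision-process model, and all parameters are assumptions.

```python
# Illustrative sketch of the cross-entropy learning loop: candidate testing profiles are
# sampled, the elite low-cost fraction is kept, and the sampling distribution is re-fit.

import random

ACTIONS = 4                                   # number of test-action classes (assumed)
TRUE_DEFECT_PROFILE = [0.1, 0.5, 0.3, 0.1]    # hidden defect exposure rates (assumed)

def expected_cost(profile, tests=200):
    # Toy cost: tests spent where defects are rare are wasted effort.
    return sum(tests * p * (1.0 - d) for p, d in zip(profile, TRUE_DEFECT_PROFILE))

def sample_profile(mean):
    weights = [max(1e-6, random.gauss(m, 0.1)) for m in mean]
    total = sum(weights)
    return [w / total for w in weights]

mean = [1.0 / ACTIONS] * ACTIONS              # start from the uniform testing profile
for _ in range(50):
    candidates = sorted((sample_profile(mean) for _ in range(100)), key=expected_cost)
    elites = candidates[:10]                  # keep the elite fraction
    mean = [sum(p[i] for p in elites) / len(elites) for i in range(ACTIONS)]

print("learned testing profile:", [round(m, 2) for m in mean])
```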

    • Research on Development Tools for Collaborative Pen-Based User Interfaces

      2008, 19(10):2780-2788.

      Abstract:The pen-based user interface is a primary post-WIMP (window, icon, menu, pointer) interface that provides natural interaction for users. However, current toolkits for constructing pen-based user interfaces are built for single-user tasks and cannot support collaborative work well. This paper analyzes the features of pen-based interaction and collaborative environments, and then discusses the design and implementation of a toolkit, CoPen Toolkit, that supports the development of collaborative pen-based user interfaces. CoPen Toolkit offers a flexible architecture and extensible components that support ink-based data, event processing, networked collaboration, etc. Several typical prototypes have been built on CoPen Toolkit, and the results show that it supports the construction and fast prototyping of collaborative pen-based user interfaces.
