JIANG Yun-Cheng, WANG Ju, ZHOU Sheng-Ming, TANG Yong
Abstract: Current research progress on terminological cycles in description logics and the existing problems are analyzed in this paper. Based on the work of F. Baader, the LCS (least common subsumer) and MSC (most specific concept) reasoning for hybrid terminological cycles in the description logic EL is studied further. The syntax and semantics of hybrid terminological cycles in EL are given. To support LCS and MSC reasoning over hybrid terminological cycles, a TBox-completion is presented and the description graph is redefined. Using the TBox-completion and the description graph, LCS and MSC reasoning algorithms for hybrid terminological cycles in EL with respect to greatest fixpoint semantics are presented. The correctness of the reasoning algorithms is proved, and it is also proved that LCS and MSC reasoning with respect to greatest fixpoint semantics can be computed in polynomial time. The reasoning algorithms provide a theoretical foundation for LCS and MSC reasoning over hybrid terminological cycles in the description logic EL.
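As background only (not the authors' hybrid-TBox algorithm), the sketch below illustrates the product-of-description-graphs construction that underlies Baader-style LCS computation in EL under greatest fixpoint semantics; the graph encoding and the toy TBox are assumptions made for illustration.

```python
# Minimal sketch: the product of two EL description graphs, from which an LCS
# w.r.t. gfp semantics can be read off.  A description graph maps each defined
# concept (node) to a set of concept names (its label) and a set of
# (role, successor) edges.  This is not the paper's hybrid-TBox algorithm.

def product_graph(g1, g2):
    """g1, g2: dict node -> (set of concept names, set of (role, successor))."""
    prod = {}
    for u, (label_u, edges_u) in g1.items():
        for v, (label_v, edges_v) in g2.items():
            label = label_u & label_v                    # shared concept names
            edges = {(r, (s1, s2))                       # shared role edges
                     for (r, s1) in edges_u
                     for (r2, s2) in edges_v if r == r2}
            prod[(u, v)] = (label, edges)
    return prod

# Toy TBox (assumed): the LCS of A and B is read off at node ('A', 'B').
g = {"A": ({"Person"}, {("hasChild", "A")}),
     "B": ({"Person", "Tall"}, {("hasChild", "B")})}
print(product_graph(g, g)[("A", "B")])
```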
KANG Da-Zhou, XU Bao-Wen, LU Jian-Jiang, LI Yan-Hui
Abstract: The representation and application of fuzzy knowledge on the semantic Web often involve comparisons between fuzzy membership degrees. However, current fuzzy extensions of description logics do not support the expression of such comparisons. This paper proposes an extended fuzzy description logic, written FCALCN (fuzzy comparable ALCN), that supports comparisons of fuzzy membership degrees together with the concept constructors of the description logic ALCN (attributive concept description language with complements and number restrictions). FCALCN introduces new forms of atomic concepts in order to support comparisons between fuzzy membership degrees. A reasoning algorithm for FCALCN is proposed, and the reasoning problems of FCALCN with an empty TBox are proved to be PSPACE-complete. FCALCN can represent expressive fuzzy knowledge involving comparisons of fuzzy membership degrees on the semantic Web and enables reasoning over such knowledge.
Abstract: In this paper, a new CPS (continuation-passing-style) transformation for Plotkin's call-by-name λ-calculus with constants is proposed. It is based on the transformation of evaluation contexts and on the feature that two values, instead of one, are passed to the continuation each time. With suitable encodings, a CPS language is introduced. Plotkin's simulation theorem is then proved by establishing a one-to-one correspondence between the source language and the CPS language. Compared with Plotkin's work, a reduction-closed CPS language is defined in which all continuations are explicitly expressed as functional encodings, and it is simpler to prove both the soundness and completeness directions of the simulation theorem.
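For orientation only (this is not the paper's two-value transform), a minimal sketch of the classical Plotkin-style call-by-name CPS transform over a tiny, assumed lambda-calculus AST:

```python
# Minimal sketch: Plotkin's call-by-name CPS transform on an assumed AST.
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a) | ('const', c)
import itertools
_fresh = itertools.count()

def fresh(prefix):
    return f"{prefix}{next(_fresh)}"

def cps(term):
    tag = term[0]
    if tag == 'var':                  # [x] = x (variables denote computations)
        return term
    if tag == 'const':                # [c] = \k. k c
        k = fresh('k')
        return ('lam', k, ('app', ('var', k), term))
    if tag == 'lam':                  # [\x.M] = \k. k (\x. [M])
        _, x, body = term
        k = fresh('k')
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    if tag == 'app':                  # [M N] = \k. [M] (\m. m [N] k)
        _, f, a = term
        k, m = fresh('k'), fresh('m')
        return ('lam', k,
                ('app', cps(f),
                 ('lam', m, ('app', ('app', ('var', m), cps(a)), ('var', k)))))
    raise ValueError(tag)

# (\x. x) 42: applying the result to an identity continuation yields 42.
print(cps(('app', ('lam', 'x', ('var', 'x')), ('const', 42))))
```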
CHEN Jin-Cai, HE Ping, GE Xiong-Zi
Abstract: The complexity of data transmission behavior conceals some inherent dynamic rules in large-scale network storage systems. This paper studies object-based storage systems and proposes two storage cellular automata models, SNCA and OSDCA, built from the macro and micro perspectives respectively; by capturing the intelligent and autonomous properties of storage objects, these models can be used to analyze the behavior and rules of complex, dynamic network storage. In the SNCA model, the lifetime attribute of storage objects is used to analyze data-flow rules in the storage network and to determine the degree of congestion from the macro perspective, based on a special lattice network topology; simulation results show that the flow of data objects is globally correlated with the phase transition of the data flow. In the OSDCA model, data migration and replication mechanisms are combined to analyze hotspot migration rules based on the load distribution among storage nodes; simulation results show that the data distribution in the object-based storage system exhibits self-organizing characteristics.
MAO Fei-Qiao, QI De-Yu, LIN Wei-Wei
Abstract: Procedure-call-based component connections hidden in hard code not only restrict the flexibility of software but also threaten its reliability because of deadlock connection loops. To solve this problem, a formal semantic model called the call-based connector is first built, which explicitly separates connections from components. Second, mapping rules for converting call-based connectors into a component connection directed graph are proposed. Then two algorithms are provided: TPDCC (two-phase deadlock connection check), which finds all existing deadlock connection loops, and DCEMRF (deadlock connection elimination based on maximum reuse frequency), which finds the smallest set of connections that must be eliminated to remove the loops. Finally, an application and experimental results show that the presented approach is feasible and effective; it can be used to enhance software reliability and, because component connections are described and stored separately at the semantic level, it can also serve as a basis for the further design and implementation of adaptive connectors.
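As a rough illustration (not TPDCC itself), the sketch below detects connection loops, i.e. directed cycles found via DFS back edges, in an assumed adjacency-list encoding of the component connection directed graph; enumerating every elementary loop would need a heavier algorithm such as Johnson's.

```python
# Minimal sketch: deadlock connection loops as directed cycles in a
# component connection graph given as component -> list of called components.

def find_cycles(graph):
    cycles, color, stack = [], {}, []          # color: None / 'gray' / 'black'

    def dfs(u):
        color[u] = 'gray'
        stack.append(u)
        for v in graph.get(u, []):
            if color.get(v) == 'gray':         # back edge closes a loop
                cycles.append(stack[stack.index(v):] + [v])
            elif color.get(v) is None:
                dfs(v)
        stack.pop()
        color[u] = 'black'

    for node in graph:
        if color.get(node) is None:
            dfs(node)
    return cycles

# A calls B, B calls C, C calls back A: one deadlock connection loop.
print(find_cycles({'A': ['B'], 'B': ['C'], 'C': ['A']}))
```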
Abstract: This paper proposes a formal approach to modeling use cases that captures requirements from multiple views: class diagrams, use case sequence diagrams, use case state diagrams, the specification mapping and the system invariant. By defining the formal semantics of these views, each aspect of the requirements is given an exact formal description. As a result, the integrated specification of a method can be built by integrating the formal descriptions of its interaction specification and its functional specification. At the same time, properties of use case models can be specified and analyzed through proofs in the design calculus. As an application, rules for checking the consistency of use case models are studied. An example is given to illustrate the feasibility of the proposed method.
MENG Ce, HE Ye-Ping, LUO Yu-Xiang
Abstract: In attempting to apply static analysis to API (application programming interface) conformance validation, a gap is noticed between the numeric-value-based temporal properties in API specifications and the variable-symbol-based predicates extracted through static analysis. Therefore, this paper first investigates value equality relations between variable symbols in C programs and then designs an ECS (equality class space) based value equality analysis method. This flow-sensitive method maintains the correspondence between the variable symbol domain and the numeric value domain during API conformance validation and, in addition, effectively supports the optimization of subsequent analyses by hiding all information irrelevant to value equality relations.
Abstract: Aiming at the problem of type safety in dynamically updating O-O (object-oriented) software, a simple formal system, the MCUFJ calculus (multi-version class dynamic updatable calculus based on FJ (Featherweight Java)), is established with the goal of understanding the underlying foundations of updating classes dynamically. MCUFJ is formulated as an extension of the core calculus Featherweight Java with an update operator. Multi-version classes allow objects of different versions to coexist. The study also discusses which kinds of change are type-safe, such as adding, deleting or modifying methods and fields, or changing their types, and derives restrictions for type-safe updating. The paper proves these results formally. The calculus can serve as a foundation for dynamic updating of Java and other object-oriented programs.
LIU Da-Wei, LUAN Hua, WANG Shan, QIN Biao
Abstract: In 1999, Ailamaki et al. analyzed the execution time breakdown of database systems on modern computer platforms. The primary motivation of such studies is to improve the performance of disk resident databases (DRDBs), which have formed the mainstream of database systems until now, and the typical benchmark used is TPC-C. However, continuing hardware advancements have "moved up" the memory hierarchy, with larger and larger on-chip and off-chip caches, steadily increasing RAM space, and the commercial availability of huge flash memory (solid-state disks) on top of regular disks. To reflect this trend, the target of workload characterization research should also move up the memory hierarchy. This paper therefore focuses on main memory databases (MMDBs) and the TPC-H benchmark. Unlike the performance of DRDBs, which is I/O-bound and may be optimized by high-level mechanisms such as indexing, the performance of MMDBs is essentially CPU- and memory-bound. The paper first compares the execution time breakdown of a DRDB and an MMDB and proposes a strategy for optimizing memory-resident aggregation. It then explores the difference between column-oriented and row-oriented storage models in CPU and cache utilization, measures the performance of MMDBs on CPUs of different generations, analyzes the influence of indexes and gives a strategy for main memory database index optimization. Finally, it analyzes each query of the full TPC-H benchmark in detail and obtains systematic results that help design micro-benchmarks for further analysis of CPU cache stalls. The results of this study are expected to benefit the performance optimization of MMDBs and the architectural design of next-generation memory-oriented databases.
Abstract: Because of the fluidity and continuity of data streams, the knowledge embedded in stream data is likely to change as time goes by. Thus, in most data stream applications, people are more interested in information about recent transactions than about old ones. This paper proposes a method for mining the frequent patterns in an arbitrary sliding window of a data stream. As the data stream flows, its contents are captured in a compact prefix tree by scanning the stream only once, and obsolete and infrequent items are deleted by periodically pruning the tree. To differentiate the patterns of recently generated transactions from those of historic transactions, a time decaying model is also applied. Extensive simulations are conducted, and the experimental results show that the proposed method is efficient and scalable, and superior to other analogous algorithms.
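A minimal sketch of one possible time-decaying count update (the decay rate, pruning threshold and flat count table are assumptions; the paper's method uses a compact prefix tree):

```python
# Minimal sketch: exponential time decay applied to item counts as new
# transactions arrive, so recent transactions weigh more than historic ones.
from collections import defaultdict

DECAY = 0.98        # assumed decay rate per incoming transaction
EPSILON = 0.01      # prune entries whose decayed count falls below this

counts = defaultdict(float)

def process(transaction):
    # decay all existing counts, prune stale ones, then add the new items
    for item in list(counts):
        counts[item] *= DECAY
        if counts[item] < EPSILON:
            del counts[item]
    for item in transaction:
        counts[item] += 1.0

for t in [{'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a'}]:
    process(t)
print(dict(counts))
```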
XIAO Bo, XU Qian-Fang, LIN Zhi-Qing, GUO Jun, LI Chun-Guang
Abstract: Existing association-rule mining algorithms mainly rely on the support-based pruning strategy to prune the combinatorial search space. This strategy is not very effective for mining potentially interesting low-support patterns. To solve this problem, the paper presents a novel kind of association pattern called the credible association rule (CAR), in which every item has the same support level, so that the confidence directly reflects the credibility of the rule rather than the traditional support. The paper also proposes a MaxCliqueMining algorithm, which creates 2-item credible sets from an adjacency matrix and then generates all rules based on maximal cliques. Several propositions are proved that show the properties of CARs and the feasibility and validity of the algorithm. Experimental results on an alarm dataset and the Pumsb dataset demonstrate the effectiveness and accuracy of the method for finding CARs.
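As an illustration of the clique-based step (not the MaxCliqueMining algorithm itself), the sketch below enumerates maximal cliques of the graph whose edges are the 2-item credible sets, using the classic Bron-Kerbosch recursion on an assumed adjacency-set encoding:

```python
# Minimal sketch: maximal cliques over items, where an edge between two items
# means they form a 2-item credible set (classic Bron-Kerbosch, no pivoting).

def maximal_cliques(adj):
    """adj: dict item -> set of items forming a credible 2-item set with it."""
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(r)          # r cannot be extended: maximal clique
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}

    expand(set(), set(adj), set())
    return cliques

# Assumed toy adjacency: {a,b,c} form a triangle, d is connected only to c.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
print(maximal_cliques(adj))            # -> [{'a','b','c'}, {'c','d'}]
```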
Abstract: The main reason for low precision in information retrieval (IR) is that it is difficult for users to submit a precise query expression for their query intentions. Furthermore, XML documents have characteristics not only in content but also in structure, which makes it even more difficult for users to submit precise query expressions. To solve this problem, this paper puts forward a new query expansion method based on relevance feedback, which helps users construct a content-and-structure query expression that satisfies their intentions. The method consists of two steps. The first step expands keywords to find the weighted keywords that represent the user's intentions. The second step performs structural expansion based on the weighted keywords. Finally, a full-fledged content-and-structure query is formulated. Experimental results show that the method obtains better retrieval results: the average precision at prec@10 and prec@20 is 30% higher than that of the original query.
LI Yan, ZHOU Ming-Hui, LI Rui-Chao, CAO Dong-Gang, MEI Hong
Abstract: As the number of web services with similar functions increases, QoS-based service selection at runtime has become an important research topic. Existing QoS-based service selection approaches usually assume that the QoS data coming from service providers and users are valid and trustworthy, which can hardly be guaranteed in reality. This paper proposes a service selection approach that considers the trustworthiness of QoS data and classifies and computes QoS attributes according to the source of their data. For QoS attributes whose data come from service providers, statistics of past runtime data are used to revise the providers' QoS data; for QoS attributes whose data come from users, feedback similarity is used to weigh users' QoS data. Furthermore, an implementation framework and a set of experiments are presented, which show that the approach can effectively weaken the influence of untrustworthy QoS data on service selection and thus strengthen the accuracy of the selection.
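A minimal sketch of the feedback-similarity idea under assumed choices (cosine similarity over users' reported values, a flat feedback table); it is not the paper's framework:

```python
# Minimal sketch: weight other users' reported QoS values by their feedback
# similarity to the current user, so dissimilar feedback counts less.
import math

def cosine(u, v):
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[s] * v[s] for s in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def weighted_qos(target_user, feedback, service):
    """feedback: user -> {service: reported QoS value in [0, 1]}."""
    num = den = 0.0
    for user, reports in feedback.items():
        if user == target_user or service not in reports:
            continue
        w = cosine(feedback[target_user], reports)   # feedback similarity
        num += w * reports[service]
        den += w
    return num / den if den else None

fb = {'me': {'s1': 0.9, 's2': 0.8},
      'u1': {'s1': 0.85, 's2': 0.75, 's3': 0.7},
      'u2': {'s1': 0.1, 's3': 0.95}}
print(weighted_qos('me', fb, 's3'))
```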
MA Jian-Qing, ZHONG Yi-Ping, ZHANG Shi-Yong
Abstract: To solve the problem of secure location service, this paper proposes a range-free secure localization protocol, the ServLoc localization protocol. By authenticating messages, having hidden actors passively receive localization requests, and filtering false location reports, the ServLoc localization protocol can defend against location attacks, preserve location privacy and locate sensor nodes in a distributed way. In addition, the paper proposes a voting-based location verification scheme, the ServLoc verification protocol, together with ways to defend against actor attacks. The analysis shows that the ServLoc verification protocol can trade off security against effective localization in WSANs, and that the ServLoc scheme can effectively provide secure location service even when the WSAN suffers location attacks.
LI Shan-Shan, LIAO Xiang-Ke, ZHU Pei-Dong, XIAO Nong
Abstract: Reliability is crucial in many wireless sensor network (WSN) applications. Most existing approaches are redundancy based, employing multi-path or retransmission schemes, but such designs often waste energy and thus shorten the network lifetime. To address this issue, this paper proposes an energy-aware method that employs a network coding scheme over multi-path routing. By encoding a group of data packets into independent new packets and transmitting them along multiple paths, the method offsets the effect of link failures with little extra overhead. Another strength of the design is that it needs only small-scale linear operations. An approximate method for effectively estimating the number of paths needed is also employed. Comprehensive simulations verify the theoretical results of the paper.
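As a rough sketch of the coding idea under assumed simplifications (random 0/1 coefficients, XOR combining, decoding by Gaussian elimination omitted); this is not the paper's scheme:

```python
# Minimal sketch: combine k source packets into n coded packets over GF(2),
# so any k linearly independent combinations received over surviving paths
# suffice to recover the originals.
import random

def encode(packets, n_coded):
    """packets: list of equal-length byte strings; returns (coeffs, payload) pairs."""
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[random.randrange(len(packets))] = 1   # avoid an all-zero row
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

# Three source packets encoded into five coded packets sent along five paths.
src = [b'AAAA', b'BBBB', b'CCCC']
for coeffs, payload in encode(src, 5):
    print(coeffs, payload)
```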
Abstract: Handover latency and packet loss rate are important criteria that determine whether a mobile multicast algorithm can support real-time multicast applications. This paper proposes a multicast fast handover algorithm based on neighborhood information exchange (M-FMIPv6/NIE). Before the L2 trigger event occurs, M-FMIPv6/NIE can configure the new care-of address (nCoA) and request the new access router (AR) to join the multicast tree. The performance analysis and simulation results indicate that the multicast service disruption time of M-FMIPv6/NIE is shorter than that of existing multicast fast handover algorithms and that it performs well in terms of buffer size and packet loss rate.
Abstract: In spite of being replaced by AES (advanced encryption standard), DES (data encryption standard) still plays an important role as an encryption standard; DES and triple DES are widely used in many areas, especially in the financial sector. Recently, some new cryptanalytic techniques have been introduced, among which the rectangle attack and the boomerang attack have proved to be very powerful. It is therefore necessary to re-evaluate the effect these techniques may have on DES. This paper examines the strength of DES against the rectangle attack and the boomerang attack. Using the best differential characteristics of DES, the paper obtains a rectangle attack on up to 12-round DES and a boomerang attack on 11-round DES. The rectangle attack on 12-round DES requires 2^62 chosen plaintexts with a time complexity equivalent to 2^42 12-round encryptions, while the boomerang attack on 11-round DES requires 2^58 adaptively chosen plaintexts and ciphertexts with a time complexity equivalent to 2^38 11-round encryptions. Because the differential characteristics used in the attacks are the best known, the attacks are believed to be the best results that the rectangle and boomerang attacks can achieve on DES.
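For background, the standard probability estimates behind the two techniques (textbook figures, not results specific to this paper): a boomerang distinguisher built from two differentials of probabilities p and q, and its rectangle variant over an n-bit block cipher.

```latex
% Standard estimates for boomerang/rectangle distinguishers (background only):
\Pr[\text{right boomerang quartet}] \approx p^{2} q^{2},
\qquad
\Pr[\text{right rectangle quartet}] \approx 2^{-n}\,\hat{p}^{2}\,\hat{q}^{2}
```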
ZHUANG Yi, ZHUANG Yue-Ting, WU Fei
Abstract: This paper proposes a novel integrated indexing structure for large-scale cross-media retrieval. First, a cross reference graph (CRG) is generated by hyperlink analysis of web pages to support cross-media retrieval, and the CRG is then refined using users' relevance feedback. Retrieval proceeds as follows: when the user submits a query media object, candidate media objects are quickly identified by searching the cross reference graph, and distance computation over the candidate vectors is then performed to obtain the answer set. The analysis and experimental results show that the performance of the algorithm is superior to that of sequential scan.
XU Li-Shuang, ZHOU Ming-Jun, DENG Chang-Zhi, TIAN Feng, LIU Yuan-Yuan, DAI Guo-Zhong
Abstract: This paper presents CFAPUI, a conceptual framework for developing adaptive pen-based user interfaces. CFAPUI provides an architecture for adaptive pen-based applications and a set of guidelines to assist the design process of such applications. To demonstrate CFAPUI's validity, the development process of an adaptive pen-based form application is illustrated.
DING Xiao-Feng, LU Yan-Sheng, PAN Peng, HONG Liang, WEI Qiong
Abstract: This paper proposes a novel U-tree based indexing technique that addresses the problem of managing uncertain data in a constantly evolving environment. The technique, called the TPU-tree, is capable of indexing moving objects with uncertainty in multi-dimensional spaces. Along with data models capturing temporal and spatial uncertainty, a modified p-bound based range query algorithm (MP_BBRQ) for probabilistic queries is also developed. Experimental evaluations demonstrate that the TPU-tree supports queries on uncertain moving objects efficiently and yields good update performance even under frequent updates, and therefore has practical value.
FANG Qi-Ming, YANG Guang-Wen, WU Yong-Wei, ZHENG Wei-Min
Abstract: The Web search engine has become a very important tool for finding information efficiently in the massive Web data. With the explosive growth of Web data, traditional centralized search engines find it increasingly hard to keep pace with people's growing information needs. With the rapid development of peer-to-peer (P2P) technology, the notion of P2P Web search has been proposed and has quickly become a research focus. The goal of this paper is to give a brief survey of current P2P Web search technologies in order to facilitate future research. First, the main challenges for P2P Web search are presented. Then, key techniques for building a feasible and efficient P2P Web search engine are reviewed, including system topology, data placement, query routing, index partitioning, collection selection, relevance ranking and Web crawling. Finally, three recently proposed P2P Web search prototypes are introduced.
GUO Yu, CHEN Yi-Yun, LIN Chun-Xiao
Abstract: This paper proposes a new method for proof construction, the basic idea of which is to construct proofs with auxiliary recursive functions in the foundational logic. In this way, the workload of proof construction and the size of the constructed proofs can be reduced while maintaining the same trusted computing base. The paper also illustrates how to adapt this method to a type-based FPCC (foundational proof-carrying code) system, in which the safety proof can be constructed automatically. All of this work is implemented in the proof assistant Coq.
JIN Bei-Hong, CAO Dong-Lei, REN Xin, YU Shuang, DAI Bei-Jie
Abstract: An XML (extensible markup language) parser is fundamental software for analyzing and processing XML documents. This paper studies the implementation of a high-performance, fully validating XML parser. The research develops OnceXMLParser, which supports three kinds of parsing models and passes rigorous XML conformance testing and API (application programming interface) conformance testing. OnceXMLParser adopts a lightweight architecture and is optimized in many respects, including efficient lexical analysis, a statistical automaton implementation, reasonable resource allocation strategies and fine tuning at the language level. Performance testing results show that OnceXMLParser has outstanding parsing efficiency.
PANG Liao-Jun, PEI Qing-Qi, JIAO Li-Cheng, WANG Yu-Min
Abstract: To avoid the flaw of the secret shadow distribution method in existing secret sharing schemes, a secret shadow distribution method integrating ID-based public key technology is proposed, which uses each participant's private key as his master shadow. Firstly, a security analysis of Zheng's signcryption scheme shows that it does not offer forward secrecy; an improvement of Zheng's scheme is then made and a new signcryption scheme is proposed. Based on the proposed signcryption scheme and the ID-based public key cryptosystem, a new threshold multi-secret sharing scheme is constructed. The problem of secret shadow distribution is well resolved: no information exchange is needed between the secret dealer and each participant in advance, and the secret shadows can be distributed during the secret distribution itself. At the same time, the proposed scheme offers forward secrecy, that is, even if the private key of the secret dealer is exposed, the security of the shared secrets is not threatened. Therefore, the proposed ID-based secret sharing scheme is more secure and effective than existing ones and more widely applicable.
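As background for the shadow-distribution idea (the classic Shamir building block, not the paper's ID-based scheme), a minimal (t, n) threshold sharing sketch over an assumed prime field:

```python
# Minimal sketch: Shamir (t, n) threshold secret sharing with Lagrange
# interpolation at x = 0 to reconstruct from any t shares.
import random

P = 2**127 - 1                      # assumed prime modulus (a Mersenne prime)

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # numerator of the Lagrange basis at 0
                den = den * (xi - xj) % P      # denominator of the Lagrange basis
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)    # any 3 of the 5 shares suffice
```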
SHI Jin, GUO Shan-Qing, LU Yin, XIE Li
Abstract: System incentives and changes in the attacker's strategies are not fully considered in current intrusion response research. To solve this problem, an intrusion response model based on the attack graph, IRAG (intrusion response based on attack graph), is proposed. The IRAG model handles the attacker's intent and strategy changes well, and comprehensively takes into account the incentives of both the system and the attacker. Experimental results show that the IRAG model effectively improves the accuracy and effectiveness of alert response.
FU Chang-Sheng, XIAO Nong, ZHAO Ying-Jie, CHEN Tao
Abstract: This paper proposes an approach to data access across multiple VOs (virtual organizations) that uses dynamic role transition under a role-based access control model in the data grid. A negotiation mechanism is introduced to establish data access based on the mutual trust of the VOs, which also sets up the role transition relationship between them. Once the negotiation succeeds, the dynamic role transition is controlled by several local policies according to the role owned by the user.
YANG Wei-Dong, MA Jian-Feng, LI Ya-Hui
Abstract: A performance analysis model based on the packet arrival rate is proposed for the IEEE 802.11 DCF (distributed coordination function) in infrastructure-based WLANs (wireless local area networks). The model not only takes into account the number of terminals, the traffic load and the binary exponential back-off mechanism, but also analyzes the impact of the finite queueing system at the MAC (medium access control) layer. Each terminal is modeled as an M/M/1/K queue, and a virtual time slot is introduced to discretize the state of the queueing system. On this basis, a three-dimensional discrete-time Markov chain is used to model the system, from which the normalized throughput, packet delay and packet loss ratio are obtained. Simulation results show that the proposed model can validly predict the performance of the DCF under various packet arrival rates.
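For reference, the textbook M/M/1/K formulas that such a finite-queue model builds on (the arrival and service rates below are assumed example values, not figures from the paper):

```python
# Minimal sketch: steady-state queue-length distribution and blocking
# (packet-loss) probability of an M/M/1/K queue.

def mm1k(lmbda, mu, K):
    rho = lmbda / mu
    if rho == 1.0:
        pi = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        pi = [rho ** k / norm for k in range(K + 1)]
    return pi, pi[K]                 # distribution, blocking probability pi_K

pi, p_block = mm1k(lmbda=80.0, mu=100.0, K=10)   # assumed example rates
print(round(p_block, 4), round(sum(pi), 6))
```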
ZHANG De-Ping, NIE Chang-Hai, XU Bao-Wen
Abstract: This paper demonstrates an approach to optimizing software testing by minimizing the expected cost under given software parameters of concern. Regarding the software testing process as a Markov decision process, a Markov decision model of software testing is proposed, and the optimal testing profile is obtained by using a learning strategy based on the cross-entropy method to optimize the testing. Simulation results show that the testing profile obtained with the learning strategy performs significantly better than the random testing strategy with respect to the expected cost. Moreover, the learning strategy is feasible and, compared with random testing, can significantly reduce the number of test cases required to detect and remove a given number of software defects.
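A minimal sketch of a generic cross-entropy learning loop over a testing profile; the cost function, sampling scheme and parameters are placeholders, not the paper's model:

```python
# Minimal sketch: cross-entropy method learning a testing profile, i.e. a
# probability distribution over test-case classes, that minimizes an assumed
# expected-cost function.
import random

def expected_cost(profile):                 # placeholder cost model (assumed)
    target = [0.5, 0.3, 0.2]                # pretend this mix detects defects most cheaply
    return sum((p - t) ** 2 for p, t in zip(profile, target))

def cross_entropy(n_classes, iters=50, samples=200, elite=20):
    profile = [1.0 / n_classes] * n_classes           # start from the uniform profile
    for _ in range(iters):
        batch = []
        for _ in range(samples):
            draw = [random.gammavariate(1 + 50 * p, 1) for p in profile]
            s = sum(draw)
            batch.append([d / s for d in draw])        # Dirichlet-like candidate profile
        batch.sort(key=expected_cost)
        best = batch[:elite]                            # keep the elite samples
        profile = [sum(b[i] for b in best) / elite for i in range(n_classes)]
    return profile

print([round(p, 2) for p in cross_entropy(3)])
```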
ZHOU Ming-Jun, XU Li-Shuang, TIAN Feng, DAI Guo-Zhong
Abstract: The pen-based user interface is a primary post-WIMP (window, icon, menu, pointer) interface and provides natural interaction for users. However, current toolkits for constructing pen-based user interfaces are built for single-user tasks and cannot support collaborative work well. This paper analyzes the features of pen-based interaction and of collaboration environments, and then discusses the design and implementation of CoPen Toolkit, a toolkit that supports the development of collaborative pen-based user interfaces. CoPen Toolkit offers a flexible architecture and extensible components that support ink-based data, event processing, networked collaboration, and so on. Based on CoPen Toolkit, several typical prototypes have been built; the results show that it supports the construction and fast prototyping of collaborative pen-based user interfaces.