CHANG Liang, SHI Zhong-Zhi, CHEN Li-Min, NIU Wen-Jia
Abstract:As extensions of description logics, many dynamic description logics have been proposed for modeling and reasoning about Semantic Web services. These dynamic description logics provide decidable reasoning mechanisms for the result of action execution. However, the procedure of action execution cannot be described or reasoned about in these logics. Inspired by an extended propositional dynamic logic studied by Pratt, this paper proposes two improvements to dynamic description logic. The first is that the semantics of actions are redefined so that each action is interpreted as a set of trajectories, where each trajectory is a sequence of possible worlds of the semantic model. The second is that assertions on the procedure of action execution are introduced into the logic, so that not only the result but also the procedure of action execution can be described. As a result, a family of extended dynamic description logics named EDDL(X) is presented, where X represents well-studied description logics ranging from ALC (attributive language with complements) to SHOIN(D). Taking the description logic ALCQO (attributive language with complements, qualified number restrictions and nominals) as an instance of X, this paper proposes a tableau decision algorithm for the logic EDDL(ALCQO) and proves that the algorithm is terminating, sound and complete. With EDDL(X), both the result and the procedure of action execution can be described and reasoned about. Therefore, compared with dynamic description logics, EDDL(X) provides further support for modeling and reasoning about Semantic Web services.
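As a rough illustration of the trajectory-based semantics (a minimal Python sketch, not the authors' formal definition; the worlds w0-w3 and the actions 'book' and 'pay' are hypothetical), an action denotes a set of trajectories, and sequential composition glues trajectories end to start:

# Minimal sketch: actions interpreted as sets of trajectories (sequences of
# possible worlds), rather than the usual pair-of-worlds semantics.
# All names here (worlds w0..w3, actions book, pay) are hypothetical.

def compose(traj_set_a, traj_set_b):
    """Sequential composition a;b: glue trajectories of b onto trajectories
    of a whose last world equals b's first world."""
    result = set()
    for ta in traj_set_a:
        for tb in traj_set_b:
            if ta[-1] == tb[0]:
                result.add(ta + tb[1:])
    return result

# Interpretation of two atomic actions over worlds w0..w3.
book = {("w0", "w1"), ("w0", "w2")}   # each trajectory: start world, ..., end world
pay  = {("w1", "w3"), ("w2", "w3")}

print(compose(book, pay))             # trajectories of the composite action book;pay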
YANG Dong-Dong, JIAO Li-Cheng, GONG Mao-Guo, YU Hang
Abstract:A central difficulty in current multi-objective optimization is handling problems with a large number of objectives. Lacking sufficient selection pressure toward the Pareto front, classical algorithms are greatly restrained. In this paper, a preference rank immune memory clone selection algorithm (PISA) is proposed to solve multi-objective optimization problems with a large number of objectives. Nondominated antibodies are cloned proportionally to their preference ranks, which are defined by their preference information. This increases the selection pressure and speeds up convergence to the true Pareto-optimal front. The number of solutions used to approximate the Pareto front can also be reduced by the preference information. Because only nondominated antibodies are selected for the clonal operators, the time complexity of the algorithm is reduced. In addition, an immune memory population is kept to preserve the nondominated antibodies, and ε-dominance is employed to maintain the diversity of the immune memory population. Tested on several problems with 2 objectives and on DTLZ2 and DTLZ3 with up to 8 objectives, PISA is shown experimentally to be effective.
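The following Python sketch illustrates the two mechanisms named in the abstract, proportional cloning by preference rank and an ε-dominance test; the rank values, objective vectors and clone-allocation formula are assumptions for illustration, not the paper's exact definitions.

def proportional_clone(antibodies, ranks, clone_pool_size):
    """Clone nondominated antibodies in proportion to their preference ranks.
    Here a higher rank value means a more preferred antibody (assumption)."""
    total = sum(ranks)
    clones = []
    for ab, r in zip(antibodies, ranks):
        n = max(1, round(clone_pool_size * r / total))
        clones.extend([list(ab) for _ in range(n)])
    return clones

def epsilon_dominates(a, b, eps):
    """a epsilon-dominates b (minimization) if a is no worse than b by more
    than eps in every objective and strictly better in at least one."""
    return all(ai <= bi + eps for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

antibodies = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]   # hypothetical objective vectors
ranks = [3, 1, 2]                                   # hypothetical preference ranks
print(len(proportional_clone(antibodies, ranks, 12)))
print(epsilon_dominates((0.1, 0.9), (0.12, 0.95), eps=0.05))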
Abstract:PSL (property specification language) is a language for specifying properties of parallel systems and is divided into two parts, FL (foundation language) and OBE (optional branching extension). Since OBE is essentially the temporal logic CTL (computation tree logic), and PSL formulas with clock statements can easily be rewritten as unclocked formulas, this paper focuses on the unclocked FL logic. To be model-checked, each FL formula must be translated into a verifiable form, usually a nondeterministic automaton. The translation into nondeterministic automata is realized mainly through the construction of alternating automata. The rules for translating unclocked FL formulas into two-way alternating automata are explained in detail in this paper. The construction rules are not limited to an extension of LTL (linear temporal logic) with regular expressions; they cover the full set of FL operators. A translation method from two-way alternating automata to nondeterministic automata is also provided. Finally, a translation tool from PSL formulas to the above two kinds of automata has been implemented. The complexity of the construction of the two-way alternating automata grows linearly with the length of the FL formulas, and the correctness of the rules is verified. It is also proved that a two-way alternating automaton and its corresponding nondeterministic automaton accept the same language. This work has both theoretical and practical value for the modeling and model checking of complex parallel systems.
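As a rough sketch of the target object of the translation (not of the paper's construction rules themselves), a two-way alternating automaton can be represented with transitions mapping a state and a letter to a positive Boolean combination of (direction, state) pairs; the states, alphabet and transition below are hypothetical.

from dataclasses import dataclass, field

# Head-movement directions for a two-way automaton on words.
BACK, STAY, FWD = -1, 0, +1

@dataclass
class TwoWayAlternatingAutomaton:
    """Transitions map (state, letter) to a positive Boolean formula over
    (direction, state) pairs, represented here in disjunctive normal form:
    a set of conjuncts, each conjunct a frozenset of (direction, state)."""
    states: set
    alphabet: set
    initial: str
    accepting: set
    delta: dict = field(default_factory=dict)

# Hypothetical fragment: from state q0 on letter 'a', either move forward
# into q1, or both move forward into q1 and re-read the letter in q2.
aut = TwoWayAlternatingAutomaton(
    states={"q0", "q1", "q2"},
    alphabet={"a", "b"},
    initial="q0",
    accepting={"q1"},
    delta={("q0", "a"): {frozenset({(FWD, "q1")}),
                         frozenset({(FWD, "q1"), (STAY, "q2")})}},
)
print(aut.delta[("q0", "a")])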
Abstract:Classical belief revision theory and iterated belief revision theory are both developed in the framework of complete valuations. This paper extends the research to incomplete valuations. Unlike a complete valuation, an incomplete valuation may assign an unknown truth value to some atomic propositions. Adopting incomplete valuations as possible worlds, this paper establishes a model-based representation theorem that characterizes the proposed postulates and constraints.
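A minimal sketch of the underlying notion, assuming a three-valued reading in which 'unknown' is modeled as None: an incomplete valuation stands for the set of complete valuations that extend it (the atoms and the valuation below are hypothetical, not taken from the paper).

from itertools import product

def completions(partial, atoms):
    """All complete valuations consistent with an incomplete valuation
    (atoms mapped to True/False/None, where None means 'unknown')."""
    unknown = [p for p in atoms if partial.get(p) is None]
    for bits in product([True, False], repeat=len(unknown)):
        v = dict(partial)
        v.update(zip(unknown, bits))
        yield v

# Hypothetical incomplete valuation over atoms p, q, r: r is unknown.
w = {"p": True, "q": False, "r": None}
print(list(completions(w, ["p", "q", "r"])))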
HU Hua, ZHUANG Yi, HU Hai-Yang, Dickson CHIU
Abstract:This paper proposes a multi-query optimization algorithm for pipeline-based distributed similarity query processing (pGMSQ) in a grid environment. First, when a number of query requests are submitted simultaneously by users, a cost-based dynamic query clustering (DQC) method is invoked to quickly and effectively identify the correlation among the query spheres (requests). Then, index-supported vector-set reduction is performed in parallel at the data-node level. Finally, refinement of the candidate vectors is conducted at the execution-node level to obtain the answer set. By adopting a pipeline-based technique, the algorithm is experimentally shown to be efficient and effective in minimizing response time by decreasing network transfer cost and increasing throughput.
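As a toy stand-in for the DQC step (the paper's cost model is not reproduced here; the query spheres and the greedy clustering rule below are assumptions), correlated similarity queries can be grouped by testing whether their query spheres intersect:

import math

def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """Two similarity-query spheres are treated as correlated (in this sketch)
    if they intersect: the distance between centers is at most the sum of radii."""
    return math.dist(center_a, center_b) <= radius_a + radius_b

def cluster_queries(queries):
    """Greedy single-pass clustering of query spheres by pairwise overlap."""
    clusters = []
    for center, radius in queries:
        for cluster in clusters:
            if any(spheres_overlap(center, radius, c, r) for c, r in cluster):
                cluster.append((center, radius))
                break
        else:
            clusters.append([(center, radius)])
    return clusters

# Hypothetical 2-D query spheres given as (center, radius).
queries = [((0.0, 0.0), 1.0), ((0.5, 0.5), 0.5), ((5.0, 5.0), 0.3)]
print(len(cluster_queries(queries)))   # -> 2 clusters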
LIU Guo-Liang, WEI Jun, FENG Yu-Lin
Abstract:Component containers play a key role as the infrastructure of component-based distributed applications at deployment and run time. In recent years, various component models have been emerging and evolving, which brings great challenges to the development of component containers. Product line engineering is one of the most promising techniques for improving the quality and productivity of software. Studying a product line architecture (PLA) for component containers is therefore of primary importance and is of great help in improving the reusability of architectural design. Since component models are the cornerstone of container design, an analysis framework for component models, integrated with domain analysis, is proposed. This paper builds the domain model of component containers by establishing a mapping between component-model elements and domain-requirement elements. Based on the design principles of component containers and variability-encapsulation rules, this paper proposes a component-container PLA named PLACE, which meets the domain requirements of component containers by introducing optionality, hierarchical module structuring and a decision model. PLACE has also been applied to the development of several component containers on the ONCE platform, which demonstrates the effectiveness of the approach.
YAN Hua, ZHANG Wei, ZHAO Hai-Yan, MEI Hong
Abstract:The feature model is a reusable requirements model produced by domain analysis. The reuse of feature models is usually achieved through a customization-based approach. One important issue in feature-model customization is verification: there are usually constraints among features, and a valid customization result must satisfy all these constraints. Because of the NP-hard nature of this problem, it is usually difficult to verify feature models efficiently. This paper presents a BDD (binary decision diagram)-based approach that verifies feature models by traversing the nodes of the BDD only once, making efficient use of the BDD data structure based on the particular characteristics of feature-model verification. It should be pointed out that this approach does not attempt to resolve the NP-hardness of the verification problem in general; it only tries to improve the scalability and efficiency of feature-model verification by exploiting the particular structure of this problem. Experimental results show that this BDD-based approach is more efficient and can verify more complex feature models than previous methods.
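To make the verification problem concrete, the brute-force check below enumerates all completions of a partial feature selection against the constraints; this exponential enumeration is precisely what a single-pass BDD traversal is meant to avoid. The features and constraints are hypothetical, and no BDD machinery is shown here.

from itertools import product

# Hypothetical feature model: constraints encoded as predicates over a
# selection (a dict mapping feature name -> bool).
features = ["root", "payment", "credit_card", "cash"]
constraints = [
    lambda s: s["root"],                               # root feature is mandatory
    lambda s: not s["credit_card"] or s["payment"],    # credit_card requires payment
    lambda s: not (s["credit_card"] and s["cash"]),    # credit_card excludes cash
]

def has_valid_customization(partial):
    """Brute force: does any completion of the partial selection satisfy all
    constraints?  Exponential in the number of undecided features -- the cost
    that the BDD-based approach is designed to avoid."""
    undecided = [f for f in features if f not in partial]
    for bits in product([True, False], repeat=len(undecided)):
        s = dict(partial, **dict(zip(undecided, bits)))
        if all(c(s) for c in constraints):
            return True
    return False

print(has_valid_customization({"credit_card": True}))   # True
print(has_valid_customization({"root": False}))          # False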
ZHOU Zhou-Yi, HE Ye-Ping, LIANG Hong-Liang
Abstract:Commercial applications require the protection of integrity policies. The Biba model provides a simple multi-level integrity access control scheme, but it needs the introduction of trusted subjects to ensure usability. The Clark-Wilson model provides complete integrity protection by means of controlled state transactions, but a full implementation is hindered by its complexity. This paper proposes a model that enforces the Biba strict integrity policy as the basic access control mechanism and, at the same time, enforces the Biba low-water-mark policy on trusted subjects according to the state in their lifecycle. The Clark-Wilson model is used to control and audit subjects' state transitions and the run-time adjustment of low-water-mark policy parameters. The proposed model solves the usability problem introduced by the Biba policies, as well as the high configuration burden and run-time overhead introduced by the heavy supervision tasks of Clark-Wilson, while borrowing the merits of both. This policy composition scheme is proved to be applicable and secure.
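A toy Python sketch of the two Biba rules being composed (the integrity levels, subject names and the trusted-subject treatment below are illustrative assumptions, not the paper's formal model):

class Subject:
    def __init__(self, name, integrity, trusted=False):
        self.name = name
        self.integrity = integrity      # higher value = higher integrity
        self.trusted = trusted

def can_read_strict(subject, obj_integrity):
    """Biba strict integrity: a subject may only read objects whose integrity
    level is at least its own ('no read down')."""
    return obj_integrity >= subject.integrity

def can_write_strict(subject, obj_integrity):
    """Biba strict integrity: a subject may only write objects whose integrity
    level is at most its own ('no write up')."""
    return obj_integrity <= subject.integrity

def read_low_water_mark(subject, obj_integrity):
    """Subject low-water-mark policy (applied here only to trusted subjects):
    the read is allowed, but the subject's integrity drops to the minimum of
    its current level and the object's level."""
    subject.integrity = min(subject.integrity, obj_integrity)

s = Subject("installer", integrity=3, trusted=True)
print(can_read_strict(s, 1))   # False under strict integrity
read_low_water_mark(s, 1)      # allowed for the trusted subject ...
print(s.integrity)             # ... but its integrity is demoted to 1
print(can_write_strict(s, 3))  # so it can no longer write high-integrity objects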
YI Ye-Qing, LIN Ya-Ping, LI Xiao-Long, YANG Si-Qing, YOU Zhi-Qiang
Abstract:In this paper, a data authentication algorithm based on cooperative watermarks is proposed for recognizing and filtering false data and replayed packets. Two kinds of watermarks are embedded into each data packet. One is a robust watermark for authenticating the sender's identity and the freshness of the data; the other is a semi-fragile watermark for authenticating the content generated by t witnesses. The proposed algorithm has several advantages. First, the different watermarks do not interfere with one another. Second, a single sensor node can independently extract the watermarks to validate data packets, while nodes have no ability to modify or fabricate the watermarks. Theoretical and experimental results show that, under most circumstances, embedding the watermarks into packets preserves a good peak signal-to-noise ratio and signal-to-noise ratio. The algorithm is highly sensitive to maliciously modified data and robust to noise disturbance, lossy compression and so on. Compared with existing algorithms based on MAC (message authentication code) schemes, the new algorithm has lower communication cost and a stronger capability for false-data recognition and filtering.
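A heavily simplified sketch of the non-interference idea, assuming a toy LSB embedding in which the two watermarks occupy disjoint bit positions of each sensor reading; the paper's actual robust and semi-fragile constructions and the t-witness scheme are not reproduced here, and the keys, node identifier and readings are hypothetical.

import hashlib

def watermark_bits(data, key, n):
    """Derive n watermark bits from data with a keyed hash (toy construction)."""
    digest = hashlib.sha256(key + data).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

def embed(readings, identity_bits, content_bits):
    """Embed the identity/freshness watermark in bit 0 and the content
    watermark in bit 1 of each reading, so the two never interfere."""
    return [(r & ~0b11) | b0 | (b1 << 1)
            for r, b0, b1 in zip(readings, identity_bits, content_bits)]

def extract(readings):
    return [r & 1 for r in readings], [(r >> 1) & 1 for r in readings]

readings = [200, 137, 96, 255, 48, 173, 90, 12]
id_bits = watermark_bits(b"node-17|t=42", b"shared-key", len(readings))   # identity/freshness watermark
ct_bits = watermark_bits(bytes(readings), b"witness-key", len(readings))  # content watermark
marked = embed(readings, id_bits, ct_bits)
got_id, got_ct = extract(marked)
print(got_id == id_bits, got_ct == ct_bits)   # True True: independent extraction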
SU Jin-Shu, HU Qiao-Lin, ZHAO Bao-Kang, PENG Wei
Abstract:In recent years, as an emerging network architecture, delay/disruption tolerant networks (DTNs) have received extensive attention and have been widely investigated. Since the major application scenarios of DTNs are highly particular, routing protocols designed for traditional networks are not suitable for DTNs. Therefore, numerous routing protocols have been proposed specifically for DTNs. In this paper, the formulation of the DTN routing problem is first studied, and classification criteria for DTN routing protocols are then proposed. Thereafter, the paper focuses on analyzing and comparing existing representative DTN routing protocols. Finally, the current research status and existing problems of DTN routing protocols are summarized, and the key issues and future trends in DTN routing are pointed out.
Abstract:Network tomography is a network measurement technique that has emerged in recent years. It combines network measurement with statistical inference and can solve problems that cannot be solved by conventional network measurement techniques. Network topology inference is one of the main applications of network tomography: it infers the topology of a network from end-to-end measurements and does not require the cooperation of internal nodes. In this paper, topology inference techniques based on network tomography are surveyed systematically, and the inference techniques and algorithms are compared and analyzed. Finally, the problems of current research and future directions are discussed.
GAO Shuai, ZHANG Hong-Ke, XU Huai-Song
Abstract:In sensor networks with a path-fixed mobile sink, due to the limited communication time of the mobile sink and the random deployment of the sensor nodes, it is quite difficult to increase the amount of data collected and reduce energy consumption simultaneously. To address this problem, this paper proposes a data collection scheme called maximum amount shortest path (MASP) that optimizes the mapping between members and sub-sinks. MASP is formulated as an integer linear programming problem, which is solved by a genetic algorithm. A communication protocol is designed to implement MASP, which is also applicable to sensor networks with low density and multiple sinks. Simulations in OMNeT++ show that MASP outperforms the shortest path tree (SPT) and static sink methods in terms of energy utilization efficiency.
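A toy genetic-algorithm loop over member-to-sub-sink assignments is sketched below; the data sizes, communication costs, time budget and fitness function are all assumptions made for illustration and do not reproduce the paper's integer linear program.

import random

random.seed(1)
N_MEMBERS, SUB_SINKS, TIME_BUDGET = 12, 3, 5                 # hypothetical sizes
data = [random.randint(1, 10) for _ in range(N_MEMBERS)]     # data per member
cost = [random.randint(1, 3) for _ in range(N_MEMBERS)]      # comm. time per member

def fitness(assign):
    """Total data collected by sub-sinks whose assigned members fit within the
    mobile sink's contact-time budget; overloaded sub-sinks collect nothing
    (a simplifying assumption for this sketch)."""
    per_sink_data = [0] * SUB_SINKS
    per_sink_time = [0] * SUB_SINKS
    for m, s in enumerate(assign):
        per_sink_data[s] += data[m]
        per_sink_time[s] += cost[m]
    return sum(d for d, t in zip(per_sink_data, per_sink_time) if t <= TIME_BUDGET)

def evolve(pop_size=30, generations=50):
    pop = [[random.randrange(SUB_SINKS) for _ in range(N_MEMBERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_MEMBERS)              # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                         # mutation
                child[random.randrange(N_MEMBERS)] = random.randrange(SUB_SINKS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))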
Abstract:In an open system, trust is one of the most complex concepts in social relationships, involving many decision factors such as assumptions, expectations and behaviors, so it is very difficult to quantify and forecast accurately. Drawing on human social cognitive behavior, a new dynamic trust forecasting model is proposed. First, a new adaptive trust decision-making method based on a historical evidence window is proposed, which not only reduces risk and improves system efficiency, but also solves the trust measurement and forecasting problem when direct evidence is insufficient. Then, a feedback trust aggregation algorithm based on a DTT (direct trust tree) is applied. Finally, the induced ordered weighted averaging (IOWA) operator is introduced to construct the combined dynamic trust forecasting model, making up for the shortcomings of traditional methods and giving the model better rationality and higher practicability. Simulation results show that, compared with existing trust forecasting approaches, the proposed model is more robust in adapting to trust dynamics and achieves a notable improvement in forecasting accuracy.
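The standard IOWA operator at the core of the combination step can be sketched as follows; the use of time stamps as the inducing variable and the particular recency-biased weights are assumptions for illustration, not the paper's parameter choices.

def iowa(pairs, weights):
    """Induced ordered weighted averaging: reorder the argument values by the
    inducing variable (descending) and take the weighted sum."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(pairs) == len(weights)
    ordered = [a for _, a in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, ordered))

# Hypothetical historical trust values with their time stamps as the inducing
# variable (more recent evidence is ordered first) and decreasing weights.
history = [(1, 0.60), (2, 0.72), (3, 0.80), (4, 0.78)]   # (time, trust value)
weights = [0.4, 0.3, 0.2, 0.1]                           # recency-biased weights
print(round(iowa(history, weights), 3))                  # forecast for the next period: 0.756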