JIANG Yun-Cheng , WANG Ju , SHI Zhong-Zhi , TANG Yong
2009, 20(3):477-490.
Abstract: Terminological cycles have long been a hard problem in description logics, and their essential problems, i.e., semantics and reasoning, have not yet been solved satisfactorily. This paper analyzes the current research progress and the open problems of terminological cycles in description logics. Based on the work of Baader, a new research direction for terminological cycles is put forward. Aiming at a more expressive description logic, the semantics and reasoning mechanisms of terminological cycles are studied. Number restrictions are added to the description logic EL, yielding the description logic ELN. The semantics (fixpoint semantics and descriptive semantics) of ELN are given. To meet the requirements of ELN, description graphs (syntax description graphs and semantics description graphs) are redefined. Satisfiability and subsumption reasoning algorithms for terminological cycles in ELN w.r.t. fixpoint semantics are presented in terms of simulations between description graphs. It is proved that these satisfiability and subsumption reasoning algorithms are polynomial.
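The subsumption check described above hinges on computing a simulation between description graphs. As a hedged illustration (the graph encoding and all names below are hypothetical, not the paper's), the greatest simulation between two finite node- and edge-labeled graphs can be computed by iterative refinement:

```python
def greatest_simulation(g1, g2):
    """Greatest simulation Z subset of V1 x V2: (u, v) is in Z iff
    labels(u) is a subset of labels(v) and every r-successor of u is
    simulated by some r-successor of v.
    Graph encoding: node -> (set_of_labels, list of (role, successor))."""
    # start from the largest label-compatible relation, then refine
    Z = {(u, v) for u in g1 for v in g2 if g1[u][0] <= g2[v][0]}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(Z):
            ok = all(any(r2 == r and (u2, v2) in Z
                         for (r2, v2) in g2[v][1])
                     for (r, u2) in g1[u][1])
            if not ok:               # v can no longer match u's edges
                Z.discard((u, v))
                changed = True
    return Z
```

Node u is simulated by v when v carries at least u's labels and can match every outgoing role edge; for cyclic EL-style terminologies, subsumption w.r.t. gfp-semantics is decided by exactly such simulation tests, which run in polynomial time.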
JIANG Yun-Cheng , WANG Ju , TANG Yong , DENG Pei-Min
2009, 20(3):491-504.
Abstract: Terminological cycles have long been a hard problem in description logics, and their essential problems, i.e., semantics and reasoning, have not yet been solved satisfactorily. Based on hybrid graded μ-calculus, the description logic μALCQO, which may include terminological cycles, is presented; μALCQO is derived from the description logic ALCQO, which includes the nominal constructor, by adding the least and greatest fixpoint constructors. The syntax, semantics and properties of the fixpoint constructors of μALCQO are given. The equivalence between satisfiability in μALCQO and satisfiability in hybrid graded μ-calculus is proved. Based on the satisfiability reasoning algorithm of hybrid graded μ-calculus, a satisfiability reasoning algorithm for μALCQO is presented using fully enriched automata. The correctness of the satisfiability reasoning algorithm is proved, and its complexity is analyzed. μALCQO thus provides a theoretical foundation for reasoning algorithms of more expressive description logics that include fixpoint constructors and the nominal constructor.
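Over a finite model, the fixpoint constructors added to ALCQO are interpreted as limits of Kleene iteration: the least fixpoint of a monotone set operator is reached by iterating from the empty set (the greatest fixpoint, dually, from the full state set). A minimal stdlib-only sketch, with a toy transition system invented purely for illustration:

```python
def lfp(f, bottom=frozenset()):
    """Kleene iteration: least fixpoint of a monotone set operator f.
    For the greatest fixpoint, start from the full set instead."""
    x = bottom
    while True:
        nx = f(x)
        if nx == x:
            return x
        x = nx

# toy model: states that can reach a goal state, i.e. mu X. goal OR pre(X)
trans = {1: {2}, 2: {3}, 3: set(), 4: {4}}
goal = frozenset({3})

def pre(S):
    """States with at least one successor in S (the 'diamond' modality)."""
    return frozenset(s for s, ts in trans.items() if ts & S)

reach = lfp(lambda X: goal | pre(X))
```

Here state 4 loops on itself and never reaches the goal, so the iteration stabilizes on {1, 2, 3}; graded and hybrid features of the actual logic are beyond this sketch.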
PAN Li , ZHAO Wei-Dong , WANG Zhi-Cheng , ZHOU Xin-Min , LIU Xian-Hui
2009, 20(3):505-514.
Abstract: In verification methods based on Petri nets, steps have been widely applied to reduce the interleaving of transition firings. To study the computational complexity of algorithms for building a reachability graph based on steps, a new decision problem for Petri nets, the step problem, is proposed. The NP-completeness of this problem is proved by a reduction from the independent set problem. A polynomial-time algorithm for the maximal step problem is then presented. Furthermore, the maximum step problem is proved to be NP-equivalent. Finally, two kinds of sub-problems solvable in polynomial time are also discussed and analyzed.
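To make the distinction concrete: a *maximal* step (one to which no further transition can be added) can be built greedily in polynomial time, while finding a *maximum* (largest) step is NP-hard, as the abstract notes. A toy greedy sketch, assuming a simplified Petri net described only by its input arcs (names and encoding are illustrative, not the paper's):

```python
def maximal_step(marking, transitions):
    """Greedily build a maximal step: a set of transitions whose combined
    input-token demands fit the current marking, so they can fire
    concurrently. transitions: name -> {place: arc weight} (inputs only)."""
    avail = dict(marking)            # tokens still unreserved
    step = []
    for t, pre in transitions.items():
        if all(avail.get(p, 0) >= w for p, w in pre.items()):
            for p, w in pre.items():
                avail[p] -= w        # reserve this transition's tokens
            step.append(t)
    return step
```

The result depends on scan order, which is exactly why maximality is cheap but maximizing the step size is not.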
2009, 20(3):515-523.
Abstract: Compactness is an important property of fuzzy logic systems. It has been proved that Łukasiewicz propositional logic, Gödel propositional logic, Product propositional logic and the formal deductive system L* are all compact. The aim of the present paper is to prove the compactness of the fuzzy logic system NMG by characterizing maximally consistent theories and by proving the satisfiability of consistent theories over NMG.
LI Ming-Shu , YANG Qiu-Song , ZHAI Jian
2009, 20(3):524-545.
Abstract: Nowadays it is widely accepted that the quality of software highly depends on the process carried out in an organization. As part of the effort to support software process engineering activities, research on software process modeling and analysis aims to provide an effective means to represent and analyze a process and, by doing so, to enhance understanding of the modeled process. In addition, an enactable process model can provide direct guidance for the actual development process, so the enforcement of the process model can directly contribute to improving software quality. In this paper, a systematic review is carried out to survey recent developments in software process modeling. 72 papers from 20 conference proceedings and 7 journals are identified as the evidence. The review aims to promote a better understanding of the literature by answering the following three questions: 1) What kinds of paradigms are existing methods based on? 2) What kinds of purposes does the existing research have? 3) What kinds of new trends are reflected in current research? After the systematic review, we present our software process modeling method, based on a multi-dimensional and integrated methodology intended to address several core issues facing the community.
FAN Xiao-Qin , JIANG Chang-Jun , WANG Jun-Li , PANG Shan-Chen
2009, 20(3):546-556.
Abstract: In a service-oriented environment, a single Web service can hardly satisfy a given request, so the composition of multiple Web services is required to fulfill the goal. Without considering the inherently stochastic and dynamic nature of Web services, most existing composition methods generate static plans; as a result, Web service composition often terminates in failure. In this paper, metrics for several random QoS dimensions and a QoS management architecture are presented, and a reliable Web service composition algorithm is designed based on the Markov decision process (MDP), a dynamic control method for stochastic discrete event systems (SDES). Experimental results demonstrate that the success rate of Web service composition is improved greatly.
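The MDP machinery behind such a composition planner can be sketched with standard value iteration, which computes the expected long-run value of each composition state under the best action. Everything below (state names, transition probabilities, reward shape) is an invented toy, not the paper's model:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Value iteration for a finite MDP.
    P[s][a]: list of (probability, next_state); R[s][a]: immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions[s]:       # terminal state, value stays 0
                continue
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

In a composition setting, actions would correspond to invoking candidate services and rewards to their (stochastic) QoS outcomes; the optimal policy is then read off by taking the maximizing action in each state.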
LI Nao , LI Ming-Shu , WANG Qing , ZHAO Chen , DU Shuan-Zhu
2009, 20(3):557-566.
Abstract: Most software process models are predefined. When applied in changing environments, they have to be adapted manually. To this end, this paper proposes an adaptive multilateral negotiation model for software process modeling, namely AMNM-PA. AMNM-PA uses agents to represent the entities involved in software processes, such as organizations, teams and persons, and dynamically and adaptively constructs software process models for given software projects by negotiation among the agents. AMNM-PA is based on non-stationary finite-horizon Markov decision processes and uses the model-independent Q-learning algorithm to choose negotiation strategies; it thus supports dynamic and adaptive negotiation in changing and unknown environments, meeting the environmental adaptability requirement of software process modeling. AMNM-PA has been implemented in the software process management system SoftPM.
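The model-independent Q-learning rule mentioned above can be sketched in a few lines of tabular code. The environment interface `env_step` is a hypothetical stand-in for a negotiation round; states, actions and rewards in a real AMNM-PA setting would encode negotiation positions and strategy payoffs:

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=2000,
               alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular, model-free Q-learning with epsilon-greedy exploration.
    env_step(s, a) -> (reward, next_state, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:                # explore
                a = random.randrange(n_actions)
            else:                                     # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            r, s2, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])    # TD update
            s = s2
    return Q
```

Because the update needs only sampled transitions, no model of the (possibly non-stationary) environment is required, which is the property the abstract relies on.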
LAN Yu-Qing , ZHAO Tong , GAO Jing , JIE Hui , JIN Mao-Zhong
2009, 20(3):567-582.
Abstract: Research on software quality models and software quality evaluation models has always been a hot topic in the area of software quality assurance and assessment. A great deal of domestic and foreign research has been devoted to building software quality models and quality assessment models, and certain accomplishments have been achieved in these areas. In recent years, platform building and systematization have become the trends in developing foundational software based on operating systems. Therefore, the quality evaluation of foundational software platforms has become an essential issue to be solved. This article analyzes and summarizes the current state of research on software quality models and software quality assessment models, focusing on depicting the development of quality evaluation for foundational software platforms. It also briefly discusses future research directions in the quality assessment of foundational software platforms, trying to establish a good foundation for this work.
LI Zhen , YANG Fang-Chun , SU Sen
2009, 20(3):583-596.
Abstract: Choosing the globally optimal execution plan is an important process in semantic Web service composition. Plan selection based on QoS remains challenging because heterogeneous QoS values make data aggregation and decision making hard. This paper presents a novel fuzzy multi-attribute decision making-based semantic Web service composition algorithm (FuMuCom) to solve the above difficulties for the first time. FuMuCom takes all possible QoS expression types (real numbers, intervals and linguistic expressions) into consideration. It includes three main steps: defuzzifying linguistic data, normalizing the decision matrix and evaluating alternatives synthetically. Other contributions of the paper include an extensible QoS ontology to express the heterogeneous QoS values, an ontology evolution strategy for aggregating QoS and a set of experiments that demonstrate the benefits and effectiveness of the approach.
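The three steps FuMuCom performs (defuzzify, normalize, evaluate) can be sketched as follows. The linguistic scale, centroid defuzzification and benefit-type max-normalization below are illustrative choices, not necessarily the paper's, and all criteria are treated as benefit-type for brevity:

```python
LINGUISTIC = {  # hypothetical triangular fuzzy numbers on a 0-10 scale
    'poor': (0, 0, 3), 'fair': (2, 5, 8), 'good': (7, 10, 10),
}

def defuzzify(v):
    """Map any QoS expression type to a crisp number: linguistic terms
    via triangle centroid, intervals via midpoint, numbers unchanged."""
    if isinstance(v, str):
        v = LINGUISTIC[v]
    if isinstance(v, tuple) and len(v) == 3:     # (low, mode, high)
        return sum(v) / 3.0
    if isinstance(v, tuple) and len(v) == 2:     # interval
        return (v[0] + v[1]) / 2.0
    return float(v)

def rank(alternatives, weights):
    """Defuzzify the decision matrix, normalize each criterion column by
    its maximum, then score each alternative by a weighted sum."""
    matrix = [[defuzzify(v) for v in row] for row in alternatives.values()]
    maxima = [max(col) or 1.0 for col in zip(*matrix)]
    return {name: sum(w * x / m for w, x, m in zip(weights, row, maxima))
            for (name, _), row in zip(alternatives.items(), matrix)}
```

Cost-type criteria, fuzzy weights and the synthetic evaluation operator of the actual algorithm would refine this skeleton.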
ZHOU Tian-Lin , SHI Liang , XU Bao-Wen , ZHOU Yu-Ming
2009, 20(3):597-607.
Abstract: This paper proposes physical refactoring and digs into its process and methods. Physical refactoring is a disciplined technique for restructuring the physical structure of a software system to improve the efficiency of software development while preserving the system's external behavior. It follows the best practices of refactoring to change the system in small, iterative steps, and applies refactorings according to the standards of physical design. Case studies demonstrate that physical refactoring may continuously improve software quality from the viewpoint of the physical structure.
BAI Xiang , MAO Yu-Ming , LENG Su-Peng , MAO Jian-Bing , XIE Jun
2009, 20(3):608-619.
Abstract: This paper proposes an adaptive p-persistent MAC scheme, named QDA-MAC (QoS differentiation based adaptive MAC scheme), for WLANs to maximize channel utilization and provide service differentiation among stations with different traffic classes. Specifically, different from previous work, the proposed scheme does not need to estimate the number of active stations for each priority class, yet still achieves channel utilization close to the optimal value by exploiting a new parameter, the persistent factor, whose optimal value can dynamically follow changes in the load based on a simple estimation of the network status. At the same time, the transmission probability of each priority class can be updated from the optimal persistent factor. Simulation and numerical results show that QDA-MAC achieves much higher channel utilization and shorter delay than the standard IEEE 802.11 DCF and IEEE 802.11e EDCA in a variety of WLAN environments.
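The idea of steering a p-persistent scheme with a multiplicative persistent factor, without counting active stations, can be illustrated with a toy single-class slot simulation. The update rule and constants here are invented for illustration and are not QDA-MAC itself:

```python
import random

def simulate(n_stations, slots=5000, p=0.5, f=1.1, seed=7):
    """Toy adaptive p-persistent MAC: each station transmits with
    probability p in every slot. After an idle slot p is multiplied by a
    persistent factor f (be more aggressive); after a collision it is
    divided by f (back off). This steers p toward roughly 1/n without
    ever estimating n. Returns (fraction of successful slots, final p)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        tx = sum(rng.random() < p for _ in range(n_stations))
        if tx == 0:                      # idle slot
            p = min(1.0, p * f)
        elif tx > 1:                     # collision
            p = max(1e-3, p / f)
        else:                            # exactly one transmitter
            successes += 1
    return successes / slots, p
```

With 10 stations the simulated utilization settles near the slotted p-persistent optimum (about 0.39 at p = 1/n), illustrating why a feedback-driven persistent factor can replace explicit station counting; per-class differentiation would apply different factors per priority.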
LIU Xin , ZHU Pei-Dong , PENG Yu-Xing
2009, 20(3):620-629.
Abstract: Based on the registration idea of the IRR (Internet Routing Registry), the concept of prefix policy is proposed, and a new Internet registry mechanism, E-IRR, is presented. E-IRR offers an efficient and defensive method to prevent hijacked routes. In E-IRR, every participant declares its prefix policies and makes use of the others' registered prefix policies to validate BGP routes. Measures are designed to guarantee the validity of prefix policies, and the security and performance of the mechanism are evaluated. The major benefits of the method are the balance it reaches between the capability of preventing prefix hijacks and a security mechanism's requirements for practical deployment, and the fact that it lends itself to stepwise deployment and needs no BGP extension for security. These properties, not shared by alternative approaches, make it an attractive and viable solution to the prefix hijacking problem.
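At its core, validating a BGP announcement against registered prefix policies is a containment-plus-length check on the announced prefix and its origin AS. A minimal sketch with Python's stdlib ipaddress module, using invented policy data (the real E-IRR policy format is richer):

```python
import ipaddress

# hypothetical registered prefix policies: origin AS -> [(prefix, max_len)]
POLICIES = {
    64500: [(ipaddress.ip_network('203.0.113.0/24'), 24)],
    64501: [(ipaddress.ip_network('198.51.100.0/22'), 24)],
}

def validate(prefix, origin_as):
    """Accept a route only if its origin AS registered a covering prefix
    policy and the announced prefix is no more specific than allowed."""
    net = ipaddress.ip_network(prefix)
    for registered, max_len in POLICIES.get(origin_as, []):
        if net.subnet_of(registered) and net.prefixlen <= max_len:
            return True
    return False
```

Rejecting announcements that are too specific (the max-length check) is what blocks the common sub-prefix hijack, where an attacker announces a longer, more-preferred prefix inside a victim's block.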
FU Yong-Quan , WANG Yi-Jie , ZHOU Jing
2009, 20(3):630-643.
Abstract: To deal with scalable and fast unbiased sampling in unstructured P2P systems, a sampling method based on multi-peer adaptive random walk (SMARW) is proposed. In this method, based on a multi-peer random walk process, a set of provisional peers are selected as agents that start the sampling processes, which speeds up sampling by returning a tunable number of samples each time. Meanwhile, after new samples are received, earlier agents are replaced with the newly sampled peers, which repeat the sampling process. With this simple replacement, the system is guaranteed with high probability to reach near-optimal load balance. Furthermore, SMARW adopts an adaptive distributed random walk adjustment process to increase the convergence rate of the sampling process. Detailed theoretical analysis and performance evaluation confirm that SMARW achieves a high level of unbiasedness in sampling and near-optimal load balancing capability.
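Unbiased sampling by random walk must correct for degree bias: a plain walk visits high-degree peers proportionally more often. A standard correction (not necessarily SMARW's exact adjustment) is a Metropolis-Hastings walk, sketched below on a toy topology:

```python
import random

def mh_random_walk(graph, start, steps, rng=random):
    """Metropolis-Hastings random walk on an undirected graph
    (node -> neighbor list). Accepting a move u -> v with probability
    min(1, deg(u)/deg(v)) makes the stationary distribution uniform
    over peers, i.e. an unbiased sample regardless of topology."""
    u = start
    for _ in range(steps):
        v = rng.choice(graph[u])
        if rng.random() < min(1.0, len(graph[u]) / len(graph[v])):
            u = v                      # accept the move
        # otherwise stay at u for this step
    return u
```

On the star graph below a plain walk would visit the hub half the time; the MH walk returns each of the four nodes with probability close to 1/4 once mixed.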
WEN Jun , JIANG Jie , DOU Wen-Hua
2009, 20(3):644-659.
Abstract: To meet the coverage challenges arising in directional wireless sensor networks, this paper presents two distributed direction optimization algorithms and a node scheduling protocol: the enhanced greedy algorithm (EGA), equitable direction optimization (EDO) and the neighbors sensing scheduling (NSS) protocol. EGA optimizes direction merely according to the number of uncovered targets and is used as the baseline for comparison. EDO adjusts the directions of nodes to cover critical targets preferentially and allocates sensing resources among nodes fairly to minimize the coverage differences between nodes. A utility function is introduced in EDO to assess the value a direction contributes to overall network sensing. The factors affecting the utility are the number of targets in each direction, the coverage of the targets and the neighbors' direction decisions. EDO always selects the direction with the maximum utility as the working direction. NSS arranges all sensors into multiple cover sets and allows a node to join several cover sets. By employing local cover sets, NSS identifies redundant nodes and decides whether they can sleep, taking residual energy into account. Nodes are activated in turn and energy is consumed evenly to prolong the network lifetime. Simulations show that EDO outperforms EGA by up to 30% in terms of critical coverage, and that the combination of EDO and NSS prolongs the lifetime distinctly.
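EDO's core step, picking the direction of maximum utility, can be sketched as a greedy choice over uncovered-target weights. The utility below is a deliberate simplification: it keeps only the uncovered-target term, with a per-target weight standing in for criticality, and omits the neighbor-decision factor of the full algorithm:

```python
def choose_direction(directions, covered, weight):
    """Greedy utility maximization for one directional sensor.
    directions: direction id -> set of targets sensed in that direction;
    covered: targets already covered elsewhere;
    weight: target -> importance (critical targets weigh more, default 1)."""
    def utility(targets):
        # value contributed = total weight of still-uncovered targets
        return sum(weight.get(t, 1) for t in targets if t not in covered)
    return max(directions, key=lambda d: utility(directions[d]))
```

Run per node with `covered` updated from neighbors' announced decisions, this reproduces the flavor of preferring critical targets over merely numerous ones.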
XIONG Wei , XIE Dong-Qing , JIAO Bing-Wang , LIU Jie
2009, 20(3):660-670.
Abstract: This paper presents a self-adaptive load balancing algorithm, in which each node creates a local load distribution view with a passive load statistics method and a local file request view with a file request statistics method. When load imbalance exists in the system, a heavily loaded node makes the logical links pointing to itself point instead to a lightly loaded node in its local load distribution view; as the indegree of the heavily loaded node decreases and that of the lightly loaded node increases, the magnitude of the load imbalance decreases. When the request load of a heavily loaded node is high, the node uses its local file request view to identify popular files and caches them on corresponding target nodes. Results from simulation experiments indicate that the system achieves good load balance under a Zipf-like request distribution when it runs the self-adaptive load balancing algorithm. Caching requires some extra messages, but fewer than the cache-hit messages under certain conditions, so caching can reduce the overall load of the system.
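The Zipf-like workload driving this evaluation is easy to generate with the stdlib; the snippet also shows why the node holding the most popular file becomes a hot spot (the view maintenance and caching decisions themselves are not modeled here):

```python
import bisect
import collections
import random

def zipf_requests(n_files, n_requests, alpha=1.0, rng=random):
    """Zipf-like workload: file i (ranked from 1) is requested with
    probability proportional to 1 / i**alpha."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_files + 1)]
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    # inverse-CDF sampling over the cumulative weights
    return [bisect.bisect_left(cum, rng.random() * total)
            for _ in range(n_requests)]

# example workload: 100 files, 10000 requests
reqs = zipf_requests(100, 10000, rng=random.Random(5))
load = collections.Counter(reqs)   # requests per file (i.e. per holder)
```

With alpha = 1 the top-ranked file draws roughly a fifth of all requests, which is the skew the caching mechanism above is designed to absorb.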
MING Liang , ZHAO Gang , XIE Gui-Hai , WANG Chun-Lei
2009, 20(3):671-681.
Abstract: Smart space is a result of pervasive computing, embodying the integration of computer, communication and digital media technologies, and makes it possible to integrate the physical world and the virtual world of the information space as a whole. Location awareness is a key technology of smart space and a basic service needed by other applications. Multidimensional scaling (MDS) is a technique from mathematical psychology that analyzes distance or dissimilarity measures between points and produces a representation of the data in a small number of dimensions. In this paper, MDS is used to derive node locations that fit estimated distances, and a smart space oriented location awareness method (SSOLA) is proposed, which can position all the nodes of the network accurately using only connectivity information, i.e., who is within communication range of whom. Provided with known positions for several anchor nodes, the absolute positions of all nodes can be obtained by SSOLA. Simulation studies demonstrate that SSOLA is more robust to measurement error and has lower positioning error, lower time cost and better scalability than previous proposals under the same conditions. Furthermore, it can achieve comparable results using many fewer anchor nodes than previous methods, and even yields relative coordinates when no anchor nodes are available. SSOLA can be used in large and heavy-traffic wireless environments, such as the intelligent battlefield and tactical internet.
LIN Pin , WU Wen-Ling , WU Chuan-Kun
2009, 20(3):682-691.
Abstract: In this paper, a hash function with a lower rate but higher efficiency is proposed, which can be built on insecure compression functions. The security of the scheme is proved in the black-box model, and some compression functions based on block ciphers are given for building the scheme. It is also shown that the key schedule is a more important factor affecting the efficiency of a block-cipher-based hash function than the rate. The new scheme needs only 2 keys, and its key schedule can be pre-computed. This means the new scheme need not re-schedule the keys at every step of the iteration, which improves its efficiency.
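The efficiency argument about fixed keys can be made concrete with a toy sketch: a feed-forward iteration h = E_k(m XOR h) XOR h that alternates two fixed keys, so a real cipher's key schedule would run exactly once up front rather than per message block. The 64-bit ARX "cipher" below is an insecure stand-in, purely for illustration, and the construction is a generic fixed-key feed-forward, not the paper's actual scheme:

```python
MASK = (1 << 64) - 1

def toy_cipher(key, block, rounds=8):
    """Toy 64-bit ARX permutation standing in for a block cipher.
    Every step (add, rotate, multiply-by-odd) is bijective mod 2**64,
    so toy_cipher(key, .) is a permutation -- but it is NOT secure."""
    x = block & MASK
    for r in range(rounds):
        x = (x + key + r) & MASK                    # add round key
        x = ((x << 13) | (x >> 51)) & MASK          # rotate left 13
        x = (x * 0x9E3779B97F4A7C15) & MASK         # multiply by odd const
    return x

def hash_blocks(blocks, k1=0x0123456789ABCDEF, k2=0xFEDCBA9876543210):
    """Iterate a fixed-key feed-forward compression over message blocks,
    alternating two fixed keys k1, k2. Because the keys never change,
    a real cipher's key schedule could be pre-computed once."""
    h = 0
    for i, m in enumerate(blocks):
        k = k1 if i % 2 == 0 else k2
        h = (toy_cipher(k, (m ^ h) & MASK) ^ h) & MASK   # E_k(m^h) ^ h
    return h
```

Padding, length encoding and the black-box security argument of the real construction are all omitted.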
CHEN Hu , ZHANG Fu-Tai , SONG Ru-Shun
2009, 20(3):692-701.
Abstract: This paper studies proxy signatures in the newly proposed certificateless public key setting. The authors present a very strong security model for certificateless proxy signature schemes against both Super Type I and Super Type II adversaries. An efficient construction of a certificateless proxy signature scheme using bilinear maps is also put forward. The security of this scheme is based on the infeasibility of the computational Diffie-Hellman problem and is formally proven under the security model of certificateless proxy signature schemes. Owing to its security, high efficiency and freedom from certificate management, it may have practical applications in electronic commerce, mobile agent systems, etc.
CHI Xiao-Yu , SHENG Bin , YANG Meng , CHEN Yan-Yun , WU En-Hua
2009, 20(3):702-712.
Abstract: This paper presents a method for capturing and efficiently simulating a large number of plant leaves, realistically showing their weathering process in autumn with various texture patterns and appearances. In generating the texture patterns of leaves, different from existing techniques that model appearance manifolds from a single input material sample, this paper captures the BRDF and BTDF properties of leaf surfaces from multiple samples, which contribute to a final, complete aging manifold representing the appearance change through the aging process of the leaves. Combining this aging manifold with botanical knowledge, the method can synthesize the appearance and textures of many different leaves and extrapolate texture patterns beyond the input samples. The effectiveness of the method is demonstrated in several applications, including subjects of different ages and types. In most cases, it is able to reproduce texture patterns consistent with those observed in nature.
LU Wei , ZENG Ding-Hao , PAN Jin-Gui
2009, 20(3):713-723.
Abstract: This paper analyzes current mesh simplification methods and proposes an algorithm based on the quadric error metric (QEM) for feature preservation. It adopts half-edge collapse for mesh simplification and modifies QEM to remove the discontinuities of appearance attributes. By analyzing the relationships between vertices and discrete appearance seams, a new formula is obtained which enables edge contraction to preserve the appearance; meanwhile, a proper replacement is selected for the wedge in each triangle affected by a half-edge collapse operation, to avoid material distortion. Experimental results demonstrate that the algorithm achieves efficiency similar to QEM with desirable geometry and feature preservation.
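The QEM cost driving such collapses is the quadratic form vᵀQv, where Q accumulates the plane quadrics ppᵀ of a vertex's incident faces, and a half-edge collapse u → v is charged the combined quadric evaluated at the surviving vertex. A minimal geometric sketch (the attribute/wedge handling that the paper modifies is omitted):

```python
def plane_quadric(a, b, c, d):
    """Fundamental error quadric K_p = p p^T for the plane
    ax + by + cz + d = 0 (coefficients normalized: a^2+b^2+c^2 = 1)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_q(q1, q2):
    """Sum two 4x4 quadrics (quadrics are additive over planes)."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def error(q, v):
    """Quadric error v^T Q v with homogeneous v = (x, y, z, 1):
    the sum of squared distances to the accumulated planes."""
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))

def half_edge_cost(qu, qv, v):
    """Cost of the half-edge collapse u -> v: unlike full edge collapse,
    the surviving vertex v keeps its position, so the cost is simply the
    combined quadric evaluated at v."""
    return error(add_q(qu, qv), v)
```

Simplification then repeatedly performs the cheapest half-edge collapse; keeping v in place is what makes half-edge collapse attractive for attribute-carrying meshes.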
2009, 20(3):724-733.
Abstract: The traditional 3D deformable model is inefficient in face reconstruction. To address this problem, a linear deformable model based on facial landmarks is proposed. First, a 2D template-based alignment algorithm is developed to establish correspondence between faces automatically, and a facial linear model is built. Then, a dynamic component-based deformation model is proposed to select the most correlated components as the basis space. Finally, the facial shape is reconstructed by a double deformation framework, a combination of global deformation and local modification. Experimental results show that this method outperforms conventional methods in modeling precision, and the 3D faces generated from real-world photos based on a few landmarks are rather realistic.
SONG Ming-Li , WANG Hui-Qiong , CHEN Chun , YE Xiu-Qing , GU Wei-Kang
2009, 20(3):734-743.
Abstract: In this paper, a probabilistic model is proposed for tone reproduction of high dynamic range (HDR) images. The method learns a distribution for the local pixel energy of the tone. With the constraint of gradient variation on the HDR image, an energy distribution is set up based on the similarity between the gradient variation on the HDR image and that on the LDR image. The probabilistic framework formulates the tone mapping operation as an energy minimization process via maximum a posteriori (MAP) deduction. The proposed method generates LDR images with more visual information than previous ones. Experimental results show that the approach is convincing and competitive, and it can be applied in areas such as advanced image editing and display development.
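For contrast with the MAP formulation, the kind of baseline global operator such methods are usually compared against can be written in a few lines; this is a generic Reinhard-style mapping, not the paper's probabilistic method:

```python
import math

def tone_map(luminances, a=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping: scale the scene by its
    log-average luminance (a crude exposure estimate), then compress
    each value with L / (1 + L) into the displayable range [0, 1)."""
    n = len(luminances)
    log_avg = math.exp(sum(math.log(eps + L) for L in luminances) / n)
    scaled = [a * L / log_avg for L in luminances]
    return [L / (1.0 + L) for L in scaled]
```

Because the compression is global and monotone, local contrast (the gradient information the paper's energy term preserves) is flattened in bright regions, which is precisely the shortcoming gradient-domain and MAP approaches target.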
ZHU Rui , GUO Chang-Guo , WANG Huai-Min
2009, 20(3):744-753.
Abstract: Transactions in service composition are long-lived: a global transaction is divided into several distributed sub-transactions. Atomicity is preserved by using compensating transactions, which semantically undo the effects of completed sub-transactions in case the global transaction aborts. However, the cost of compensation may be high and its methods complex. To overcome this limitation, a novel scheduling algorithm named STCD (sub-transaction committing delay) is presented based on an analysis of compensation. Different from traditional methods, sub-transactions dynamically determine their committing time according to both the cost of compensation and the state of execution. The correctness of the proposed algorithm is proved. Simulations show that the STCD algorithm can confine the compensation sphere and reduce the cost of compensation.
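The compensation baseline that STCD improves on is the classic saga pattern: run sub-transactions forward and, on abort, run the compensations of the completed ones in reverse order. A minimal sketch (STCD's delayed-commit scheduling is not modeled here):

```python
class Saga:
    """Run sub-transactions in order; on failure, semantically undo all
    completed steps by running their compensations in reverse."""

    def __init__(self):
        self.steps = []                    # list of (action, compensation)

    def add(self, action, compensation):
        self.steps.append((action, compensation))

    def run(self):
        done = []                          # compensations of completed steps
        for action, comp in self.steps:
            try:
                action()
                done.append(comp)
            except Exception:
                for comp in reversed(done):   # compensate in reverse order
                    comp()
                return False               # global transaction aborted
        return True                        # global transaction committed
```

Delaying a sub-transaction's commit, as STCD does, shrinks the set of already-committed steps that would need compensating when a later sub-transaction fails.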
XIANG Xiao-Jia , SHU Ji-Wu , ZHENG Wei-Min
2009, 20(3):754-765.
Abstract: A snapshot-based fine-granularity versioning technique is presented to retain history data for only a single directory or a single file, bringing flexibility to multi-version file systems. Adopting a strategy of searching name space and version space separately, this paper also presents a backward-inheriting path-finding mechanism in version space. This mechanism benefits both performance and management, because it can exploit the coupling relationship between versions to optimize the data layout of versions and build a hierarchy in version space to accelerate the path-finding procedure. In addition, fast index structures for directory versions and file versions are designed. The prototype file system, THVFS, achieves both good performance and high availability with these techniques. The experimental results show that the average time of searching old versions in THVFS is reduced by 34.4% compared with ext3cow, a well-known multi-version file system. In the trace experiment, the average read response time in THVFS was 12% less than in ext3, and only 80% extra space was needed to retain all history data when snapshots were taken every 72 minutes.
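One plausible reading of backward-inheriting lookup is a version chain in which each version records only the entries it changed and defers everything else to the snapshot it derives from. The sketch below is illustrative of that idea only, not THVFS's actual on-disk mechanism:

```python
class Version:
    """A directory version that stores only its own changes and inherits
    unchanged entries backward from the snapshot it was derived from."""

    def __init__(self, parent=None):
        self.parent = parent       # older snapshot this version derives from
        self.entries = {}          # name -> inode, entries changed HERE only

    def lookup(self, name):
        """Backward-inheriting lookup: walk toward older snapshots until
        some version actually recorded the entry; None if never recorded."""
        v = self
        while v is not None:
            if name in v.entries:
                return v.entries[name]
            v = v.parent
        return None
```

Storing only deltas keeps snapshots cheap; the hierarchy and fast index structures the abstract mentions would then bound how far such a backward walk has to go.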
2009, 20(3):766-778.
Abstract: In this paper, according to the characteristics of signal processing in cluster-based SR systems, the following scheduling issues are investigated. First, a universal scheduler model suitable for signal processing in cluster-based SR systems is proposed; the model is simple and efficient, and it avoids the bottleneck problem. Second, a novel three-step scheduling strategy, RQBB, is put forward, in which the existing DASAP algorithm is used in step 1. Third, two heuristic algorithms, MQB and MSD, are proposed; they are used in steps 2 and 3 of RQBB, respectively. MQB is a fair algorithm that strives to give all accepted tasks high QoS benefit (a high mean of QoS levels and a small difference between QoS levels). MSD is designed to guarantee high system throughput and achieve load balancing without violating the timing constraints of accepted tasks. Extensive simulation experiments are performed to compare RQBB with RQRB, DASAP and DALAP. Experimental results indicate that RQBB improves QoS benefit more than the others and achieves load balancing while guaranteeing high schedulability.