• Volume 18, Issue 12, 2007 Table of Contents
    • Special Issue Articles
    • A Hierarchical Service Composition Framework Based on Service Overlay Networks

      2007, 18(12):2967-2979.

      Abstract (7972) HTML (0) PDF 866.54 K (7350)

      Abstract: As the number of Web services on the Internet grows continuously, these services can be interconnected to form a service overlay network (SON). On the basis of a SON, building value-added services by service composition is an effective way to satisfy customers' changing functional and non-functional QoS (quality of service) requirements. However, previous research on QoS-aware service composition in SONs has mainly focused on contexts where services have simple interactions, and it cannot support application scenarios with complex business collaboration in electronic commerce. This paper proposes HOSS (hierarchical service composition framework based on service overlay networks), a scheme built on the active service overlay network (ASON), a kind of programmable SON. HOSS can construct more general-purpose SONs by describing the relations among services with business protocols. In HOSS, business protocols rather than interactive messages are adopted to simplify the description of service composition requirements, and they are mapped to dynamic user views of the SON to implement service composition on demand.

    • An Extended Deterministic Finite Automata Based Method for the Verification of Composite Web Services

      2007, 18(12):2980-2990.

      Abstract (7885) HTML (0) PDF 678.17 K (7625)

      Abstract: To simplify and automate the verification of composite Web services, a method based on extended deterministic finite automata (EDFA) is presented. An EDFA describes a Web service precisely: nodes represent the states a service maintains during its interactions with clients, and state transitions represent message exchanges between the service and its clients. The automaton thus depicts the temporal sequences of messages, i.e., the behavior of the service. With the EDFA-based method, one can verify whether the capabilities of a service meet system requirements and whether logic errors exist in the interactions between a service and its clients. Compared with other methods, this one is better suited to verifying composite Web services in an open environment.
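The interaction-checking idea above can be sketched with a plain deterministic finite automaton over message labels. The states, messages, and purchase scenario below are hypothetical illustrations, not taken from the paper (whose EDFA extends this basic machine):

```python
# A minimal sketch of checking a client-service message sequence against a
# service's DFA. All state and message names here are invented for illustration.

class ServiceDFA:
    def __init__(self, start, accepting, transitions):
        self.start = start                  # initial service state
        self.accepting = accepting          # states where a session may legally end
        self.transitions = transitions      # (state, message) -> next state

    def accepts(self, trace):
        """Return True iff the message trace is a legal interaction."""
        state = self.start
        for msg in trace:
            key = (state, msg)
            if key not in self.transitions:
                return False                # illegal message in this state
            state = self.transitions[key]
        return state in self.accepting

# Hypothetical purchase service: request -> quote -> (accept | reject)
dfa = ServiceDFA(
    start="idle",
    accepting={"done"},
    transitions={
        ("idle", "request"): "quoted",
        ("quoted", "quote"): "offered",
        ("offered", "accept"): "done",
        ("offered", "reject"): "done",
    },
)

print(dfa.accepts(["request", "quote", "accept"]))  # legal session
print(dfa.accepts(["request", "accept"]))           # logic error: quote skipped
```

A rejected trace pinpoints exactly the kind of interaction logic error the verification method is meant to catch.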

    • A Petri Net-Based Semantic Web Service Automatic Composition Method

      2007, 18(12):2991-3000.

      Abstract (8315) HTML (0) PDF 579.95 K (7826)

      Abstract: Web service composition allows developers to create applications rapidly. However, owing to the tremendous growth in the number of available Web services, the service composition problem remains a challenging research issue. This paper introduces an automatic Web service composition method that considers both the input/output type compatibility and the behavioral constraint compatibility of services. The available services are translated into a set of Horn clause-like rules; the user's input and output requirements are modeled as a set of facts and a goal statement in the Horn clauses, respectively. A Petri net is then chosen to model the Horn clause set, and the T-invariant technique is used to determine whether composite services fulfilling the user's input/output requirements exist. Two algorithms are presented for obtaining the Petri net models of composite Web services that satisfy not only the user's input/output requirements but also the user's behavioral constraints.
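The input/output matching stage can be sketched as forward chaining over Horn-clause-like service rules. The paper models the rule set as a Petri net and decides existence via T-invariants; the chaining below only illustrates the underlying reachability idea, with entirely hypothetical services:

```python
# Sketch: each service is a rule (inputs => outputs), the user's inputs are
# facts, and forward chaining decides whether the requested outputs are
# producible. Service names and data types are invented for illustration.

def compose(services, user_inputs, user_outputs):
    """services: {name: (set_of_input_types, set_of_output_types)}.
    Returns a list of fired services, or None if the goal is unreachable."""
    facts = set(user_inputs)
    plan = []
    changed = True
    while changed:
        changed = False
        for name, (ins, outs) in services.items():
            # fire a service once, when its inputs are available and it
            # still contributes at least one new fact
            if name not in plan and ins <= facts and not outs <= facts:
                facts |= outs
                plan.append(name)
                changed = True
    return plan if set(user_outputs) <= facts else None

# Hypothetical services for illustration.
services = {
    "geocode":  ({"address"}, {"coordinates"}),
    "weather":  ({"coordinates"}, {"forecast"}),
    "currency": ({"amount", "rate"}, {"converted"}),
}
print(compose(services, {"address"}, {"forecast"}))   # chain of two services
print(compose(services, {"address"}, {"converted"}))  # missing "rate" input
```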

    • Determination and Computation of Behavioral Compatibility for Web Services

      2007, 18(12):3001-3014.

      Abstract (8542) HTML (0) PDF 829.95 K (7700)

      Abstract: Ensuring that services are compatible at the behavioral level is an important issue for integrating services and making them collaborate seamlessly. Based on the proposed concept of a service view, a formal definition of behavioral compatibility between services is given. A π-calculus-based method is then proposed to determine behavioral compatibility qualitatively and compute it quantitatively. First, an algorithm automatically transforms service behaviors and inter-service interactions into π-calculus processes. Second, the method determines qualitatively whether two services are behaviorally compatible with the help of operational and transitional semantics and formal deduction. Finally, an algorithm based on the Expansion Law computes the degree of compatibility between services quantitatively. Applying the method to scenarios of dynamic service composition and replacement shows that it is very useful for correctly building and reliably executing service compositions.

    • Specification and Verification of Service-Oriented Enterprise Application Integration System

      2007, 18(12):3015-3030.

      Abstract (8374) HTML (0) PDF 1.27 M (8597)

      Abstract: On the basis of research findings on service-oriented architecture (SOA), this paper presents a formal, systematic SOA analysis, verification and validation methodology called SOARM (SOA reference model), an ESB (enterprise service bus)-centric model based on Petri nets and temporal logic. SOARM is consumer-centric: consumers publish their application specifications and requirements, which service providers follow when producing or customizing services to support the application. The service interface and the enterprise service bus for service realization are two key parts of service-oriented design. When a service is provided or required via the Internet, semantic consistency becomes the critical issue in virtualized computing environments. The architecture model tackles this issue with a novel scheme: Petri nets are used to visualize the structure and model the behavior of service architectures, while temporal logic is used to specify the required semantic consistency of a service. Following the idea of divide-and-conquer, compositionality is introduced into SOA model checking and refinement checking, decomposing the verification task for the whole system into several smaller subtasks on its subsystems; the paper shows how to apply this approach to specify an integrated front-banking system and analyze its constraints.

    • Modeling and Reasoning of the Software Component Based System Recovery Based on Survivability Specification

      2007, 18(12):3031-3047.

      Abstract (7297) HTML (0) PDF 882.82 K (6279)

      Abstract: A component-based system can provide a predefined survivability specification that consists of corresponding degraded services in the presence of various kinds of malicious attacks, system failures or accidents. The main contributions of this paper are: (1) a method to represent the service core based on component families and installation orders, which precisely captures the system services perceived by users; (2) reasoning rules for system recovery based on component compatibility and installation execution, which are used to judge the success property (the newly started service works well) and the safety property (formerly started services are not damaged); and (3) algorithms to simplify installation execution based on the concept of projection, which support the reasoning analysis of system recovery at large scale. Through the analysis process based on the survivability specification, the corresponding reasoning rules can be systematically applied in practice. A component-based system named MVoD (mobile video-on-demand) is used to demonstrate the practicability and efficiency of the formal model and the analysis method.

    • BGP Extension to Support Inter-Domain Distributed Packets Filtering

      2007, 18(12):3048-3059.

      Abstract (5030) HTML (0) PDF 661.35 K (5051)

      Abstract: Trustworthiness is an important characteristic of the next-generation Internet. The routing system of the present Internet forwards packets only according to the destination IP address, so forged packets with spoofed source IP addresses are also forwarded to the destination, which impairs the security of the receiver and conceals the real identity of the sender. A trustworthy Internet requires the routing system not only to forward packets correctly but also to validate that packets come from their claimed senders. Inter-domain distributed packet filtering is an effective method for filtering out spoofed packets. This paper proposes extending BGP with a route selection notice to provide filtering criteria. With this support, border routers can validate incoming packets and filter spoofed packets from false autonomous systems. Simulation results indicate that the BGP route selection notice does not impair the routing function of BGP, and that with proper design, acceptable bandwidth cost and fast convergence can be achieved simultaneously.
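The filtering step can be sketched as validating a packet's source prefix against its incoming interface. The interfaces, AS numbers, and prefixes below are invented; in the paper the filtering criteria would be distributed automatically by the proposed BGP route selection notice, not configured by hand:

```python
import ipaddress

# Sketch: each border interface keeps the set of source prefixes legitimately
# reachable through it (hand-filled here for illustration). A packet whose
# source does not match its ingress interface is treated as spoofed.

filter_table = {
    "if0": [ipaddress.ip_network("10.1.0.0/16")],   # hypothetical neighbor AS 100
    "if1": [ipaddress.ip_network("10.2.0.0/16")],   # hypothetical neighbor AS 200
}

def validate(iface, src_ip):
    """Return True iff the source address is plausible for this interface."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in filter_table.get(iface, []))

print(validate("if0", "10.1.5.9"))   # arrives on the expected interface
print(validate("if1", "10.1.5.9"))   # spoofed: wrong ingress for this source
```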

    • Identifying Heavy Hitters in High-Speed Network Monitoring

      2007, 18(12):3060-3070.

      Abstract (4866) HTML (0) PDF 619.25 K (5643)

      Abstract: Given the limited traffic measurement capability of high-speed networks, identifying heavy hitters precisely and in time is valuable for detecting large-scale network security incidents. This paper proposes an algorithm for identifying heavy hitters based on a two-level replacement mechanism, in which LRU replacement and LEAST replacement are combined to improve accuracy. Heavy hitters can be identified accurately within a small, constant memory space, so the data can be processed more rapidly in the limited space of SRAM. Since no additional memory is needed as traffic volume grows, the algorithm is scalable.
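The two-level replacement idea can be sketched as an LRU table that absorbs the long tail of small flows, feeding a fixed-size heavy table whose LEAST-counted entry is evicted. Table sizes, the promotion threshold, and the exact eviction policy below are illustrative simplifications, not the paper's algorithm:

```python
from collections import OrderedDict

class HeavyHitters:
    """Simplified two-level sketch: LRU front table + LEAST-evicted heavy table."""

    def __init__(self, lru_size=4, heavy_size=2, promote_at=3):
        self.lru = OrderedDict()     # flow -> count, ordered by recency
        self.heavy = {}              # flow -> count
        self.lru_size = lru_size
        self.heavy_size = heavy_size
        self.promote_at = promote_at

    def update(self, flow):
        if flow in self.heavy:
            self.heavy[flow] += 1
            return
        # count in the LRU table, moving the flow to the most-recent position
        self.lru[flow] = self.lru.pop(flow, 0) + 1
        if self.lru[flow] >= self.promote_at:
            count = self.lru.pop(flow)
            if len(self.heavy) < self.heavy_size:
                self.heavy[flow] = count
            else:
                victim = min(self.heavy, key=self.heavy.get)  # LEAST replacement
                if self.heavy[victim] < count:
                    del self.heavy[victim]
                    self.heavy[flow] = count
                # otherwise the candidate is discarded (an approximation)
        elif len(self.lru) > self.lru_size:
            self.lru.popitem(last=False)                      # LRU replacement

hh = HeavyHitters()
for f in ["a"] * 5 + ["b", "c", "d", "a", "e", "b", "b", "b"]:
    hh.update(f)
print(hh.heavy)  # the two persistent flows survive in constant space
```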

    • A Mathematical Model for Network Self-Organized Evolvement

      2007, 18(12):3071-3079.

      Abstract (4836) HTML (0) PDF 605.25 K (5704)

      Abstract: This paper develops a self-organized dynamic network model based on the self-organizing nature of networks. In this model, network behavior is driven by each node's trade-off between the value of information and the cost of establishing a link, and the evolution of the network is described as a convergent stochastic process. The paper gives a detailed derivation of the possible results of network evolution. Notably, the PGP (pretty good privacy) certificate network is a good example of this model. Furthermore, as the parameters of the model change, the self-organized evolution exhibits a variety of possible outcomes, a phenomenon consistent with self-organized criticality theory. The work provides a new method for research on topological models and self-organization theory in computer networks.
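The value-versus-cost trade-off can be illustrated with a toy simulation. The decay of information value with node degree, and all the numbers below, are assumptions for illustration and not the paper's actual model:

```python
import random

# Toy sketch: a link is added only when the information value it brings
# exceeds the cost of establishing it. We let the value decay with the
# endpoints' existing degree (a stand-in for information overlap), which
# makes the stochastic process settle into a bounded-degree topology.

def evolve(n_nodes, link_cost, info_value, steps, seed=0):
    rng = random.Random(seed)
    edges = set()

    def deg(n):
        return sum(1 for e in edges if n in e)

    for _ in range(steps):
        a, b = rng.sample(range(n_nodes), 2)
        # trade-off: marginal value of the link vs. cost of establishing it
        if info_value / (1 + max(deg(a), deg(b))) > link_cost:
            edges.add(frozenset((a, b)))
    return edges

edges = evolve(n_nodes=20, link_cost=1.0, info_value=4.0, steps=500)
print(len(edges))  # the process converges to a degree-bounded network
```

With these parameters a link is accepted only while both endpoints have degree at most 2, so no node ever exceeds degree 3: a simple example of a self-limiting evolution rule.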

    • A Tunable Interdomain Egress Selection Algorithm Based on the Failure Duration

      2007, 18(12):3080-3091.

      Abstract (4233) HTML (0) PDF 676.09 K (4775)

      Abstract: Hot-potato routing is a mechanism widely employed for border gateway protocol (BGP) interdomain egress selection in large Internet service providers (ISPs). Recent work has shown that hot-potato routing is convoluted and restrictive, so it can impact the robustness of interdomain routing. Although much research has been done to replace it with new mechanisms, these methods often ignore link failures and their durations, which arise as part of everyday network operations. In this paper, a tunable interdomain egress selection algorithm based on IP link failure duration is proposed. The algorithm is tunable as traffic engineering goals and routing stability requirements change, and it also meets the real-time requirements of routers. Simulation results show that the algorithm strikes a good balance among multiple goals.
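The tunable trade-off can be sketched as a weighted score per candidate egress, mixing the hot-potato criterion (IGP path cost) with a stability penalty derived from recent failure duration. The weight `alpha`, the scoring formula, and all numbers below are illustrative assumptions, not the paper's algorithm:

```python
# Sketch: score each egress by blending IGP cost with recent failure time;
# alpha = 0 recovers pure hot-potato routing, larger alpha favors stability.

def pick_egress(candidates, alpha):
    """candidates: {egress: (igp_cost, failure_seconds_in_window)}."""
    def score(egress):
        igp_cost, fail_time = candidates[egress]
        return (1 - alpha) * igp_cost + alpha * fail_time
    return min(candidates, key=score)

candidates = {
    "egressA": (10, 120.0),   # hypothetical: close, but its link keeps flapping
    "egressB": (30, 0.0),     # hypothetical: farther away, but stable
}
print(pick_egress(candidates, alpha=0.0))  # pure hot-potato choice
print(pick_egress(candidates, alpha=0.5))  # stability-aware choice
```

Retuning `alpha` as operational goals shift is exactly the kind of knob the abstract describes.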

    • Optimal Design of AQM Routers with D-Stable Regions Based on ITAE Performance

      2007, 18(12):3092-3103.

      Abstract (4658) HTML (0) PDF 721.22 K (5284)

      Abstract: Active queue management (AQM) is a hotspot in current studies on network congestion control, and the feedback control strategy is its most pivotal element. This paper applies an optimization method for proportional-integral-differential (PID) controller design with D-stable regions, based on the integral of time-weighted absolute error (ITAE) performance index, to AQM routers, which permits the designer to control the desired dynamic performance of the closed-loop system. A set of desired D-stable regions in the complex plane is first specified, and then a numerical optimization algorithm based on the ITAE performance index finds controller parameters such that all the roots of the closed-loop system lie within the specified regions. The resulting controller for AQM routers can detect and control congestion effectively and predictively. Compared with the random early detection (RED) and proportional-integral (PI) algorithms in experimental simulations, the proposed method, called the DITAE-PID method, is more efficient and robust, achieving a lower packet loss rate and higher link utilization.
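The control loop being tuned can be sketched as a discrete PID controller mapping queue-length error to a drop probability. The gains, sampling interval, and reference queue length below are illustrative placeholders; in the paper they would come from the ITAE optimization over the chosen D-stable region:

```python
# Sketch of a PID-based AQM loop: drop probability is driven by the error
# between the instantaneous queue length and a reference queue length.

class PIDAQM:
    def __init__(self, kp, ki, kd, q_ref, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.q_ref, self.dt = q_ref, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def drop_prob(self, q_len):
        err = q_len - self.q_ref
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        p = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(1.0, max(0.0, p))       # clamp to a valid probability

# Illustrative gains, NOT the optimized DITAE-PID parameters.
ctrl = PIDAQM(kp=0.001, ki=0.0005, kd=0.00001, q_ref=100, dt=0.01)
for q in (150, 140, 120, 100, 90):         # queue drains toward the target
    print(ctrl.drop_prob(q))               # drop pressure eases off
```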

    • Optimization of the Proxy Node Selection for In-Network Data Processing in Wireless Sensor Networks

      2007, 18(12):3104-3114.

      Abstract (4818) HTML (0) PDF 644.46 K (5132)

      Abstract: Energy is the most crucial resource of wireless sensor networks, and a significant amount of it is consumed by message passing, so much research has focused on minimizing communication overhead. In-network data processing is a commonly adopted technique, in which an intermediate node called the proxy node is chosen to process the source data streams and forward the result to the sink node, thereby reducing transmission energy consumption. Consequently, an optimal selection of the proxy node can minimize the transmission overhead. This paper formulates the transmission energy consumption of proxy node selection and proposes an energy-efficient selection strategy (EESS), which greatly reduces the transmission energy consumption of a query without knowledge of the whole network topology. Compared with previously proposed methods, EESS uses fewer control messages and achieves better performance. Simulation results show that EESS performs well even in low-density networks and for long-distance queries, which potentially improves the lifetime of the sensor network.
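The optimization target can be sketched as picking the node that minimizes total transmission cost: each source's data rate times its hop distance to the proxy, plus the smaller aggregated result stream from the proxy to the sink. The topology, rates, and reduction factor below are invented, and this brute-force scan ignores EESS's key property of not needing global topology:

```python
# Sketch of the cost model behind proxy selection (not the EESS strategy
# itself, which avoids global topology knowledge).

def best_proxy(nodes, sources, sink, hops, reduction=0.2):
    """hops[(a, b)]: hop distance; sources: {node: data_rate};
    reduction: ratio of result-stream rate to total input rate."""
    def cost(proxy):
        inbound = sum(rate * hops[(src, proxy)] for src, rate in sources.items())
        outbound = reduction * sum(sources.values()) * hops[(proxy, sink)]
        return inbound + outbound
    return min(nodes, key=cost)

# Hypothetical 2-candidate topology.
hops = {("s1", "p1"): 1, ("s2", "p1"): 1, ("p1", "k"): 3,
        ("s1", "p2"): 2, ("s2", "p2"): 2, ("p2", "k"): 1}
sources = {"s1": 10, "s2": 10}
print(best_proxy(["p1", "p2"], sources, "k", hops))
```

Here "p1" wins (cost 20 + 0.2·20·3 = 32) over "p2" (40 + 0.2·20·1 = 44) because the aggregated result stream is cheap to carry the long way to the sink.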

    • Software-Based Route Cache Algorithm for Network Processors

      2007, 18(12):3115-3123.

      Abstract (4414) HTML (0) PDF 513.74 K (5048)

      Abstract: Routers require fast and flexible route table lookup for incoming packets at relatively low cost. This paper describes a software-based route cache algorithm for network processors. Part of the on-chip high-speed memory space is allocated and programmed as a caching table for temporary storage of route lookup results. A suitable hash function strikes a good balance between cache miss rate and update complexity, which shortens the average search time, reduces contention on the memory bus and leaves more headroom for other network applications. Experiments with real-life packet traces show that the packet throughput of a network processor can be greatly improved with only a small number of route cache entries per processing element.
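The caching-table idea can be sketched as a direct-mapped cache: a fixed number of slots indexed by a hash of the destination, with a slow-path full lookup on a miss. The slot count, the deliberately simple byte-sum hash, and the table contents are illustrative; a real implementation would use a stronger hash, as the paper's emphasis on hash choice suggests:

```python
# Sketch of a direct-mapped software route cache. A collision simply
# overwrites the slot, keeping updates trivial at the price of extra misses.

class RouteCache:
    def __init__(self, slots, full_lookup):
        self.slots = [None] * slots        # each slot: (dst, next_hop)
        self.full_lookup = full_lookup     # slow path (e.g. a trie in DRAM)
        self.hits = self.misses = 0

    def _index(self, dst):
        # toy hash for illustration only; a real cache would use e.g. a CRC
        return sum(dst.encode()) % len(self.slots)

    def lookup(self, dst):
        i = self._index(dst)
        entry = self.slots[i]
        if entry is not None and entry[0] == dst:
            self.hits += 1
            return entry[1]
        self.misses += 1                   # miss or collision: take slow path
        next_hop = self.full_lookup(dst)
        self.slots[i] = (dst, next_hop)    # cache the result
        return next_hop

table = {"10.0.0.1": "ifA", "10.0.0.2": "ifB"}   # hypothetical route table
cache = RouteCache(slots=8, full_lookup=table.__getitem__)
for dst in ("10.0.0.1", "10.0.0.1", "10.0.0.2", "10.0.0.1"):
    cache.lookup(dst)
print(cache.hits, cache.misses)
```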

    • A Bi-Level Programming Model and Solution Algorithm on Optimal Parameter Setting in Wireless Sensor Networks

      2007, 18(12):3124-3130.

      Abstract (4223) HTML (0) PDF 434.56 K (4708)

      Abstract: Both energy efficiency and robustness are critical design challenges for large-scale wireless sensor networks. Applications such as query propagation regularly rely on network-wide flooding as a robust mechanism, but frequent flooding consumes too much energy and bandwidth. This paper analyzes the effect of packet size on energy efficiency, and the impact of the transmission radius on the average settling time within which all nodes finish transmitting the flooded packet. A bi-level programming model is introduced: the upper-level model minimizes the average settling time of flooding, and the lower-level model maximizes the energy efficiency of the whole network. Furthermore, a numerical example is presented to validate the programming model, showing that the result is feasible and efficient.

    • A Layered Interest Based Topology Organizing Model for Unstructured P2P

      2007, 18(12):3131-3138.

      Abstract (4392) HTML (0) PDF 523.05 K (5302)

      Abstract: There are two pivotal problems in unstructured P2P systems: topology self-organization and query algorithms. The former is more important, because a well-organized topology can dramatically improve the performance of query algorithms. This paper develops a new mechanism for organizing the topology based on a hierarchical representation of interest, called SACM (self-adaptive community-based model). In this model, each node derives a tree-shaped interest from all the resources it possesses; a bit sequence called the CID (community ID) is then determined by that interest, and the CID is the main metric for organizing the topology: nodes with close CIDs form a community, which is a dense subgraph. SACM not only provides a mechanism for organizing the topology but also establishes the relationship between resources and topology, which is the essential difference between structured and unstructured P2P.
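The CID idea can be sketched by encoding the top levels of a node's interest hierarchy as a short bit string and clustering nodes whose CIDs share a long prefix. The two-level taxonomy, the bit widths, and the prefix length below are illustrative assumptions, not SACM's actual encoding:

```python
# Sketch: derive a community ID (CID) from a node's dominant interests and
# group nodes by CID prefix. The taxonomy here is invented for illustration.

TAXONOMY = {"music": "00", "video": "01", "text": "10", "code": "11"}

def cid(interests):
    """interests: (category, subcategory_bit) pairs, most general first."""
    return "".join(TAXONOMY[cat] + sub for cat, sub in interests)

def same_community(cid_a, cid_b, prefix_len=2):
    """Nodes whose CIDs share the top-level prefix join one community."""
    return cid_a[:prefix_len] == cid_b[:prefix_len]

n1 = cid([("music", "0")])          # e.g. music -> classical
n2 = cid([("music", "1")])          # e.g. music -> rock
n3 = cid([("code", "0")])
print(n1, n2, n3)
print(same_community(n1, n2))       # shared top-level interest
print(same_community(n1, n3))       # different top-level interest
```

Because the prefix encodes the most general interest level, "close" CIDs naturally correspond to semantically related nodes, which is what makes the resulting communities dense subgraphs.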

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic Price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063