• Volume 18, Issue 7, 2007: Table of Contents
    • A Reduction Technique of Petri Nets Based on Logic Circuit

      2007, 18(7):1553-1562.

      Abstract (4392) HTML (0) PDF 734.05 K (6092) Comment (0) Favorites

      Abstract:In traditional reduction methods, the local structure of a Petri net must be compared against every reduction rule; the process is complicated and is not suitable for nets with inhibitor arcs. This paper presents a new reduction method. First, the Petri net is divided into several maximal acyclic subnets, each of which is expressed in logic form. Then, logic algebra is used to reduce the logic form. Finally, the result is reconstructed and embedded in the original net. The paper establishes a method for finding and reducing the maximal acyclic subnets and presents the corresponding proofs. The method can be applied to acyclic Petri nets or subnets, including those with inhibitor arcs.

    • Reasoning Within Extended Fuzzy Description Logic Supporting Terminological Axiom Restrictions

      2007, 18(7):1563-1572.

      Abstract (4550) HTML (0) PDF 479.43 K (5078) Comment (0) Favorites

      Abstract:Extended fuzzy description logics are fuzzy extensions of description logics that support the representation of, and reasoning over, expressive fuzzy knowledge, but they lack reasoning algorithms that handle terminological axioms. This paper defines restricted TBoxes (terminological boxes) to describe terminological axioms in EFALCR+ (extended fuzzy attributive concept description language with complements and transitive roles), and proposes and optimizes reasoning algorithms for EFALCR+ with respect to restricted TBoxes. The optimized reasoning algorithm is proved sound and complete, with worst-case exponential time complexity. This complexity reaches the lower bound, since the reasoning problem for EFALCR+ with respect to restricted TBoxes is proved to be EXPTIME-complete. The algorithm is therefore an efficient means of reasoning over fuzzy knowledge bases with terminological axioms.

    • The Complexity of Dual Models Problem of Propositional Linear Temporal Logics

      2007, 18(7):1573-1581.

      Abstract (4119) HTML (0) PDF 598.30 K (4822) Comment (0) Favorites

      Abstract:In this paper, the concept of dual models of a propositional linear temporal logic formula is defined: a formula f has dual models if it has two models (namely, two ω-sequences of states) such that the assignments to the atomic propositions at each position are dual. For various propositional linear temporal logics, the complexity of deciding whether a formula f has dual models (denoted DM) and of determining dual models in a Kripke structure for a formula f (denoted KDM) is then investigated. It is shown that DM and KDM are NP-complete for the logic with the F ("future") operator, and that they are PSPACE-complete for the logic with the F and X ("next") operators, the logic with the U ("until") operator, the logic with the U, S, and X operators, and the logic with the regular operators introduced by Wolper (known as extended temporal logic, ETL).
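The duality condition on models described in the abstract can be sketched directly; the following is a minimal illustration (the representation of states as proposition-to-truth-value maps is an assumption for exposition, restricted to finite prefixes of the ω-sequences):

```python
# Sketch: two (finite prefixes of) state sequences are "dual" if, at every
# position, each atomic proposition takes opposite truth values.

def are_dual(seq_a, seq_b, props):
    """seq_a, seq_b: lists of dicts mapping proposition -> bool."""
    if len(seq_a) != len(seq_b):
        return False
    return all(sa[p] != sb[p] for sa, sb in zip(seq_a, seq_b) for p in props)

props = ["p", "q"]
m1 = [{"p": True, "q": False}, {"p": False, "q": False}]
m2 = [{"p": False, "q": True}, {"p": True, "q": True}]
print(are_dual(m1, m2, props))  # True
```

The DM problem then asks whether a formula admits two models related in exactly this way.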

    • Feature Model Driven Web Services Composition Approach and Its Support Tool

      2007, 18(7):1582-1591.

      Abstract (3980) HTML (0) PDF 564.42 K (5245) Comment (0) Favorites

      Abstract:Web services are gradually becoming the main kind of platform-independent software component and are spreading throughout the distributed environment of the Internet. A development style in which new software systems are created mainly by composing Web services is attracting growing research attention, yet there is still no systematic, mature approach that can guide the whole composition process. This paper presents a feature-model-driven Web services composition approach, which uses the feature model as the modeling tool throughout the entire composition process to drive and direct all of its phases: requirement analysis, business-flow modeling, service composition, and business-flow deployment and execution. The basic idea, rationale, and key steps of the FWSC (feature-model-driven Web services composition) solution are then introduced through a concrete example. The work presented here speeds up modeling and development, improves the quality of composed Web services, and strengthens a system's ability to adjust dynamically and evolve quickly when requirements change.

    • Sequential Patterns-Based Cache Replacement in Servlet Container

      2007, 18(7):1592-1602.

      Abstract (4726) HTML (0) PDF 699.64 K (5627) Comment (0) Favorites

      Abstract:A Servlet cache can effectively improve the throughput of a Servlet container and reduce the response time experienced by users, but its effect depends on the hit ratio achieved by the cache replacement algorithm. Since Servlets represent business functions, mining the business associations among Servlets can improve the hit ratio of cache replacement and thereby enhance container performance. Existing algorithms such as LRU (least recently used), LFU (least frequently used), and GDSF (greedy dual size frequency), however, rarely take the relationships between Servlets into account. This paper models these business associations as sequential patterns of the Servlet container, presents a k-step transfer probability graph to represent the container's access patterns, and designs a sequential-pattern discovery algorithm, KCTPG_Discovery. Two cache replacement algorithms, KP-LRU and KP-GDSF, are then introduced based on the discovered patterns. Experimental results confirm that, compared with traditional algorithms such as LRU and GDSF, the two algorithms raise the cache hit ratio and effectively improve the performance of the Servlet container.
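The general idea of biasing LRU eviction with transfer probabilities might be sketched as follows; the probability-weighted victim selection and the "least-recently-used half" heuristic are illustrative assumptions, not the paper's exact KP-LRU algorithm:

```python
from collections import OrderedDict

class KPLRUCache:
    """Sketch of an LRU cache whose eviction is biased by the probability
    that a cached servlet will be requested after the current one."""
    def __init__(self, capacity, transfer_prob):
        self.capacity = capacity
        self.prob = transfer_prob          # prob[current][candidate] -> float
        self.cache = OrderedDict()         # servlet -> response, LRU order
        self.current = None

    def get(self, servlet, compute):
        if servlet in self.cache:
            self.cache.move_to_end(servlet)    # refresh recency
        else:
            if len(self.cache) >= self.capacity:
                self._evict()
            self.cache[servlet] = compute(servlet)
        self.current = servlet
        return self.cache[servlet]

    def _evict(self):
        # Among the least-recently-used half, drop the entry with the
        # lowest probability of being requested after the current servlet.
        p = self.prob.get(self.current, {})
        lru_half = list(self.cache)[: max(1, len(self.cache) // 2)]
        victim = min(lru_half, key=lambda s: p.get(s, 0.0))
        del self.cache[victim]

cache = KPLRUCache(4, {"D": {"A": 0.8, "B": 0.1}})
for s in ["A", "B", "C", "D"]:
    cache.get(s, str.lower)
cache.get("E", str.lower)       # evicts "B": least likely successor of "D"
print(sorted(cache.cache))      # ['A', 'C', 'D', 'E']
```

A plain LRU would have evicted "A" here; the transition probabilities keep the likely-to-recur entry cached.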

    • EfLA Algorithm Based on Dynamic Feedback

      2007, 18(7):1603-1611.

      Abstract (4424) HTML (0) PDF 562.14 K (4871) Comment (0) Favorites

      Abstract:Binary translation is used to port legacy code: through it, binary code can be executed on a different hardware platform. When the source platform uses condition codes to change the execution flow, handling the condition-code translation is an important performance issue. This paper presents the Eflag linear analysis algorithm. Its complexity is linear, and it removes much of the flag computation, increasing the performance of dynamic execution. Through dynamic profiling, the algorithm eliminates the Eflag calculation in basic blocks ending in indirect jumps. Several integer test cases from SPEC 2000 are analyzed, and the experimental results demonstrate the efficiency of EfLA (Eflag linear analysis) for computation-intensive programs.

    • MDA-Based Model Management Method and Its Application for TRISO-Model

      2007, 18(7):1612-1625.

      Abstract (4792) HTML (0) PDF 758.25 K (5520) Comment (0) Favorites

      Abstract:TRISO-Model (tridimensional integrated software development model) is a 3-D integrated software engineering methodology proposed to deal with the problems caused by the increasing complexity and dynamics of current software development. In TRISO-Model, maintaining semantic consistency among multiple models and reusing common model operations requires semantic-consistent model management. This paper proposes an MDA-based model management method, MDA-MMMethod. Because it builds on the common semantic definition MOF (meta object facility), implementations of MDA (model driven architecture) standards can be reused when developing model applications in TRISO-Model. The corresponding supporting system, MDA-MMSystem (MDA based model management system), has been developed, and on top of it the method has been applied in the SoftPM project. Compared with traditional solutions, development efficiency is greatly improved and cost is reduced. Finally, an example of model application and model merging is provided.

    • Manufacturing Supply Chain Planning Based on Extended State Task Network

      2007, 18(7):1626-1638.

      Abstract (3601) HTML (0) PDF 971.52 K (5963) Comment (0) Favorites

      Abstract:Planning is a key element of manufacturing supply chain management: the assignment of production tasks, inventory control, and transportation between plants or enterprises must all be considered. The extended state task network (ESTN) is proposed to abstractly describe production tasks with complex production processes (assembly, distilling, and processes with many kinds of input and output material), storage tasks, and transportation tasks with different modes (transferring either a single kind of material or several kinds at once). In the ESTN, a proportion transform task describes a production task, a storage task, or a transportation task that transfers only one kind of material; the combination of a virtual proportion transform task and a combination move task describes a transportation task that transfers several kinds of material. When metaheuristic methods are used to solve the planning problem, solutions are easier to encode and manipulate on an ESTN than on other models. A path relinking algorithm is developed to solve the ESTN-based planning model. The algorithm maintains a reference set of high-quality solutions during the evolutionary process; a list of new solutions (a path) is created by inserting properties of a guiding solution into an initial solution, and the list is used to update the reference set and so evolve the solutions. A reference-set update method with diversification detection and a decentralization mutation strategy is proposed within the algorithm. Computational results demonstrate that the path relinking algorithm obtains better solutions than a standard genetic algorithm, a standard tabu search, and the general path relinking algorithm; they also show that the reference-set update method with diversification detection is essential to path relinking, and that the decentralization mutation strategy enhances the ability to find better solutions.

    • Review Articles
    • Lightweight Intrusion Detection System Based on Feature Selection

      2007, 18(7):1639-1651.

      Abstract (7609) HTML (0) PDF 845.09 K (10161) Comment (0) Favorites

      Abstract:Intrusion detection systems deal with huge amounts of data containing redundant and noisy features, which slow training and testing, raise resource consumption, and lower the detection rate. Feature selection, which removes redundant and noisy features, is therefore an important issue in intrusion detection. To improve intrusion detection systems in terms of detection speed and detection rate, a survey of feature-selection-based intrusion detection is needed. This paper introduces the concepts and algorithms of feature selection, surveys existing lightweight intrusion detection systems built on feature selection, and groups and compares the systems in three broad categories: filter, wrapper, and hybrid. The survey concludes by identifying trends in feature selection research and development for intrusion detection; feature selection is not only useful in itself but also suggests new research directions for intrusion detection systems.
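Of the three categories the survey compares, the filter approach is the simplest: features are ranked by a statistic computed independently of any classifier. A minimal sketch, using mutual information as the ranking statistic (the dataset and function names are illustrative assumptions):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X;Y) from paired samples, in bits."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def filter_select(samples, labels, k):
    """Rank features by mutual information with the label; keep the top k."""
    n_feat = len(samples[0])
    scores = [(mutual_information([s[j] for s in samples], labels), j)
              for j in range(n_feat)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Feature 0 perfectly predicts the label, feature 1 is noise.
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = [1, 1, 0, 0]
print(filter_select(X, y, 1))  # [0]
```

Wrapper methods instead score feature subsets by training the detector itself, and hybrids combine both, trading accuracy against the training cost the abstract mentions.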

    • Evaluation and Improvement of Vertical Handoff Algorithms in Heterogeneous Wireless Networks

      2007, 18(7):1652-1659.

      Abstract (4533) HTML (0) PDF 535.17 K (6560) Comment (0) Favorites

      Abstract:Vertical handoff is a basic requirement for the convergence of different access technologies and has received tremendous attention from academia and industry. In recent years many studies have focused on vertical handoff decision algorithms, but the evaluation scenarios differ greatly between papers, making it difficult to compare the algorithms fairly. This paper analyzes the general motion models of mobile nodes and identifies a set of novel performance evaluation models. Equipped with these models, it analyzes the performance of two types of decision algorithms: the hysteresis-based algorithm and the dwelling-timer-based algorithm. Building on this analysis, a novel self-adaptive vertical handoff algorithm (SAVA) is presented. Simulations show that SAVA achieves better performance than the conventional methods.
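The hysteresis-based decision rule analyzed in the paper can be sketched in a few lines; the margin value and the use of received signal strength (RSS) as the sole criterion are simplifying assumptions for illustration:

```python
def hysteresis_handoff(current, rss, margin=5.0):
    """Switch networks only if another network's RSS exceeds the current
    one's by a hysteresis margin (dB), suppressing ping-pong handoffs."""
    best = max(rss, key=rss.get)
    if best != current and rss[best] > rss[current] + margin:
        return best
    return current

# Small advantage: stay put despite umts being slightly stronger.
print(hysteresis_handoff("wlan", {"wlan": -70.0, "umts": -68.0}))  # wlan
# Large advantage: hand off.
print(hysteresis_handoff("wlan", {"wlan": -80.0, "umts": -68.0}))  # umts
```

A dwelling-timer variant would additionally require the condition to hold continuously for a fixed duration before switching.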

    • Online Model Estimating-Based Performance Guarantee Mechanism for Dynamic Web Site

      2007, 18(7):1660-1671.

      Abstract (4251) HTML (0) PDF 787.71 K (5528) Comment (0) Favorites

      Abstract:A dynamic multi-tiered Web system can be affected by many unexpected factors at runtime, and it exhibits different performance characteristics under different workload patterns, so different performance models are needed to describe it. Current performance guarantee mechanisms based on feedback control theory, which aim to eliminate the effect of unexpected factors, usually employ a single, fixed performance model that cannot adapt to changing performance characteristics; under the fluctuating and unpredictable workloads of the Internet, this reduces the precision and stability of performance control. To guarantee a target response time, this paper presents an adaptive performance guarantee mechanism based on online estimation of the performance model. An experimental evaluation with two complementary transactional Web benchmarks shows that the mechanism efficiently reduces the squared deviation from the performance goal under varying workloads.

    • A Location-Independent Connected Coverage Protocol for Wireless Sensor Networks

      2007, 18(7):1672-1684.

      Abstract (5196) HTML (0) PDF 862.84 K (5775) Comment (0) Favorites

      Abstract:This paper addresses the problem of scheduling sensor nodes energy-efficiently while meeting both coverage and connectivity constraints without accurate location information. Based on a theoretical analysis of the sensing coverage properties of the minimal dominating set, the relationship between point coverage and area coverage in a geometric graph is established. Building on these analytical results and the Rule K algorithm, which constructs a connected dominating set, a location-independent connected coverage protocol (LICCP) is proposed in which every sensor node adjusts its communication range appropriately. LICCP provides high-quality connected coverage using the Rule K algorithm. Extensive simulation results show that the protocol effectively maintains both high-quality sensing coverage and connectivity over a long period.
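The dominating-set notion underlying the analysis is easy to illustrate; the following greedy construction is a textbook sketch, not the Rule K algorithm the paper builds on:

```python
def greedy_dominating_set(adj):
    """Greedy dominating set: repeatedly pick the node that dominates
    the most not-yet-covered nodes. adj maps node -> set of neighbors."""
    uncovered = set(adj)
    dom = set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dom.add(best)
        uncovered -= {best} | adj[best]
    return dom

# Star graph: the center alone dominates every node.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dominating_set(adj))  # {0}
```

The paper's contribution is relating such point domination to *area* coverage, so that keeping only dominating-set nodes awake still covers the sensing region.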

    • A Collaborative Filtering Recommendation Algorithm Based on Influence Sets

      2007, 18(7):1685-1694.

      Abstract (4873) HTML (0) PDF 669.61 K (6479) Comment (0) Favorites

      Abstract:Traditional user-based collaborative filtering (CF) algorithms suffer from two important problems, scalability and sparsity, because of their memory-based k-nearest-neighbor queries. Item-based CF algorithms were designed to address the scalability problems of user-based CF without sacrificing recommendation or prediction accuracy, but they still suffer from data sparsity. This paper presents a CF recommendation algorithm named CFBIS (collaborative filtering based on influence sets), built on the concept of the influence set, a notion that is also an active topic in information retrieval, and defines a new prediction computation method for this recommendation mechanism. Experimental results show that CFBIS achieves better prediction accuracy than traditional item-based CF algorithms and alleviates the dataset sparsity problem.
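The item-based CF baseline that CFBIS extends can be sketched as follows; the toy ratings and the plain cosine similarity are illustrative assumptions (the paper's influence-set weighting is not reproduced here):

```python
import math

def cosine_sim(ratings, i, j):
    """Cosine similarity between items i and j over their co-rating users."""
    users = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not users:
        return 0.0
    dot = sum(ratings[u][i] * ratings[u][j] for u in users)
    ni = math.sqrt(sum(ratings[u][i] ** 2 for u in users))
    nj = math.sqrt(sum(ratings[u][j] ** 2 for u in users))
    return dot / (ni * nj)

def predict(ratings, user, item):
    """Weighted average of the user's ratings on items similar to `item`."""
    sims = [(cosine_sim(ratings, item, j), r)
            for j, r in ratings[user].items() if j != item]
    num = sum(s * r for s, r in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else 0.0

ratings = {"u1": {"a": 5, "b": 4}, "u2": {"a": 4, "b": 4, "c": 2},
           "u3": {"b": 5, "c": 1}}
print(round(predict(ratings, "u1", "c"), 2))
```

The sparsity problem shows up in `cosine_sim`: with few co-rating users the similarities, and hence the prediction, become unreliable, which is what the influence-set idea targets.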

    • TCP Proxy for Satellite Network

      2007, 18(7):1695-1704.

      Abstract (4435) HTML (0) PDF 671.08 K (6180) Comment (0) Favorites

      Abstract:When used over satellite links, traditional TCP suffers from a number of performance problems, caused especially by high propagation delay, bit error rate, bandwidth asymmetry, and signal attenuation due to rain. A new active proxy that preserves TCP semantics is proposed to improve TCP performance over satellite networks. The congestion control algorithm in the proxy sets the window size according to bandwidth-change measurements obtained from two probe packet pairs with different priorities. A stability analysis of the hybrid proxy system is performed. Loss recovery based on selective negative acknowledgement (SNACK) works even if the clocks are not synchronized. The results demonstrate that the proxy, in combination with SNACK, allows TCP to fully exploit the available throughput of the satellite network.

    • A Semantic Data Coordination Based Approach for Application Integration in Portals

      2007, 18(7):1705-1714.

      Abstract (3879) HTML (0) PDF 529.10 K (5255) Comment (0) Favorites


    • Capture and Storage of Digital Evidence Based on Security Operating System

      2007, 18(7):1715-1729.

      Abstract (4692) HTML (0) PDF 762.96 K (6293) Comment (0) Favorites

      Abstract:This paper presents a security operating system with a real-time forensics mechanism, called SeFOS. The general architecture of SeFOS is described, its forensic behavior is modeled with formal descriptions, and a method for completely collecting and safely storing digital evidence is presented. The forensics model of SeFOS operates inside the kernel, and evidence is obtained from system processes, system calls, in-kernel resource assignment, and network data. Finally, a simulated experiment is designed to validate the efficiency of SeFOS.

    • A Prestige Reporting Mechanism Based on Gray System Theory

      2007, 18(7):1730-1737.

      Abstract (4519) HTML (0) PDF 538.32 K (5358) Comment (0) Favorites

      Abstract:Trust between seller and customer is an important basis of successful online business, and a merchant's prestige is a vital factor that customers consider when choosing a trading partner. Traditional trust evaluation methods impose various hypotheses on the evaluated entities and cannot avoid dishonest recommendations from malicious customers, which undermines the objectivity and credibility of the evaluation result. To address these problems, this paper proposes a prestige reporting mechanism based on gray system theory, and proves that at time t the existence of an entity's gray relation degree is sufficient and necessary. In this scheme, original data are first collected by the taste-concourse method; the evaluated entities are treated as clustering entities, and the grade-point values that customers assign to their key attributes serve as the basis of the evaluation. Using gray relational analysis, the evaluation vectors of the entities are obtained; by induction and adjustment according to the arranged gray levels, the gray level of each clustering entity is judged and its trust degree obtained. Because a gray number close to fact is adopted as the expressive mode, the various hypotheses are avoided, and collecting the original data with the taste-concourse method effectively prevents malicious recommendation. An example illustrates how an entity's trust is quantified and how trust among entities can be compared. The scheme offers credible evaluation, strong operability, and suitability for automatic processing by software; it is a practical and valuable new method for evaluating an entity's trust degree in a network environment.

    • Applying Dixon Resultants in Cryptography

      2007, 18(7):1738-1745.

      Abstract (4473) HTML (0) PDF 673.01 K (5403) Comment (0) Favorites

      Abstract:Based on extended Dixon resultants, a new algorithm, DR (Dixon resultants), is proposed for solving systems of multivariate polynomial equations. The basic idea of DR is to apply the extended Dixon resultants method to the system by taking x1,x2,…,xn-1 as variables and xn as a parameter. The time complexity of DR is evaluated; it appears to be polynomial when the system is sparse with m=n and the mixed volume is polynomial. DR is also compared with Buchberger's algorithm and the XL technique, and is shown to be far more efficient than both when m=n. DR is an efficient algorithm that makes good use of the sparsity of sparse systems; besides its efficiency, another advantage is that its complexity is easy to determine.

    • An Improved Formal Model of Cryptographic Protocol

      2007, 18(7):1746-1755.

      Abstract (4155) HTML (0) PDF 824.71 K (5858) Comment (0) Favorites

      Abstract:The MSR (multiset rewriting) model is a protocol-modeling technique based on multiset rewriting. According to current research, the model is not perfect: its intruder model is unsuitable for protocol verification, so this paper improves it. Using the improved model, the secrecy and authentication properties of cryptographic protocols are described, and practice confirms the improvement over the former model.

    • A Parallelizable Message Authentication Code

      2007, 18(7):1756-1764.

      Abstract (5038) HTML (0) PDF 505.24 K (5389) Comment (0) Favorites

      Abstract:This paper defines and analyzes a fully deterministic parallelizable block-cipher mode of operation for message authentication — DPMAC (deterministic parallelizable message authentication code). DPMAC is constructed based on a 128-bit block cipher, works for strings of any bit length, and employs a single block-cipher key. Its security is proved, using the Game-Playing technique to quantify an adversary’s forgery probability in terms of the quality of the block cipher as a pseudo-random permutation.

    • Coverage-Ensured Reliable Links with Accurate Timing in Mobile Ad Hoc Network

      2007, 18(7):1765-1773.

      Abstract (3676) HTML (0) PDF 536.53 K (5056) Comment (0) Favorites

      Abstract:In practical mobile ad hoc networks (MANETs), the network topology changes as nodes move quickly. A node cannot ensure that its broadcasts cover all other nodes, because it cannot update its view of the network topology instantly: self-pruning-based broadcasting cannot compute a valid connected dominating set promptly in a fast-moving environment. To ensure broadcast coverage in MANETs, the period of validity of routing links is taken into account. Each node is assumed to have its own transmission range and "Hello" interval and to move in its own direction at its own velocity. The period of validity of a connection can be determined from the relative speed and the transmission ranges of the nodes, and accurate timing in each node for tracking this period provides correct and timely information about the network topology. The method is called Reliable Links with Accurate Timing (RELAT). RELAT guarantees the connectivity of the virtual network and the availability of the physical links and, in most cases, the consistency of the local view, so it ensures broadcast coverage in the network. Simulation results show that the algorithm is sufficient to ensure broadcast coverage in a MANET and that it retains a high probability of coverage under high node density even when the algorithm's conditions are not fully satisfied.

    • Construction of Elliptic Curves over Finite Fields with a Point of Given Order

      2007, 18(7):1774-1777.

      Abstract (4441) HTML (0) PDF 250.29 K (5552) Comment (0) Favorites

      Abstract:Elliptic curves over a finite field with q elements are constructed. Let l be a prime. This paper proves that if the equation U² − D(x)V² = ε(x−a)^l over GF(q)[x] has a primitive solution in GF(q)[x], where D(x)∈GF(q)[x] is a monic squarefree polynomial of degree three, then the elliptic curve y² = D(x) has a point (a,b) of order l. This result yields an algorithm for constructing elliptic curves with a point of prescribed order.
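The notion of a point's order on a curve y² = x³ + ax + b over GF(p) (the short Weierstrass shape of y² = D(x)) can be checked directly with the standard addition formulas; the curve parameters below are an illustrative choice, and this naive order computation is only for tiny fields:

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over GF(p); None is the identity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p    # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def point_order(P, a, p):
    """Smallest n >= 1 with n*P = O, by naive repeated addition."""
    n, Q = 1, P
    while Q is not None:
        Q = ec_add(Q, P, a, p)
        n += 1
    return n

# (2, 1) lies on y^2 = x^3 + 2x + 3 over GF(7); its order is 6.
print(point_order((2, 1), 2, 7))  # 6
```

The paper works in the opposite direction: given a prime l, it constructs a curve guaranteed to carry a point of that order.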

    • Debt Relationship Based Fair File Exchange in Distributed Hash Table Network

      2007, 18(7):1778-1785.

      Abstract (4502) HTML (0) PDF 505.20 K (5376) Comment (0) Favorites

      Abstract:The selfishness of nodes degrades the usability of P2P systems. A debt-relationship-based file exchange network constructs an incentive mechanism that induces cooperation and guarantees fairness in file exchange. The key to the mechanism is the finite neighbor set, an inherent characteristic of DHT (distributed hash table) networks, which makes the interactions between nodes form a repeated game. The DFFE (debt-relationship-based fair file exchange in DHT networks) protocol only needs to maintain a small amount of local interaction information, so its cost is low and it scales to large networks. For routing, a greedy algorithm based on one-hop information is used. The game among rational nodes has a Nash equilibrium, and the approximate strategy-selection algorithm gradually converges. Simulations confirm the validity of the incentive mechanism and its stable performance in dynamic networks.

    • A Cluster-Based QoS Multipath Routing Protocol for Large-Scale MANET

      2007, 18(7):1786-1798.

      Abstract (4184) HTML (0) PDF 887.59 K (5456) Comment (0) Favorites

      Abstract:Supporting QoS routing is a core issue in the research of MANETs (mobile ad hoc networks), and numerous studies have shown how difficult it is to provide quality-of-service (QoS) guarantees in them. This paper proposes a cluster-based QoS multipath routing protocol (CQMRP) that provides QoS-sensitive routes in a scalable and flexible way in mobile ad hoc networks. In this strategy, each node maintains only local routing information about other clusters rather than global network state, and multiple QoS constraints are supported. The performance of the protocol is evaluated with the OPNET simulator, and the results show that it provides a practical approach to QoS multipath routing for mobile ad hoc networks.

    • Compression of Tate Pairings on Elliptic Curves

      2007, 18(7):1799-1805.

      Abstract (4459) HTML (0) PDF 466.54 K (5190) Comment (0) Favorites

      Abstract:In this paper, using maps between cyclic groups contained in a finite field, two efficient methods are presented for compressing a Tate pairing defined on a supersingular elliptic curve with prime characteristic p and MOV degree 3. They compress a pairing value from a string of 6log p bits to strings of 3log p and 2log p bits, respectively, and both compressed pairings can be implemented by reusing the code of the optimized algorithm for the original pairing, with no new code required. Both compressed pairings achieve the speed of the original implementation.

    • Hardware/Software Interface Design of Godson-2 Simultaneous Multithreading Processor

      2007, 18(7):1806-1817.

      Abstract (4223) HTML (0) PDF 591.80 K (5305) Comment (0) Favorites

      Abstract:With the development of VLSI (very large scale integration) technology, a single chip can now contain over one billion transistors, and multithreading is the development trend for future high-performance processors. How to design the hardware/software interface has become a major problem for multithreading processors. Based on the requirements of simultaneous multithreading and the original Godson-2 architecture, the hardware/software interface for the Godson-2 simultaneous multithreading processor is proposed and defined. On top of this interface, a corresponding Linux for the Godson-2 SMT (simultaneous multithreading) processor was designed and implemented on Linux 2.4.20. SPEC CPU2000 and other benchmark programs are used to compare the performance of the Godson-2 SMT processor with the superscalar processor, and the results show that the SMT processor designed with this interface achieves much better performance. The hardware/software interface and operating system design presented here are also useful for multi-core processor design.

    • An NPV-Based Optimal Fault-Tolerant Routing Algorithm for Generalized Hypercube

      2007, 18(7):1818-1830.

      Abstract (4225) HTML (0) PDF 1.35 M (5025) Comment (0) Favorites

      Abstract:Optimal fault-tolerant routing is imperative for large hypercube systems in the presence of a large number of faulty links or nodes. This paper first defines a hypercube network with many faulty nodes and/or links as a generalized hypercube, and shows that many non-hypercube systems can be transformed into generalized hypercube systems. It then proposes the node path vector (NPV) to capture the complete optimal and sub-optimal routing information of a generalized hypercube system, and introduces the relay-node technique to reduce the number of iterations needed to compute NPVs. Based on NPV and the relay-node technique, an optimal fault-tolerant routing scheme (OFTRS) is proposed to derive the shortest path for any communication pair. An example with many faulty nodes or links illustrates that a previous algorithm can omit up to 60% of the routing paths, while this approach finds all optimal and sub-optimal paths: no matter how many faulty nodes or links exist, routing through an optimal or sub-optimal path is guaranteed as long as a path between the source and destination exists. In addition, the scheme is distributed and relies only on information from non-faulty neighboring nodes when computing NPVs, so it has high applicability, especially where non-hypercube systems can be transformed into generalized hypercube systems.
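The underlying routing problem, finding a shortest fault-free path in an n-cube whose nodes are bit strings differing in one bit per hop, can be sketched with a centralized BFS; this is a baseline illustration, not the paper's distributed NPV scheme:

```python
from collections import deque

def hypercube_route(n, src, dst, faulty):
    """BFS over the n-cube avoiding faulty nodes; returns a shortest
    fault-free path as a list of node labels, or None if disconnected."""
    if src in faulty or dst in faulty:
        return None
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for i in range(n):
            v = u ^ (1 << i)           # flip one address bit = one hop
            if v not in parent and v not in faulty:
                parent[v] = u
                q.append(v)
    return None

# 3-cube: route 000 -> 111 with node 001 faulty.
print(hypercube_route(3, 0b000, 0b111, {0b001}))  # [0, 2, 3, 7]
```

The NPV approach distributes this computation: each node derives its routing vector purely from its non-faulty neighbors' vectors, with no global fault map.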

    • Stream Task Scheduling Method for Deadline-Sensitive Applications

      2007, 18(7):1831-1843.

      Abstract (4252) HTML (0) PDF 527.59 K (5242) Comment (0) Favorites

      Abstract:Most existing real-time data stream processing systems focus on minimizing average tuple latency, paying less attention to the deadline of each individual tuple. This paper presents a real-time adaptive batch task scheduling (ATS) mechanism that supports the strict deadline requirements of mission-critical applications over time-varying, bursty data streams. ATS aims to maximize task throughput and minimize the deadline miss ratio by minimizing both scheduling overhead and deadline-miss overhead. The paper proposes the batch granularity as the optimal scheduling unit and designs a closed-loop feedback control mechanism that adaptively selects the dynamically optimal batch size in an unpredictable data stream environment. Theoretical analyses and experimental results show the efficiency and effectiveness of the ATS batching technique.

    • A Static Priority Assignment Algorithm with the Least Number of Priority Levels

      2007, 18(7):1844-1854.

      Abstract (3973) HTML (0) PDF 734.26 K (5680) Comment (0) Favorites

      Abstract:With the increasing penetration of real-time systems into rapidly evolving domains, especially on-chip systems such as the PDA (personal digital assistant) and the PSP (PlayStation Portable), the performance/price ratio is becoming a major concern for system designers. To maximize this ratio, such systems provide a limited number of priority levels, so the same priority level must be assigned to multiple tasks. This trade-off makes the most widely used static priority assignment algorithms, such as RM (rate monotonic) and DM (deadline monotonic), impractical. As an alternative, static limited priority assignment assigns priorities so that system feasibility is maintained using only a few priority levels. Existing static limited priority assignments fall into two categories: fixed-number and least-number priority assignment. This paper proposes a necessary and sufficient condition for analyzing task-set feasibility under static limited priority assignment, making the analysis suitable for a wide range of applications. In addition, the superiority of the low-to-high priority assignment strategy and the concept of the saturated task group/assignment are highlighted, along with several important properties for transforming among limited priority assignments. A formal proof is given in favor of least-number priority assignment whenever the tasks are static-priority schedulable. Finally, a least-number priority assignment algorithm with low time complexity is presented, and its efficiency is verified by experimental results.
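The feasibility analyses that such priority assignments rely on are usually response-time tests. A minimal sketch of the classical fixed-priority response-time iteration (for distinct priorities, not the paper's limited-priority condition; the task set is an illustrative assumption):

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed priorities.
    tasks are (C, T, D) = (WCET, period, deadline), highest priority first."""
    C, T, D = tasks[i]
    r = C
    while True:
        # Interference from all higher-priority tasks released before r.
        interference = sum(math.ceil(r / Tj) * Cj
                           for Cj, Tj, Dj in tasks[:i])
        r_next = C + interference
        if r_next == r:
            return r
        if r_next > D:
            return None            # deadline miss: task set infeasible
        r = r_next

# Deadline-monotonic order (shortest deadline = highest priority).
tasks = [(1, 4, 4), (1, 5, 5), (2, 10, 10)]
print([response_time(tasks, i) for i in range(len(tasks))])  # [1, 2, 4]
```

With a limited number of priority levels, tasks sharing a level also interfere with each other, which is what the paper's necessary and sufficient condition accounts for.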

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
