• Volume 24, Issue 6, 2013 Table of Contents
    • Multi-Objective Cooperative Co-Evolutionary Algorithm for Negotiated Scheduling of Distribution Supply Chain

      2013, 24(6):1165-1176. DOI: 10.3724/SP.J.1001.2013.04288


      Abstract:This paper investigates a distribution supply chain in which multiple distributors simultaneously negotiate with a manufacturer to improve their own schedules; the manufacturer has stronger power than the distributors and makes its scheduling decision first. A compensation-based negotiation scheduling model is built. A novel multi-objective cooperative co-evolutionary algorithm (GLCCEC) that concurrently performs local evolutionary computation for the distributors and global evolutionary computation for the manufacturer is proposed. A global elite solution combination strategy with gradual gene-skipping change and real-time updating of the global non-dominated solution set is designed for the manufacturer. A dynamic programming algorithm constrained to retain the sequence of the local schedule is designed to obtain a global solution from a distributor's local solution. Computational experiments show that the GLCCEC algorithm can effectively improve each distributor's schedule without deteriorating the manufacturer's schedule. Moreover, the non-dominated solutions of GLCCEC are not only better than those of other leading cooperative co-evolutionary algorithms (MOCCGA, NSCCGA, and GBCCGA) but also well spread in the solution space.

    • Differential Evolution Algorithm for Multimodal Optimization Based on Abstract Convex Underestimation

      2013, 24(6):1177-1195. DOI: 10.3724/SP.J.1001.2013.04323


      Abstract:In this paper, a modified differential evolution (DE) algorithm based on abstract convex lower approximation is proposed for multimodal optimization. First, the original bound-constrained optimization problem is converted into an increasing convex-along-rays (ICAR) function over a unit simplex by using a projection transformation. Second, based on abstract convex theory, the study builds a lower approximation of the original optimization problem by using a finite subset of biased sampling points that come from the population replacement scheme of the basic DE algorithm. Some properties of the underestimation model are analyzed theoretically, and an N-ary tree data structure has also been designed and implemented to solve it. Furthermore, considering the difference between the original and underestimated function values, the paper proposes a niche identification indicator based on the biased DE sampling procedure and designs a regional phylogenetic tree replacement strategy to enhance the exploitation capacity within niches. Experimental results confirm that the proposed algorithm can distinguish between different attraction basins and effectively safeguard the discovered solutions. For the given benchmark problems, the proposed algorithm finds all the global optimal solutions as well as some good local minima.
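
      The proposed method biases the sampling of the basic differential evolution (DE) algorithm with its underestimation model. As a point of reference only, the following is a minimal sketch of the standard DE/rand/1/bin loop whose population replacement scheme supplies the sampling points mentioned above; the objective function, bounds, and control parameters are illustrative assumptions, and the abstract convex underestimation itself is not reproduced.

      import numpy as np

      def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, CR=0.9, max_gen=200, seed=0):
          # Minimal DE/rand/1/bin loop; the paper biases this sampling with an
          # abstract convex underestimation model, which is not reproduced here.
          rng = np.random.default_rng(seed)
          lo, hi = bounds[:, 0], bounds[:, 1]
          dim = len(lo)
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fit = np.array([f(x) for x in pop])
          for _ in range(max_gen):
              for i in range(pop_size):
                  a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True        # ensure at least one gene crosses over
                  trial = np.where(cross, mutant, pop[i])
                  f_trial = f(trial)
                  if f_trial <= fit[i]:                  # population replacement scheme
                      pop[i], fit[i] = trial, f_trial
          return pop[fit.argmin()], fit.min()

      # usage: minimize a 2-D Rastrigin-style multimodal function (illustrative)
      best_x, best_f = de_rand_1_bin(
          lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10),
          np.array([[-5.12, 5.12]] * 2))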

    • Hybrid Parallel Method for XML Parsing

      2013, 24(6):1196-1206. DOI: 10.3724/SP.J.1001.2013.04281


      Abstract:This paper presents a hybrid parallel method for XML parsing, which consists of a lightweight event partition stage, followed by an event-level parallel parsing stage and a final post-processing stage. SIMD instructions are used to speed up the event partition stage, and software-pipelined processing is achieved at the stage level. The study combines the event-level data-parallel parsing technique with the pipelined processing technique to create a hybrid parallel method. Compared with other parallel solutions, the method has the advantage of much more efficient parallel processing with low synchronization overhead. The method is tested on a Linux machine with an 8-core Intel Xeon X7560 CPU, and the results show that it achieves much higher speedup and better scalability than other software implementations to date.
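
      As a rough illustration of the three stages described above, the sketch below partitions an XML document into coarse events and parses the event chunks in parallel. It is a simplified, assumption-laden model: it ignores attributes, CDATA, and comments, uses a regular expression instead of the paper's SIMD scan, and uses a thread pool instead of the stage-level software pipeline.

      from concurrent.futures import ThreadPoolExecutor
      import re

      # Stage 1 (event partition): split the document into coarse "events" at tag
      # boundaries.  The paper accelerates this scan with SIMD instructions; a plain
      # regular expression is used here for clarity.
      def partition_events(xml_text, n_chunks):
          events = re.findall(r'<[^>]+>|[^<]+', xml_text)
          step = max(1, len(events) // n_chunks)
          return [events[i:i + step] for i in range(0, len(events), step)]

      # Stage 2 (event-level parallel parsing): classify each event independently.
      def parse_chunk(chunk):
          out = []
          for ev in chunk:
              if ev.startswith('</'):
                  out.append(('end', ev[2:-1].strip()))
              elif ev.startswith('<'):
                  out.append(('start', ev[1:-1].split()[0].rstrip('/')))
              elif ev.strip():
                  out.append(('text', ev.strip()))
          return out

      # Stage 3 (post-processing): concatenate per-chunk results in document order.
      def parse_parallel(xml_text, workers=4):
          chunks = partition_events(xml_text, workers)
          with ThreadPoolExecutor(max_workers=workers) as pool:
              return [ev for part in pool.map(parse_chunk, chunks) for ev in part]

      print(parse_parallel('<a><b>hi</b><c/></a>'))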

    • Path Balance Based Heuristics for Cost Optimization in Workflow Scheduling

      2013, 24(6):1207-1221. DOI: 10.3724/SP.J.1001.2013.04259


      Abstract:In order to address the trade-off between time and cost in grid workflow scheduling with deadline constraints, this study proposes a new algorithm named path balance (PB) to adjust the length of each path in the workflow, and develops a heuristic referred to as path balance based cost optimization (PBCO). PBCO makes full use of the workflow's cost-optimization space by setting a deadline for each task based on the results of PB, and enlarges the optimization space for the numerous tasks at each level by distributing the redundant time according to the number of tasks at each level, with levels obtained by the bottom-level strategy. According to the experimental results, the average execution cost of PBCO decreases by about 35% relative to other well-known heuristics such as DET and DBL.
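
      The central idea above is to distribute the slack between the deadline and the critical path over the workflow levels in proportion to how many tasks each level contains. The sketch below is only a hypothetical illustration of that slack-distribution step; the level partitioning, task names, and numbers are made up, and the PB and PBCO algorithms themselves are not reproduced.

      def assign_level_deadlines(levels, level_span, workflow_deadline):
          # Illustrative slack distribution: `levels` maps level index -> task ids,
          # `level_span` maps level index -> minimum execution span of that level.
          # The slack (deadline minus critical-path length) is shared among levels in
          # proportion to how many tasks each level contains, so crowded levels get
          # more room for cheaper but slower service choices.
          critical_len = sum(level_span.values())
          slack = workflow_deadline - critical_len
          if slack < 0:
              raise ValueError("deadline tighter than critical path")
          total_tasks = sum(len(ts) for ts in levels.values())
          deadlines, finish = {}, 0.0
          for lvl in sorted(levels):
              share = slack * len(levels[lvl]) / total_tasks
              finish += level_span[lvl] + share
              for task in levels[lvl]:
                  deadlines[task] = finish      # tasks in a level share a sub-deadline
          return deadlines

      # usage (hypothetical 3-level workflow, overall deadline 100)
      levels = {0: ['t1'], 1: ['t2', 't3', 't4'], 2: ['t5']}
      span = {0: 10.0, 1: 30.0, 2: 20.0}
      print(assign_level_deadlines(levels, span, 100.0))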

    • Review Articles
    • Survey of RDF Query Processing Techniques

      2013, 24(6):1222-1242. DOI: 10.3724/SP.J.1001.2013.04387


      Abstract:With the advance of the semantic Web and information extraction techniques, more and more RDF data emerge on the Web. Managing massive RDF data has become a hot research topic for both the academic and industrial communities. In this paper, we give a thorough analysis and comparison of existing techniques from the perspective of how RDF data are accumulated and organized. Topics on query representation, query processing, and query optimization are extensively discussed. We finally identify challenges and directions for future work in this area.
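
      As a minimal point of reference for readers unfamiliar with RDF querying, the sketch below shows brute-force triple-pattern matching over an in-memory triple set, i.e., the basic operation that the surveyed storage layouts, join strategies, and optimizers accelerate. The data and query are invented for illustration; real systems use SPARQL over indexed stores.

      # Minimal illustration of RDF triple-pattern matching (illustrative data only).
      triples = {
          ("ex:alice", "ex:knows",   "ex:bob"),
          ("ex:bob",   "ex:knows",   "ex:carol"),
          ("ex:alice", "ex:worksAt", "ex:ISCAS"),
      }

      def match(pattern, data):
          # A pattern is (s, p, o) where None acts as a variable.
          s, p, o = pattern
          return [t for t in data
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o)]

      # "Whom does alice know?"
      print(match(("ex:alice", "ex:knows", None), triples))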

    • Uncertain Moving Range Query Techniques in Road Networks

      2013, 24(6):1243-1262. DOI: 10.3724/SP.J.1001.2013.04320


      Abstract:With the continuous development of mobile positioning technology and the Internet, spatio-temporal query processing has drawn more and more attention. In real situations, the directions and trajectories of moving objects are usually restricted by an underlying spatial network, and their position information is usually uncertain. Based on a general probability distribution function (PDF) used to represent position uncertainty, an incremental processing model and optimization methods for probabilistic queries based on split intervals are proposed. By taking the approximate center of the probability distribution as the estimated position of a target object, the general position-uncertainty problem is solved and efficiency is improved at a minor cost in accuracy. Finally, the accuracy and efficiency of the proposed models and algorithms are verified on a real-life road-network dataset with synthetic object distributions.
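
      Conceptually, a probabilistic range query sums the probability mass of the candidate positions that fall within the query range on the network. The sketch below shows only that brute-force definition under an assumed discretized PDF and a toy distance function; the paper's split-interval and incremental optimizations, which avoid recomputing this sum as objects move, are not reproduced.

      def prob_within_range(candidates, network_dist, query, radius):
          # candidates: list of (position, probability) pairs approximating the
          # object's PDF over the road network; network_dist(a, b) returns the
          # shortest-path distance.  Returns P(object within `radius` of `query`).
          # This is the brute-force definition; the paper's techniques avoid
          # re-evaluating it from scratch.
          return sum(p for pos, p in candidates if network_dist(pos, query) <= radius)

      # usage with a toy distance function (illustrative, not a real road network)
      dist = lambda a, b: abs(a - b)
      pdf = [(10, 0.2), (12, 0.5), (25, 0.3)]   # positions as 1-D offsets along a road
      print(prob_within_range(pdf, dist, query=11, radius=3))   # 0.2 + 0.5 = 0.7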

    • Method of Detecting Application-Layer DDoS Based on the Out-Linking Behavior of Web Community

      2013, 24(6):1263-1273. DOI: 10.3724/SP.J.1001.2013.04274


      Abstract:Distributed denial of service (DDoS) attacks have become more and more difficult to detect because of the various hiding techniques they adopt, and application-layer DDoS attacks are becoming a major threat to current networks. This paper analyzes the stability of out-linking behavior at the level of the Web community and proposes an approach for detecting application-layer DDoS attacks aimed at Web servers. CUSUM is used to detect offsets of the out-linking parameters and to determine whether an attack is occurring. Because the out-linking parameters describe the group behavior of the Web community rather than individual behavior, it is difficult to evade detection by simulating normal accesses. The approach can not only detect anomalies in access behavior but also distinguish flash crowds from application-layer DDoS attacks. Results of simulated experiments show that this approach can effectively detect various types of DDoS attacks aimed at Web servers.
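
      The detection step relies on the standard one-sided CUSUM change-point test applied to a monitored out-linking parameter. The sketch below shows that generic test; the monitored parameter, the trained mean, the drift, and the threshold are illustrative assumptions rather than the paper's settings.

      def cusum_detect(series, target_mean, drift, threshold):
          # One-sided CUSUM: accumulate positive deviations of the monitored
          # out-linking parameter from its normal mean; an alarm is raised when the
          # cumulative offset exceeds the threshold.  target_mean, drift, and
          # threshold would be trained from normal Web-community traffic.
          s, alarms = 0.0, []
          for i, x in enumerate(series):
              s = max(0.0, s + (x - target_mean - drift))
              if s > threshold:
                  alarms.append(i)
                  s = 0.0                  # restart accumulation after reporting
          return alarms

      # usage: the last values deviate persistently, as an attack on out-linking would
      normal, attack = [1.0, 1.2, 0.9, 1.1], [2.5, 2.7, 2.6, 2.8]
      print(cusum_detect(normal + attack, target_mean=1.05, drift=0.2, threshold=3.0))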

    • Secure Routing Architecture Based on Accountability Realm

      2013, 24(6):1274-1294. DOI: 10.3724/SP.J.1001.2013.04284


      Abstract:The study proposes a novel routing architecture, the accountability realm based routing architecture (Arbra for short), to address prefix hijacking, routing forgery, and source address spoofing. In Arbra, an accountability realm (AR) is an independently administered network operated by a distinct administrative unit and is also the basic element of the network topology. Because an AR is responsible for the network actions of the users within it, it is called an accountability realm. The paper first designs a mapping method from autonomous systems to ARs, and then proposes a two-level routing architecture based on ARs. Further, the study builds a routing design framework that mainly includes a hybrid addressing scheme, a core routing protocol, an identifier mapping protocol, the packet transmission process, and a public key management mechanism. Finally, Arbra is compared with other well-known routing architectures (such as LISP and AIP), and its security, scalability, performance, and deployment are analyzed. The analysis and evaluation show that: (1) Arbra can solve prefix hijacking, route forgery, and source address spoofing; (2) the routing tables required by Arbra are smaller, so Arbra has better scalability; (3) the performance and deployment cost of Arbra are reasonable. Overall, Arbra is a feasible secure routing architecture.

    • Low-Interference Multicast in Wireless Mesh Networks

      2013, 24(6):1295-1309. DOI: 10.3724/SP.J.1001.2013.04291


      Abstract:Compared with wireless sensor networks and mobile ad hoc networks, wireless mesh networks mainly focus on improving multicast throughput, while interference severely limits network throughput. When building a multicast topology, traditional methods generally consider only the minimum cost or the shortest path, and only a few works have tried to improve performance by reducing interference. However, they calculate interference using methods designed for unicast topologies, which are not suitable for multicast. For example, if n nodes simultaneously receive packets from one node, these n nodes interfere with one another under unicast but not under multicast. Therefore, this study proposes the multicast conflict graph for calculating the interference of a multicast topology and then gives a concise definition of the interference of multicast trees. The study shows that building minimum-interference multicast trees (MITs) is an NP-complete problem and proposes a gravitation-based heuristic to approximate such optimal trees. To apply to multi-channel environments, the study also proposes the multi-hop channel algorithm (MH) for multicast, which can accommodate different interference ranges. Simulation results reveal that the algorithms can reduce interference and increase throughput in both single-interface single-channel and multi-interface multi-channel wireless mesh networks.
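
      The key observation is that a parent forwarding to its n children is a single multicast transmission, so those receivers do not interfere with one another as they would in a unicast conflict graph. The sketch below is a hypothetical, simplified conflict count between distinct multicast transmissions under an assumed interference predicate; it is not the paper's formal multicast conflict graph or its gravitation-based heuristic.

      from itertools import combinations

      def multicast_conflicts(tree, in_range):
          # tree: dict parent -> list of children (one multicast transmission per
          # parent); in_range(u, v) says whether u's transmission interferes at v.
          # A parent reaching its n children counts as a single transmission, so the
          # children do not conflict with each other, unlike in a unicast model.
          senders = [p for p, cs in tree.items() if cs]
          conflicts = []
          for a, b in combinations(senders, 2):
              if any(in_range(a, r) for r in tree[b]) or any(in_range(b, r) for r in tree[a]):
                  conflicts.append((a, b))
          return conflicts

      # usage with a toy 2-D layout (illustrative interference model)
      pos = {'s': (0, 0), 'u': (1, 0), 'v': (1, 1), 'w': (3, 0), 'x': (4, 0)}
      tree = {'s': ['u', 'v'], 'w': ['x']}
      in_range = lambda a, b: (pos[a][0] - pos[b][0])**2 + (pos[a][1] - pos[b][1])**2 <= 2.0
      print(multicast_conflicts(tree, in_range))   # s->{u,v} and w->{x} are far apart: []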

    • Dynamic Hierarchical Power Control in Multi-User Cognitive Wi-Fi 2.0 Wireless Networks

      2013, 24(6):1310-1323. DOI: 10.3724/SP.J.1001.2013.04313


      Abstract:Cognitive Wi-Fi 2.0 wireless networks, a novel open spectrum-sharing technology with the typical characteristics of cognition, reconfiguration, and self-organization that is used to improve network performance, have attracted great attention from the wireless community. In this paper, to capture the hierarchical decision-making properties of power control, the study proposes a dynamic hierarchical power control scheme for cognitive Wi-Fi 2.0 wireless networks. The spectrum-sharing issue is formulated as a Stackelberg game with the more foresighted cognitive users as the leaders and the short-sighted users as the followers, and closed-form power policies are derived for each. The analysis further shows that this policy achieves optimal performance in the weak-interference environment. Simulation results show that the proposed algorithm converges after a limited number of iterations and effectively guarantees the optimal trade-off between individual performance and overall network performance.
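
      To make the leader/follower structure concrete, the sketch below runs a generic two-user Stackelberg power-control game numerically: the short-sighted follower best-responds to the leader's power, while the foresighted leader anticipates that reaction. The utility function, channel gains, and power price are invented for illustration; the paper's closed-form policies for cognitive Wi-Fi 2.0 are not reproduced.

      import numpy as np

      # A generic two-user Stackelberg power-control sketch on one shared channel.
      P = np.linspace(0.01, 1.0, 100)               # feasible power levels
      g_ll, g_ff, g_lf, g_fl = 1.0, 1.0, 0.3, 0.3   # direct and cross channel gains (assumed)
      noise, price = 0.1, 1.0

      def utility(p_own, p_other, g_own, g_cross):
          sinr = g_own * p_own / (noise + g_cross * p_other)
          return np.log2(1.0 + sinr) - price * p_own

      def follower_best_response(p_leader):
          # Short-sighted follower: myopically best-responds to the leader's power.
          return P[np.argmax([utility(p, p_leader, g_ff, g_lf) for p in P])]

      def leader_policy():
          # Foresighted leader: anticipates the follower's reaction when choosing power.
          payoffs = [utility(p, follower_best_response(p), g_ll, g_fl) for p in P]
          return P[np.argmax(payoffs)]

      p_l = leader_policy()
      p_f = follower_best_response(p_l)
      print(f"leader power {p_l:.2f}, follower power {p_f:.2f}")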

    • Guess and Determine Attack on SNOW3G and ZUC

      2013, 24(6):1324-1333. DOI: 10.3724/SP.J.1001.2013.04287


      Abstract:The SNOW 3G stream cipher is the core of the standardized 3G Partnership Project (3GPP) confidentiality and integrity algorithms UEA2 & UIA2, while the ZUC stream cipher is the core of the standardized 3GPP confidentiality and integrity algorithms 128-EEA3 & 128-EIA3. So far, no Guess-and-Determine attacks have been applied to SNOW 3G. In this paper, a Guess-and-Determine attack on SNOW 3G is proposed with a computational complexity of 2^320, requiring 9 keystream words (each word consists of 32 bits). After analyzing the design of ZUC, a half-word-based Guess-and-Determine attack on ZUC is introduced, based on transforming the word-based nonlinear function of ZUC into a half-word-based nonlinear function. The attack on ZUC has a computational complexity of 2^392 and requires 9 keystream words, which is better than the previous Guess-and-Determine attack on ZUC. These results show that ZUC has much better resistance against Guess-and-Determine attacks than SNOW 3G, even though the internal state size of ZUC is smaller than that of SNOW 3G.

    • Protocol Independent Identification of Encrypted Traffic Based on Weighted Cumulative Sum Test

      2013, 24(6):1334-1345. DOI: 10.3724/SP.J.1001.2013.04279


      Abstract:A protocol-independent identification algorithm is proposed to identify encrypted traffic produced by both public and private encryption protocols. The randomness of each packet is evaluated with a cumulative sum test, and the per-packet results are then combined with weights. A test is performed as each new packet arrives rather than after all packets have been received, so time-consuming computation is avoided. The number of packets tested can vary dynamically according to delay and accuracy requirements. Experimental results show that the algorithm achieves accuracy above 90% for SSL and private encryption protocol traffic.
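
      The sketch below illustrates the per-packet, online flavor of such a test: a cumulative-sums randomness statistic over payload bits is updated as packets arrive, so a flow can be classified without buffering it entirely. The statistic, its normalization, and the threshold are simplifications assumed for illustration; the paper's weighted combination of results is not reproduced.

      import os

      def payload_bits(payload):
          for byte in payload:
              for k in range(8):
                  yield (byte >> k) & 1

      class OnlineCusumRandomness:
          # Maintains a cumulative-sums statistic over payload bits as packets arrive.
          # The normalized maximum excursion of a random bit stream stays small, while
          # low-entropy plaintext drifts; the threshold here is illustrative.
          def __init__(self, threshold=2.5):
              self.s = 0            # running sum of +/-1 bits
              self.max_abs = 0
              self.n = 0
              self.threshold = threshold

          def add_packet(self, payload):
              for b in payload_bits(payload):
                  self.s += 1 if b else -1
                  self.max_abs = max(self.max_abs, abs(self.s))
                  self.n += 1

          def looks_encrypted(self):
              return self.n > 0 and self.max_abs / (self.n ** 0.5) < self.threshold

      # usage: low-entropy plaintext vs. pseudo-random bytes (illustrative)
      det, enc = OnlineCusumRandomness(), OnlineCusumRandomness()
      det.add_packet(b"A" * 512)
      enc.add_packet(os.urandom(512))
      print(det.looks_encrypted(), enc.looks_encrypted())   # typically: False True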

    • Review Articles
    • Research on the Technologies of Byzantine System

      2013, 24(6):1346-1360. DOI: 10.3724/SP.J.1001.2013.04395


      Abstract:Nowadays, in order to address the reliability problem in ever-growing distributed systems, Byzantine fault-tolerant systems have been researched widely for their ability to tolerate arbitrary faults. In this paper, the definitions of Byzantine systems and the methods for evaluating and improving their performance are introduced. After that, some unresolved problems and future development trends are indicated. Finally, after analyzing the state of current studies, several conclusions are drawn: 1) The cost of running a Byzantine system is still much higher than that of a non-Byzantine system; approaches to increase performance and decrease overhead need to be explored in further study. 2) While Byzantine fault detection and proactive recovery can keep a Byzantine system from breaking down, they still have some drawbacks, and how to eliminate these drawbacks should be studied. 3) Different applications require different aspects of optimization, and how to build practical Byzantine systems also needs to be studied.

    • Static Analysis for the Placement of Application-Level Checkpoints on Heterogeneous System

      2013, 24(6):1361-1375. DOI: 10.3724/SP.J.1001.2013.04325


      Abstract:Application-level checkpointing is a widely studied technique in large-scale scientific computing; it allows programmers to choose appropriate places to save crucial data so that the fault-tolerance overhead can be reduced. There are two key issues in adopting this technique: finding the proper placements and reducing the amount of data saved by global checkpoints. The same problems are encountered on emerging heterogeneous systems that use general-purpose computation on GPUs. Considering the architecture of heterogeneous systems and the characteristics of applications, this paper performs static analysis of checkpoint configurations and placements, and two novel approaches are proposed: synchronous checkpoint placement and asynchronous checkpoint placement. The checkpoint placement problem is mathematically modeled and solved, and the performance of the approaches is finally evaluated through experiments.

    • Multicore-Oriented Service Optimization of Parallel Discrete Event Simulation

      2013, 24(6):1376-1389. DOI: 10.3724/SP.J.1001.2013.04280


      Abstract:CPU development has entered the multicore era. Current parallel simulation kernels utilize multicore resources through multiple processes, which leads to inefficient synchronization and communication. This study optimizes two services based on the hierarchical parallel simulation kernel (HPSK) model to support high-performance simulation in a multithreaded paradigm. First, the paper proposes a protocol for EETS computation based on hybrid time management, which can be flexibly configured as an asynchronous EETS algorithm according to the application's characteristics. Second, the study proposes an event management algorithm based on the characteristics of event interaction, which creates events lock-free, commits events asynchronously, and transfers events by pointer, in order to eliminate lock overhead and reduce memory usage. Experimental results from the phold benchmark show that the optimized HPSK works well under different conditions.

    • Review Articles
    • Deterministic Replay for Parallel Programs in Multi-Core Processors

      2013, 24(6):1390-1402. DOI: 10.3724/SP.J.1001.2013.04392


      Abstract:Deterministic replay of parallel programs on multi-core processor systems is important for the debugging and dissemination of parallel programs. However, due to the difficulty of tracking unsynchronized accesses to shared memory in multiprocessors, industrial-strength deterministic replay for parallel programs has not yet emerged. This paper analyzes the non-deterministic events in multi-core processor systems and summarizes the metrics of deterministic replay schemes. After surveying the research on deterministic multi-core replay in recent years, the paper introduces the proposed deterministic replay schemes for parallel programs on multi-core processor systems, investigates the characteristics of software-only and hardware-assisted schemes, analyzes current research, and gives prospects for deterministic replay of parallel programs on multi-core processor systems.

    • Enabling Elasticity of Key-Value Stores in the Cloud Using Cost-Aware Live Data Migration

      2013, 24(6):1403-1417. DOI: 10.3724/SP.J.1001.2013.04352


      Abstract:Key-value stores play an important role in today's large-scale, high-performance cloud applications. Elastic scaling and load rebalancing with low-cost live data migration are critical to enabling the elasticity of key-value stores in the cloud. Reducing the migration cost is a difficult problem that cloud providers are trying to solve. Many existing works address this issue in non-virtualized environments; these approaches, however, may not be well suited to cloud-based key-value stores. To address this challenge, the study tackles the data migration problem under a load-rebalancing scenario. The paper builds a cost model that is aware of the underlying VM interference and of the trade-off between migration time and performance impact. A cost-aware migration algorithm is designed that uses the cost model and the balance rate to guide the choice of possible migration actions. An evaluation using the Yahoo! Cloud Serving Benchmark shows the effectiveness of the approach.
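
      The sketch below illustrates the general shape of such a decision: a cost function that weighs estimated migration time against interference-induced performance impact, used to pick the cheapest candidate move that still improves the balance rate. The model structure, field names, and weights are assumptions made for illustration, not the paper's cost model.

      def migration_cost(part_bytes, bandwidth, interference, w_time=1.0, w_perf=1.0):
          # Illustrative cost model: weighs estimated migration time against the
          # performance impact caused by VM interference on the source/target nodes.
          return w_time * part_bytes / bandwidth + w_perf * interference

      def pick_migration(actions, balance_gain_min=0.05):
          # actions: list of dicts with keys 'partition', 'bytes', 'bandwidth',
          # 'interference', 'balance_gain'.  Pick the cheapest action that still
          # improves the load-balance rate by a minimum margin.
          useful = [a for a in actions if a['balance_gain'] >= balance_gain_min]
          return min(useful, default=None,
                     key=lambda a: migration_cost(a['bytes'], a['bandwidth'], a['interference']))

      # usage (hypothetical candidate moves of key-range partitions between nodes)
      candidates = [
          {'partition': 'p1', 'bytes': 2e9, 'bandwidth': 1e8, 'interference': 0.2, 'balance_gain': 0.10},
          {'partition': 'p2', 'bytes': 5e8, 'bandwidth': 1e8, 'interference': 0.8, 'balance_gain': 0.08},
          {'partition': 'p3', 'bytes': 1e9, 'bandwidth': 1e8, 'interference': 0.1, 'balance_gain': 0.02},
      ]
      print(pick_migration(candidates))   # p2: cheaper than p1; p3 filtered out by balance gain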

Contact Information
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: 70 RMB