• Volume 16, Issue 10, 2005 Table of Contents
    • Approximate-Backbone Guided Fast Ant Algorithms to QAP

      2005, 16(10):1691-1698.

      Abstract: The quadratic assignment problem (QAP) is one of the classical combinatorial optimization problems and is known for its diverse applications. This paper presents a new fast ant heuristic for the QAP, the approximate-backbone guided fast ant colony algorithm (ABFANT). The main idea is to fix the approximate backbone, i.e., the intersection of several local optimal permutations of the QAP instance. After fixing it, the search space of the instance can be smoothed without losing search capability, and the instance is then solved in this much smoother space using the fast ant colony algorithm (FANT), one of the best-known heuristics for the QAP. Comparisons of ABFANT and FANT within a given iteration number are performed on publicly available QAP instances from QAPLIB. The results demonstrate that ABFANT significantly outperforms FANT. Furthermore, the idea is general and applicable to other QAP heuristics.
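The backbone construction described above can be made concrete. The sketch below is illustrative only (the function name and instance data are assumptions, not the paper's code): it takes several local-optimal permutations of a QAP instance, each mapping facility i to location perm[i], and returns the assignments on which they all agree, i.e., the approximate backbone that ABFANT fixes before searching the smoothed space.

```python
def approximate_backbone(local_optima):
    """Facility-to-location assignments shared by all local optima.

    Each permutation maps facility i to location perm[i]; the backbone
    is the set of (facility, location) pairs common to every optimum.
    """
    first = local_optima[0]
    return {i: first[i]
            for i in range(len(first))
            if all(p[i] == first[i] for p in local_optima)}

# Three local optima for a 5-facility instance (made-up data).
optima = [
    [0, 2, 1, 4, 3],
    [0, 3, 1, 4, 2],
    [0, 2, 1, 4, 3],
]
backbone = approximate_backbone(optima)
# Facilities 0, 2, and 3 are assigned identically in all three optima.
```

Fixing the agreed assignments leaves only the remaining facilities free, which is the search-space smoothing the abstract refers to.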

    • A Practical Algorithm for Haplotyping by Maximum Parsimony

      2005, 16(10):1699-1707.

      Abstract: Haplotypes, rather than genotypes, are required in some disease-susceptibility and drug-response tests. However, obtaining haplotypes experimentally is both time-consuming and expensive, so genotype data are usually collected in the laboratory first, and haplotype data are then inferred from them by computational approaches. Unlike Clark's well-known haplotype inference method, the model proposed by Gusfield and by Wang et al. follows the maximum parsimony principle: it tries to find a minimum set of haplotypes that can explain the genotype samples. This parsimony model overcomes some weaknesses of Clark's method. For the parsimony model, this paper presents a polynomial-time greedy algorithm and a compound algorithm that combines the greedy policy with a branch-and-bound strategy in a uniform framework. Compared with the original complete algorithm proposed by Wang et al., the greedy approximation algorithm runs much faster while producing results of relatively high accuracy. The compound algorithm is also a complete algorithm. Simulation results show that it is much more efficient than the original complete algorithm and can be applied to instances of much larger scale.
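At the core of the parsimony model is the test of whether a pair of haplotypes explains a genotype. A minimal sketch, assuming the common encoding in which a genotype site is 0 (homozygous 0), 1 (homozygous 1), or 2 (heterozygous); the helper name is hypothetical and not from the paper:

```python
def explains(h1, h2, genotype):
    """True if the haplotype pair (h1, h2) resolves the genotype.

    At each site: genotype 0 requires both haplotypes to carry 0,
    genotype 1 requires both to carry 1, and genotype 2 (heterozygous)
    requires the two haplotypes to differ.
    """
    for a, b, g in zip(h1, h2, genotype):
        if g == 2:
            if a == b:
                return False
        elif a != g or b != g:
            return False
    return True

# Genotype 0,2,1,2 is resolved by the pair (0,0,1,1) and (0,1,1,0).
assert explains([0, 0, 1, 1], [0, 1, 1, 0], [0, 2, 1, 2])
assert not explains([0, 0, 1, 1], [0, 0, 1, 1], [0, 2, 1, 2])
```

Both the greedy and the branch-and-bound algorithms repeatedly evaluate this predicate while searching for a smallest explaining haplotype set.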

    • Effective Synchronization Removal in Concurrent Java Programs

      2005, 16(10):1708-1716.

      Abstract: Synchronization operations incur a large overhead in concurrent Java programs. This paper proposes an effective and precise static analysis algorithm for removing redundant synchronization. The algorithm consists of two phases, basic analysis and inter-thread temporal analysis, both of which take the effects of control-flow relations and thread-control relations into account. This paper also constructs a Java compiler, JTool, and implements the algorithm in it. For deterministic single-threaded programs the removal ratio reaches 100%, and for multi-threaded programs the removal ratio is higher than that of existing analysis tools.

    • An Extended Resources Model Based on Distributed Cognition Theory

      2005, 16(10):1717-1725.

      Abstract: Distributed cognition theory serves as a guide in human-computer interaction research by coordinating the interaction between humans and computers and combining the strengths of both. Although the resources model based on distributed cognition theory has been successfully employed for analyzing human-computer interaction, the model leads, to some extent, to confusion in its representative forms because it lacks support for complex user tasks and precise definitions of its elements. Therefore, an extended resources model (ERM) is constructed by using distributed cognition theory to connect actions with representations in human-computer interaction and to guide the design and realization of interfaces. The extended resources model supports actions with static constructions and interactive strategies so as to decrease the human cognitive burden of interaction. This work is beneficial for designing interfaces according to human cognitive characteristics.

    • A Real-Time Dynamic Voltage Scaling Algorithm Based on Syntax Tree for Low Power

      2005, 16(10):1726-1734.

      Abstract: Dynamic voltage scaling is an effective low-power technique; directed by the compiler, it can reduce a computer's energy consumption effectively. Based on the syntax tree of the programming language, a real-time dynamic voltage scaling algorithm for low power is presented. Assisted by static timing analysis, the algorithm performs intra-task dynamic voltage scaling by automatically inserting voltage-scaling source code. The algorithm has been implemented in the real-time low-power system RTLPower and obtains energy reductions of up to 50 percent over no power management on some real-time embedded benchmarks.

    • Event Constraint Based Software Process Validation

      2005, 16(10):1735-1742.

      Abstract: A software process is a human-centered system characterized by dynamic and continuous evolution. The physical execution of a defined process will normally deviate from its process model. This paper uses E-CSPE (extended constraints on succeeding and preceding events) constraints to carry out process validation and deviation measurement. The event constraints are defined based on the process model, the execution of a process instance is recorded as an event sequence, and the event sequence is analyzed to determine to what extent each defined event constraint is covered or violated. The result is used to compute the EPD (event-constraint-based process difference) and EAD (event-constraint-based activity deviation) metrics. The EPD metric reflects the difference between a process execution and its process model, while the EAD metric provides evidence for process evolution.
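The coverage analysis described above can be sketched for one simple constraint form. Since the abstract does not give the exact E-CSPE constraint syntax, the example below assumes a plain "B must precede A" constraint and counts, over a recorded event sequence, how often it is covered or violated; such counts are the raw material for EPD- and EAD-style metrics.

```python
def check_precedes(sequence, before, after):
    """For each occurrence of `after`, check whether `before` occurred
    earlier in the sequence; return (covered, violated) counts."""
    covered = violated = 0
    seen_before = False
    for ev in sequence:
        if ev == before:
            seen_before = True
        elif ev == after:
            if seen_before:
                covered += 1
            else:
                violated += 1
    return covered, violated

# A recorded process instance (illustrative activity names).
trace = ["design", "review", "code", "code", "test"]
# Constraint: "code" must be preceded by "review" -> fully covered.
assert check_precedes(trace, "review", "code") == (2, 0)
# Constraint: "test" must be preceded by "deploy" -> violated once.
assert check_precedes(trace, "deploy", "test") == (0, 1)
```

Aggregating such per-constraint coverage and violation counts across a process instance gives a process-level difference measure in the spirit of EPD, and grouping them by activity gives an EAD-style deviation measure.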

    • Review Articles
    • Research on Theory and Approach of Provable Security

      2005, 16(10):1743-1756.

      Abstract: This paper presents a survey of the theory of provable security and its applications to the design and analysis of security protocols. It clarifies what provable security is, explains some basic notions involved in the theory, and illustrates the basic idea of the random oracle model. It also reviews the development and advances of provably secure public-key encryption and digital signature schemes, in both the random oracle model and the standard model, as well as the applications of provable security to the design and analysis of session-key distribution protocols.

    • A Formal Method for Analyzing Electronic Commerce Protocols

      2005, 16(10):1757-1765.

      Abstract: A formal method that can be used to analyze security properties such as accountability and fairness in electronic commerce protocols is presented. Compared with previous work, the main contributions are as follows. Firstly, a formal definition is given to the possession set of each protocol participant, and the initial possession set depends only on the environment. Secondly, the set of initial state assumptions is divided into three categories (basic assumptions, trust assumptions, and protocol comprehension assumptions) in order to avoid analysis errors caused by informal initial state assumptions. Thirdly, the set of trust assumptions is articulated by formal specification at a finer level of granularity, exposing the essence of the protocol. Fourthly, an axiom system is established, making the new approach more rigorous and expressive.

    • A Minimum Delay Spanning Tree Algorithm for the Application-Layer Multicast

      2005, 16(10):1766-1773.

      Abstract: Real-time transmission, which is delay sensitive, is an important aspect of application-layer multicast, and building an efficient multicast tree is crucial to guaranteeing low delay. This research focuses on minimum-delay spanning tree algorithms for application-layer multicast. Firstly, it is shown that the total delay is affected by communication delay, processing delay, and node degree. The network is then modeled as a node- and edge-weighted directed graph with bounded node degrees, in which the problem is shown to be NP-hard. Therefore, two heuristic algorithms are proposed, based on the maximum degree and on the maximum-length path respectively. Finally, simulations demonstrate that the proposed algorithms are effective.
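To make the model concrete, here is a generic greedy sketch for building a delay-oriented multicast tree under node-degree bounds. This is an assumption-laden illustration (the delay matrix is made up), not either of the paper's two heuristics.

```python
import heapq

def degree_bounded_tree(delay, root, max_degree):
    """Greedy multicast tree: repeatedly attach the unattached node
    reachable with the smallest accumulated root-to-node delay through
    an attached parent whose degree budget is not yet exhausted."""
    n = len(delay)
    parent = {root: None}
    dist = {root: 0}
    children = {i: 0 for i in range(n)}
    heap = [(delay[root][v], root, v) for v in range(n) if v != root]
    heapq.heapify(heap)
    while len(parent) < n and heap:
        d, u, v = heapq.heappop(heap)
        if v in parent or children[u] >= max_degree:
            continue
        parent[v] = u          # attach v under u
        dist[v] = d
        children[u] += 1
        for w in range(n):     # v can now serve as a relay
            if w not in parent:
                heapq.heappush(heap, (d + delay[v][w], v, w))
    return parent, dist

# Illustrative symmetric delay matrix for 4 nodes.
delay = [
    [0, 1, 4, 4],
    [1, 0, 1, 5],
    [4, 1, 0, 1],
    [4, 5, 1, 0],
]
parent, dist = degree_bounded_tree(delay, root=0, max_degree=2)
```

Each node is attached through the parent that minimizes its accumulated root-to-node delay among parents whose degree bound is not exhausted; here the greedy result is the chain 0 -> 1 -> 2 -> 3.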

    • A Generic Audit Policy Model on Multilevel Secure DBMS

      2005, 16(10):1774-1783.

      Abstract: This paper proposes a generic audit policy model for multilevel secure DBMSs. The model is expressively powerful: it not only expresses audit policies based on periodical time constraints, but also implements rule-based audit policy deduction. Furthermore, fine-grained audit policies are possible in this model with the introduction of object attribute predicates. The decidability of the model is proven and a decision algorithm is presented.

    • Extending the Theory of Strand Spaces

      2005, 16(10):1784-1789.

      Abstract: The current strand space model cannot analyze some complex security protocols because of its limited abstraction of cryptographic primitives, so it is necessary to extend the original theory of strand spaces so that it can be applied to real-world protocols. The penetrator strands are extended by adding signature, signature verification, and HMAC (keyed-hashing for message authentication code) traces to them. A new notion of ideal is defined, and the relevant propositions and theorems are modified and proved accordingly. The extended honest ideals model not only inherits its original characteristics, but is also suited to the analysis of protocols with more cryptographic primitives, such as JFK or IKEv2.

    • P2P-Oriented Secure Reliable Storage System

      2005, 16(10):1790-1797.

      Abstract: This paper proposes a cooperative secure reliable storage system based on pure P2P that supports self-repair. Participant nodes form an overlay using the Paramecium protocol, or other compatible distributed hash table (DHT) protocols, which maintains the organization of the system and provides routing service. A PKI authentication mechanism is introduced to ensure security in an open environment, binding user data with replica identifiers supports secure auto-repair, and the introduction of replica types provides secure shared writing. Preliminary analysis and experiments demonstrate that the system maintains high reliability and read/write performance in realistic settings while keeping maintenance bandwidth costs low.

    • A Multiplicative Watermark Detection Algorithm for Digital Images in the DCT Domains

      2005, 16(10):1798-1804.

      Abstract: Watermark detection plays a crucial role in digital watermarking. It has traditionally been tackled with correlation-based techniques; however, correlation-based detection is not the optimum choice when the host media do not follow a Gaussian distribution or when the watermark is not embedded additively. This paper addresses DCT (discrete cosine transform) domain multiplicative watermark detection for digital images. First, generalized Gaussian distributions are applied to statistically model the AC (alternating current) DCT coefficients of the original image. Then, exploiting the imperceptibility constraint of watermarking, watermark detection is formulated as a problem of weak-signal detection in non-Gaussian noise. A binary hypothesis test concerning whether or not an image is watermarked is established, and an optimum structure for blind watermark detection is derived. Experimental results indicate the superiority of the new detector when the embedding strengths are unknown to the detector, so the proposed detector can be used for copyright protection of digital multimedia data.
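The hypothesis test can be sketched numerically. Assuming a generalized Gaussian host with exponent c and scale parameter beta, and a multiplicative rule y_i = x_i(1 + gamma*w_i), the log-likelihood ratio between the watermarked and unwatermarked hypotheses reduces to scaled coefficient magnitudes plus a Jacobian term, since the normalization constants cancel. This is an illustrative reconstruction under those assumptions, not the paper's exact detector; c = 1 (a Laplacian host) is used so samples are easy to draw.

```python
import math
import random

def ggd_llr(y, w, gamma, beta, c):
    """Log-likelihood ratio for a multiplicative watermark w in data y
    whose host follows a generalized Gaussian with exponent c and scale
    parameter beta: log f(y | watermarked) - log f(y | unwatermarked)."""
    llr = 0.0
    for yi, wi in zip(y, w):
        s = 1.0 + gamma * wi
        llr += (beta * abs(yi)) ** c - (beta * abs(yi) / s) ** c - math.log(s)
    return llr

random.seed(7)
n, gamma, b = 2000, 0.2, 1.0
c, beta = 1.0, 1.0 / b                      # c = 1: Laplacian host
w = [random.choice((-1, 1)) for _ in range(n)]
host = [random.choice((-1, 1)) * random.expovariate(1.0 / b) for _ in range(n)]
marked = [x * (1.0 + gamma * wi) for x, wi in zip(host, w)]

llr_marked = ggd_llr(marked, w, gamma, beta, c)
llr_clean = ggd_llr(host, w, gamma, beta, c)
# The statistic separates watermarked from unwatermarked data.
```

Thresholding the statistic at zero implements the binary hypothesis test without access to the original image, i.e., blind detection.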

    • An Electronic Voting Scheme

      2005, 16(10):1805-1810.

      Abstract: A dynamic multi-secret sharing threshold scheme is presented for large-scale electronic voting systems with many talliers (tallying authorities). Even in the presence of adaptive adversaries, the scheme guards the production, encryption, transmission, decryption, and final tallying of ballots against attack, thus guaranteeing robustness. The verifiability of voters' qualifications and talliers' identities is achieved by the dynamic multi-secret sharing scheme without invoking additional zero-knowledge proofs, while maintaining privacy, universal verifiability, and anonymity of ballots. The scheme achieves better communication efficiency and security than previously proposed schemes.
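The threshold machinery underlying such voting schemes can be illustrated with classic Shamir (t, n) secret sharing over a prime field. This is only the basic building block, not the paper's dynamic multi-secret construction:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it.
    The secret is the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

random.seed(1)
shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

With fewer than t shares the polynomial is information-theoretically undetermined, which is what lets a quorum of talliers decrypt and tally while no smaller coalition learns anything.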

    • A Study of Online Financial Trading Confirmation Scheme

      2005, 16(10):1811-1815.

      Abstract: To find a way of fast confirmation for the real-time clearing of online financial trading, this paper analyzes the typical online trading process and its security characteristics. Based on the principle of elliptic curve signatures, a confirmation scheme is proposed that confirms the trading result step by step and completes it in a single operation at final clearing. The scheme avoids a complicated PKI system and the one-by-one confirmation process, realizes non-repudiation efficiently, keeps the proof of signing easily, and makes straight-through processing and online clearing possible.

    • An Anonymous Fingerprinting Scheme Based on Additively Homomorphic Public Key Cryptosystem

      2005, 16(10):1816-1821.

      Abstract: This paper proposes an anonymous fingerprinting scheme based on additively homomorphic public-key cryptosystems. The proposed scheme enables the merchant to identify illegal distributors without the help of a trusted third party when he or she finds an illegally redistributed fingerprinted copy. Furthermore, it allows two-party trials, i.e., there is no need for the accused (and possibly innocent) buyer to take part in the dispute resolution protocol and reveal his or her secrets. In addition, the problem of how to construct anonymous public/private key pairs is also addressed in the scheme. The security analysis shows that the proposed scheme is secure for both merchant and buyer, and provides anonymity and unlinkability for the buyer.
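The additive homomorphism the scheme relies on can be demonstrated with a toy Paillier instance. The parameters below are illustrative only (real deployments need large primes), and this is background on the primitive, not the paper's construction:

```python
import math
import random

# Tiny Paillier cryptosystem: ciphertexts live in Z*_{n^2}, and
# multiplying ciphertexts adds the underlying plaintexts mod n.
p, q = 101, 103
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)   # since g = n+1, L(g^lam mod n^2) = lam

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu = lam^-1 mod n
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(20), encrypt(22)
# Additive homomorphism: D(E(m1) * E(m2)) = m1 + m2.
assert decrypt(c1 * c2 % n2) == 42
assert decrypt(c1) == 20
```

This property is what lets the buyer's fingerprint be embedded under encryption: the merchant operates on ciphertexts while the corresponding plaintexts are combined additively.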

    • Three Algorithms for Improving the Effectiveness of Software Pipelining: Comparison and Combination

      2005, 16(10):1822-1832.

      Abstract: Software pipelining is a loop scheduling technique that extracts instruction-level parallelism by overlapping the execution of several consecutive iterations. One of its drawbacks is its high register requirements, which may cause software pipelining to fail owing to insufficient static general registers on Itanium. This paper evaluates the register requirements of software-pipelined loops and presents three new methods for software-pipelining loops that require more static general registers than the Itanium processor provides. They reduce register pressure either by reducing the instructions in the loop body or by allocating stacked non-rotating registers or rotating registers in the register stack to serve as static registers. These methods improve on existing techniques by increasing the number of loops that can be software-pipelined, and thus the performance gain from software pipelining. They have been implemented in the Open Research Compiler (ORC) targeting the Itanium processor and perform well on loops in the NAS benchmarks; for some benchmarks, performance is improved by more than 21%.

    • Hiding Memory Access Latency in Software Pipelining

      2005, 16(10):1833-1841.

      Abstract: Software pipelining tries to improve the performance of a loop by overlapping the execution of several successive iterations. As processors get faster, memory access latency becomes a bottleneck that restricts higher performance, so software pipelining has been combined with several memory optimization techniques to hide memory access latency. This paper presents a foresighted latency modulo scheduling (FLMS) algorithm, which determines the latency of load instructions according to features of the loop. Experimental results show that FLMS decreases stall time and improves program performance.

    • Optimization to Prevent Cache Penalty in Modulo Scheduling

      2005, 16(10):1842-1852.

      Abstract: Software pipelining can speed up loop execution, and modulo scheduling is a widely used heuristic for software pipelining. Cache hierarchies improve memory systems, but virtually all processor implementations introduce additional delays (cache penalties). This paper demonstrates the cache penalties that modulo scheduling can incur and presents an algorithm named Prevent Cache Penalty in Modulo Scheduling (PCPMS) that avoids them. Experimental results show that PCPMS can prevent cache penalties and improve program performance.
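For background on modulo scheduling itself: the scheduler picks an initiation interval (II) no smaller than the bounds imposed by resources and recurrences. The sketch below computes the standard minimum II from textbook formulas; it is assumed background, not the PCPMS algorithm:

```python
import math

def min_initiation_interval(n_instructions, n_functional_units, recurrences):
    """Lower bound on the initiation interval for modulo scheduling:
    ResMII = ceil(instructions / functional units), and
    RecMII = max over dependence cycles of ceil(latency / distance).
    `recurrences` is a list of (total_latency, iteration_distance)."""
    res_mii = math.ceil(n_instructions / n_functional_units)
    rec_mii = max((math.ceil(lat / dist) for lat, dist in recurrences),
                  default=0)
    return max(res_mii, rec_mii)

# 10 instructions on 4 units gives ResMII = 3; a dependence cycle with
# total latency 8 spanning 2 iterations gives RecMII = 4, so MII = 4.
assert min_initiation_interval(10, 4, [(8, 2)]) == 4
```

A cache miss that stretches a load's effective latency beyond what the schedule assumed stalls the pipelined kernel, which is the penalty PCPMS is designed to prevent.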

    • Summarization of Results of Program Funded by NSFC in the Field of Natural Language Processing in Recent Years

      2005, 16(10):1853-1858.

      Abstract: This paper summarizes the results of programs funded by the NSFC in the field of natural language processing in recent years, covering Chinese information processing technology, natural language processing application technology, and minority-language information processing technology.

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825 / CN 11-2560/TP
  • Domestic Price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support:Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063