• Volume 26, Issue 12, 2015 Table of Contents
    • Complexity Metrics for Regression Testing of Component-Based Software

      2015, 26(12):3043-3061. DOI: 10.13328/j.cnki.jos.004876

      Abstract: Component-based software construction is a widely used approach in software development, aiming to reduce engineering effort and speed up the development cycle. During software maintenance, various update approaches can be used to realize specific change requirements of component-based software, and different update approaches may lead to very different regression testing complexity. However, little research work addresses regression testing complexity in software maintenance. This paper proposes a framework for measuring and analyzing regression testing complexity based on a set of change and impact complexity models and metrics. It presents an approach to complexity metrics for regression testing of component-based software, together with a graphic model and several measurements for the complexity metrics, which cover both maintenance and retesting complexity. An experimental study compares the complexity of regression testing using data from several independent groups. The results indicate that the presented approach is feasible for visually comparing the regression testing complexity resulting from different update methods.

    • Method Name Recommendation Based on Source Code Depository and Feature Matching

      2015, 26(12):3062-3074. DOI: 10.13328/j.cnki.jos.004817

      Abstract: The quality of method names is critical for the readability and maintainability of a program. However, it is difficult for software engineers, especially non-English-speaking, inexperienced engineers, to propose high-quality method names. To address this issue, this paper proposes an approach to recommending method names. First, a method corpus is constructed from open source applications. For a given method f to be named, similar methods are retrieved from the corpus. The names of these retrieved methods are divided into phrases, and the features of these methods are extracted as well. A mapping between the phrases and features is then created to derive a list of candidate phrases and features for the method to be named, and these phrases are finally assembled into candidate method names. The proposed approach is evaluated on 1430 methods from open source applications. Evaluation results suggest that 22.7 percent of the recommended method names are identical to the original ones, and 57.9 percent have the same or almost the same keywords as the original ones.
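
      To make the retrieval-and-mapping pipeline concrete, here is a minimal Python sketch of a corpus-based name recommender. It is an illustration under assumed data formats (token-set features, Jaccard similarity, frequency voting), not the paper's actual feature extraction or mapping model:

      from collections import Counter
      import re

      def camel_split(name):
          """Split a camelCase method name into lowercase phrase tokens."""
          return [t.lower() for t in re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+', name)]

      def jaccard(a, b):
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def recommend(features, corpus, k=10):
          """corpus: list of (method_name, feature_tokens) pairs (assumed format).
          Returns candidate name phrases ranked by similarity-weighted frequency
          among the k corpus methods most similar to the method being named."""
          ranked = sorted(corpus, key=lambda m: jaccard(features, m[1]), reverse=True)
          votes = Counter()
          for name, feats in ranked[:k]:
              w = jaccard(features, feats)
              for phrase in camel_split(name):
                  votes[phrase] += w
          return [p for p, _ in votes.most_common()]

      # Toy usage: features extracted from the body of the method to be named.
      corpus = [("readFileToString", ["file", "stream", "read", "string"]),
                ("writeStringToFile", ["file", "stream", "write", "string"])]
      print(recommend(["read", "file", "buffer", "string"], corpus))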

    • Computing Software Reliability Based on Program Invariants

      2015, 26(12):3075-3087. DOI: 10.13328/j.cnki.jos.004803

      Abstract: Existing methods for computing software reliability use input and output data from testing. However, such data cannot truly reflect the behavior of the software. For example, testing may produce false positive outputs, and an input that triggers multiple faults cannot be revealed by testing alone. This paper attempts to use program invariants to compute software reliability. First, test cases are selected to dynamically obtain program invariants. Then, failure data are extracted from these invariants. Finally, software reliability is computed based on the Nelson model. In the experiments, the reliability of each program in the Siemens suite is computed. Program invariants are obtained by applying three different testing methods: random, branch coverage, and block coverage, and the reliabilities are then computed from these invariants. To check the correctness of the results, software reliabilities are also computed with traditional methods. A comparison shows that the differences between the two types of reliabilities are small regardless of which testing method is selected. Further variance analysis shows that the reliabilities obtained by the proposed method fluctuate less, i.e., are more stable, than those from existing methods, and are thus closer to the real reliability of the system. This suggests that software reliability can be computed based on program invariants.
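
      For context, the Nelson model estimates reliability as the fraction of test runs that do not fail, R = 1 - n_f/n. A minimal Python sketch follows; counting a violated invariant as a failed run is our reading of how the paper uses invariants, not code from the paper:

      def nelson_reliability(runs):
          """Nelson model: R = 1 - n_f / n, with n_f failures in n runs.
          runs: iterable of booleans, True when the run failed."""
          runs = list(runs)
          return 1.0 - sum(runs) / len(runs)

      # Hypothetical invariant-based failure data: a run counts as a failure
      # when it violates an inferred program invariant.
      violations = [False, False, True, False, False, False, False, True, False, False]
      print(nelson_reliability(violations))  # 0.8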

    • Exception Handling Approach for Floating-Point Mathematical Functions

      2015, 26(12):3088-3103. DOI: 10.13328/j.cnki.jos.004814

      Abstract: Floating-point exceptions usually bring unpredictable errors to applications, and it is fairly difficult to design software that is free of exceptions, so an efficient exception handling approach is important. However, existing techniques focus on handling integer overflow errors and are not floating-point oriented. Considering that floating-point calculation reduces integer overflow errors, this study proposes a floating-point-oriented exception handling approach for mathematical functions written in assembly language. It first maps the various exceptions onto 64-bit floating-point numbers, and then divides the handling process into three stages according to the functions' kernel computations: input parameter detection, which handles the INV exception; specific code detection, which handles the DZE and INF exceptions; and output parameter detection, which handles the FPF and DNO exceptions. The paper also presents a theoretical proof of the validity of this staging technique. More than 600 floating-point functions are extracted from the mathematical function library Mlib to test the performance on different systems. The evaluation shows that the proposed technique decreases the proportion of functions raising floating-point exceptions from 90% to 0%, demonstrating its high efficiency.
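
      A minimal Python sketch of the three detection stages wrapped around a kernel computation. The paper instruments assembly-level functions; the exception labels and the denormal threshold below are illustrative stand-ins:

      import math

      MIN_NORMAL = 2.2250738585072014e-308  # smallest positive normal double

      def guarded_eval(kernel, x, y):
          # Stage 1: input parameter detection (INV).
          if math.isnan(x) or math.isnan(y):
              return "INV"
          # Stage 2: specific code detection (DZE/INF) before the kernel runs.
          if y == 0.0:
              return "DZE"
          if math.isinf(x) or math.isinf(y):
              return "INF"
          # Kernel computation.
          r = kernel(x, y)
          # Stage 3: output parameter detection (FPF/DNO).
          if math.isinf(r):
              return "FPF"          # overflow
          if r != 0.0 and abs(r) < MIN_NORMAL:
              return "DNO"          # denormal result
          return r

      print(guarded_eval(lambda a, b: a / b, 1.0, 3.0))   # normal result
      print(guarded_eval(lambda a, b: a / b, 1.0, 0.0))   # 'DZE'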

    • Programming Method of Multi-Dimensional Array Types and High-Performance FFT Implementation

      2015, 26(12):3104-3116. DOI: 10.13328/j.cnki.jos.004801

      Abstract: Because a heterogeneous cluster relies on a heterogeneous storage system, data must be partitioned in a multi-dimensional manner when computing on it. Current cluster-level programming languages have no unified mechanism for representing the transmission and transposition of multi-dimensional arrays. This article describes a programming method based on multi-dimensional array types and the Parray language, which can represent complex multi-dimensional data transposition on heterogeneous clusters in a clear way. A large-scale three-dimensional FFT implementation on Tianhe-1A based on the array-type programming method and Parray is also introduced. The final code is very simple yet achieves good performance and scalability.
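
      The core pattern behind such large-scale FFTs is computing a 3-D FFT as batches of 1-D FFTs separated by array transpositions, which a cluster implementation maps onto inter-node re-distributions. A single-node numpy sketch of that decomposition (an illustration, not the paper's Parray code):

      import numpy as np

      def fft3d_by_stages(a):
          # 3-D FFT as three batches of 1-D FFTs separated by cyclic
          # transposes; distributed implementations realize each transpose
          # as an inter-node re-distribution of the array.
          a = np.fft.fft(a, axis=2)          # FFT along the contiguous axis
          a = np.transpose(a, (2, 0, 1)).copy()
          a = np.fft.fft(a, axis=2)          # second dimension, now contiguous
          a = np.transpose(a, (2, 0, 1)).copy()
          a = np.fft.fft(a, axis=2)          # third dimension
          return np.transpose(a, (2, 0, 1))  # cycle back to the original layout

      x = np.random.rand(4, 4, 4)
      print(np.allclose(fft3d_by_stages(x), np.fft.fftn(x)))  # True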

    • Reconstructive Algorithm Based on Extension Rule for Solving #SAT Incrementally

      2015, 26(12):3117-3129. DOI: 10.13328/j.cnki.jos.004827

      Abstract: The #SAT problem is important and widely used in the field of artificial intelligence. Through an in-depth study of the extension-rule-based model counting algorithm CER, this paper reconstructs the calculation formula used in CER and proves its correctness. The concepts of maximum term intersection and extended maximum term intersection are then put forward, and an incremental method for #SAT is proposed in which the extended maximum term intersections are computed preferentially based on the solution of the maximum term intersection. Pruning is performed on the extended maximum term intersections corresponding to generalized complementary base clause sets to avoid redundant calculation. A method is provided to decide whether a clause set is generalized complementary by creating a complementary table that records the complementarity of all pairs of clauses; using this table, the base clause set of an extended maximum term intersection can be decided incrementally. Based on this method, redundant calculation for deciding the complementarity of clauses and clause sets is largely avoided. Experimental results show that the presented RCER algorithm runs faster than the CER algorithm, especially on instances whose complementary factor is small.
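
      The counting formula behind extension-rule model counting can be illustrated with inclusion-exclusion: an assignment is a non-model iff it falsifies some clause, a clause C is falsified by 2^(n-|C|) assignments, and clause sets containing a complementary pair of literals contribute nothing. A brute-force Python sketch of this formula, exponential in the number of clauses and shown only to make the idea concrete (it is not the RCER algorithm):

      from itertools import combinations

      def count_models(clauses, n):
          """clauses: list of sets of non-zero ints, v / -v for variable v (1..n)."""
          total = 0
          for r in range(1, len(clauses) + 1):
              for subset in combinations(clauses, r):
                  union = set().union(*subset)
                  if any(-lit in union for lit in union):
                      continue                       # complementary: empty intersection
                  vars_used = {abs(lit) for lit in union}
                  total += (-1) ** (r + 1) * 2 ** (n - len(vars_used))
          return 2 ** n - total                      # models = all minus non-models

      # (x1 or x2) and (not x1 or x3) over 3 variables has 4 models.
      print(count_models([{1, 2}, {-1, 3}], 3))      # prints 4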

    • Subtopic Taxonomy-Aware Framework for Diversity Evaluation

      2015, 26(12):3130-3139. DOI: 10.13328/j.cnki.jos.004744

      Abstract: To evaluate search result diversification, which is supposed to satisfy the different needs behind the same query, a number of evaluation frameworks have been proposed. Although most of these frameworks take into account the probability distribution of the information needs (usually called subtopics) underlying a query topic, they usually do not consider subtopic taxonomy information. In this paper, a decay function is first introduced to capture subtopic taxonomy information. Based on the decay function, a novel framework called the subtopic taxonomy-aware (STA) framework is then proposed to define the structure that taxonomy-aware diversity evaluation metrics should have. The decay functions used for informational and navigational subtopics are also discussed. Experiments on TREC and NTCIR test collections show the effectiveness of the proposed method.
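
      As a rough illustration of a decay-based diversity metric, the sketch below gives each repeated hit on a subtopic a geometrically decayed gain, with a steeper decay base for navigational subtopics, where one good result largely satisfies the need. The exact STA formulas differ; the decay bases here are assumed placeholders:

      def diversity_score(ranking, subtopics, depth=10):
          """ranking: list of sets of subtopic ids covered by each result.
          subtopics: dict id -> (probability, kind)."""
          seen = {s: 0 for s in subtopics}
          score = 0.0
          for rank, covered in enumerate(ranking[:depth], start=1):
              for s in covered & seen.keys():
                  prob, kind = subtopics[s]
                  base = 0.1 if kind == "navigational" else 0.5  # assumed bases
                  score += prob * (base ** seen[s]) / rank       # decayed, rank-discounted gain
                  seen[s] += 1
          return score

      subtopics = {"s1": (0.6, "informational"), "s2": (0.4, "navigational")}
      print(diversity_score([{"s1"}, {"s1", "s2"}, {"s2"}], subtopics))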

    • Probabilistic Max Restricted Path Consistency

      2015, 26(12):3140-3150. DOI: 10.13328/j.cnki.jos.004815

      Abstract: The constraint satisfaction problem is widely applied in many areas of artificial intelligence. This study investigates max restricted path consistency (maxRPC), which is used to solve constraint satisfaction problems. maxRPC performs a number of useless operations when searching for PC-witnesses, and its efficiency can be improved by identifying and avoiding such operations. The paper first derives the probability that, on a constraint, a PC-witness exists for two consistent values on a path. Based on this probability, it presents probabilistic max restricted path consistency (PmaxRPC) and integrates it into backtracking search for solving constraint satisfaction problems. Experimental results show that PmaxRPC avoids some of the useless PC-witness searches and is more efficient than maxRPC when used in backtracking search. On some benchmark instances, PmaxRPC outperforms both maxRPC and arc consistency, the most popular local consistency used in search.
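
      A schematic sketch of the probability-gated idea as we read it: skip the explicit PC-witness search when a witness is very likely to exist anyway. The probability estimate below is a placeholder, not the paper's derivation, and in practice it would vary per constraint and path:

      def pc_witness_likely(domain_size, tightness):
          """Placeholder estimate: chance that at least one of `domain_size`
          candidate values is compatible with both edges of the path, if each
          pair is allowed independently with probability (1 - tightness)."""
          return 1.0 - (1.0 - (1.0 - tightness) ** 2) ** domain_size

      def revise_with_gate(values, search_pc_witness, domain_size, tightness,
                           threshold=0.999):
          """Prune a value only after an explicit PC-witness search, but skip
          that search entirely when a witness is very likely to exist."""
          pruned = []
          for v in values:
              if pc_witness_likely(domain_size, tightness) >= threshold:
                  continue            # gate: assume a witness exists, skip search
              if not search_pc_witness(v):
                  pruned.append(v)    # no witness: v is not maxRPC-consistent
          return pruned

      # Toy usage with a stub witness search (hypothetical):
      print(revise_with_gate([1, 2, 3], lambda v: v != 2, domain_size=8, tightness=0.5))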

    • Cross-Lingual Tense Tagging Based on Tree Conditional Random Fields

      2015, 26(12):3151-3161. DOI: 10.13328/j.cnki.jos.004816

      Abstract: This paper proposes the concept of a tense tree and a method for constructing it, so that the problem of Chinese-English tense conversion is transformed into the problem of tagging a tense tree. Further, a tree-CRF is used to tag the nodes of the untagged tense tree with English tenses, with feature-function templates designed to suit the needs of model inference. Experimental results show that tree-based tense tagging performs much better than linear-chain tense tagging, and that tense trees can better express the tense dependencies between clauses.

    • Construction of Concept Lattices Under Multi-Attribute Decrement

      2015, 26(12):3162-3173. DOI: 10.13328/j.cnki.jos.004818

      Abstract: Incremental algorithms are important methods for constructing concept lattices, but previous incremental algorithms handle only the addition of objects or attributes, while practical problems also require concept lattices from which some attributes are removed. An incremental algorithm for removing a single attribute was studied in 2013, but since it applies to only one attribute at a time, it must be run repeatedly when multiple attributes are removed. This paper studies an incremental algorithm that removes multiple attributes at once. The algorithm has the same time complexity as the single-attribute algorithm, but whereas the single-attribute algorithm must be executed many times to remove multiple attributes, the presented algorithm needs only a single execution.

    • Identity-Based Hybrid Signcryption Scheme Using ECC

      2015, 26(12):3174-3182. DOI: 10.13328/j.cnki.jos.004819

      Abstract: Identity-based hybrid signcryption can efficiently encapsulate a symmetric key and securely transmit data. Aiming at the high computational complexity of existing identity-based hybrid signcryption schemes, this article combines identity-based hybrid signcryption with bilinear maps over elliptic curve cryptography (ECC) and constructs a novel identity-based hybrid signcryption (IBHS) scheme using ECC. In the random oracle model, the paper proves that the constructed scheme satisfies confidentiality under the co-bilinear Diffie-Hellman assumption and unforgeability under the co-computational Diffie-Hellman assumption. Since the scheme has low communication cost and high computational efficiency, it is more practical.
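
      Hybrid signcryption schemes follow a key-encapsulation-plus-data-encryption (KEM/DEM) structure: encapsulate a fresh symmetric key, then encrypt and authenticate the payload under it. The sketch below shows only that structure, with standard-library stand-ins (a toy hash-chain stream cipher and HMAC in place of the paper's ECC signcryption primitives); it is not the IBHS scheme itself:

      import hashlib, hmac, os

      def xor_stream(key, data):
          """Toy stream cipher from a hash chain -- placeholder for the DEM."""
          stream, counter = b"", 0
          while len(stream) < len(data):
              stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
              counter += 1
          return bytes(a ^ b for a, b in zip(data, stream))

      def signcrypt(shared_secret, message):
          """KEM part: derive a fresh symmetric key bound to the shared secret;
          DEM part: encrypt and authenticate. HMAC stands in for the signature."""
          nonce = os.urandom(16)
          k = hashlib.sha256(shared_secret + nonce).digest()   # encapsulated key
          ct = xor_stream(k, message)
          tag = hmac.new(k, nonce + ct, hashlib.sha256).digest()
          return nonce, ct, tag

      def unsigncrypt(shared_secret, nonce, ct, tag):
          k = hashlib.sha256(shared_secret + nonce).digest()
          if not hmac.compare_digest(tag, hmac.new(k, nonce + ct, hashlib.sha256).digest()):
              raise ValueError("authentication failed")
          return xor_stream(k, ct)

      secret = os.urandom(32)  # in the real scheme this comes from ECC pairings
      n, c, t = signcrypt(secret, b"hello")
      print(unsigncrypt(secret, n, c, t))  # b'hello'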

    • Attribute-Based Authenticated Key Agreement Protocol Secure in ABeCK Model

      2015, 26(12):3183-3195. DOI: 10.13328/j.cnki.jos.004825

      Abstract: Attribute-based authenticated key agreement (ABAKA) protocols are used to establish a session key among parties in communication environments where the identity information of individuals must be protected. The attribute-based extended Canetti-Krawczyk (ABeCK) model is a stronger model suitable for the security proofs of ABAKA protocols. This paper presents the gap computational parallel bilinear Diffie-Hellman exponent (GCPBDHE) assumption, based on the gap computational Diffie-Hellman (GCDH) assumption. Building on the Waters scheme, it constructs an ABAKA protocol and proves its security in the ABeCK model under the GCPBDHE and CDH assumptions. Compared with existing ABAKA protocols, the new protocol has lower communication cost.

    • Efficient Revocable Certificateless Signature Against Signing Key Exposure

      2015, 26(12):3196-3203. DOI: 10.13328/j.cnki.jos.004826

      Abstract: An essential problem in any public key cryptosystem is how to revoke a user when the user's private key is compromised or the user's permission is withdrawn. There are many effective solutions in both traditional public key cryptosystems (TPKC) and identity-based cryptosystems (IBC), but in certificateless public key cryptosystems the revocation problem remains to be solved efficiently. Certificateless cryptosystems are a good substitute for TPKC and IBC, since they require no certificates and avoid key escrow at the cost of only a little extra computation, so designing efficient revocation solutions in the certificateless setting is very necessary. This paper gives a revocable certificateless signature scheme in which the system periodically generates new time keys for all non-revoked users over public channels. Compared with the existing Al-Riyami and Paterson revocation mechanism, the scheme is much more efficient. Furthermore, the new scheme resists signing key exposure with a very short signing key, and it is provably UF-CMA secure under the CDH assumption.

    • Efficient and Provably Secure Certificateless Aggregate Signature Scheme

      2015, 26(12):3204-3214. DOI: 10.13328/j.cnki.jos.004830

      Abstract: Almost all existing aggregate signature schemes are based on bilinear pairing, which leads to high computational cost. To solve this problem under different network environments, two new certificateless aggregate signature schemes without bilinear pairing, CLAS-Ⅰ and CLAS-Ⅱ, are proposed in this paper. The proposed schemes are provably unforgeable in the random oracle model under the discrete logarithm assumption and are publicly verifiable. Moreover, compared with other existing aggregate signature schemes, the proposals are computationally more efficient. CLAS-Ⅰ suits high-bandwidth network environments because its signatures are long, whereas CLAS-Ⅱ suits narrow-bandwidth environments since it produces the shortest certificateless aggregate signatures and the length of its signatures does not depend on the number of users.

    • Fast Privacy Preserving RFID Authentication Protocol

      2015, 26(12):3215-3222. DOI: 10.13328/j.cnki.jos.004832

      Abstract: Privacy-preserving RFID authentication protocols based on symmetric-key cryptography have become increasingly popular. However, current protocols are inefficient because the server must exhaustively search over all RFID tags in the system. In this paper, a new method is proposed for constructing highly efficient privacy-preserving RFID protocols. Based on pseudorandom functions with a single output bit, the protocol significantly speeds up the tag searching process. Moreover, it satisfies the anonymity, authenticity, and efficiency requirements of large-scale systems.
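
      The speedup from single-output-bit PRFs can be pictured as bit-by-bit candidate pruning: each response bit rules out on average half of the remaining candidate tags, so total work is roughly 2N one-bit PRF evaluations instead of N full MAC checks. A sketch with SHA-256 standing in for the PRF (the paper's construction may differ):

      import hashlib, os

      def prf_bit(key, challenge, i):
          """Single-output-bit PRF, instantiated with SHA-256 for illustration."""
          return hashlib.sha256(key + challenge + bytes([i])).digest()[0] & 1

      def identify_tag(tag_key, keys, challenge, rounds=32):
          """Server-side search: each bit sent by the tag halves (on average)
          the set of candidate keys, shrinking the search geometrically."""
          candidates = list(keys)
          for i in range(rounds):
              bit = prf_bit(tag_key, challenge, i)   # bit sent by the tag
              candidates = [k for k in candidates
                            if prf_bit(k, challenge, i) == bit]
              if len(candidates) == 1:
                  return candidates[0]
          return candidates  # still ambiguous: more rounds needed

      keys = [os.urandom(16) for _ in range(1000)]
      print(identify_tag(keys[42], keys, os.urandom(16)) == keys[42])  # True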

    • Rate-Monotonic Optimal Design Based on Tree-Like Linear Programming Search

      2015, 26(12):3223-3241. DOI: 10.13328/j.cnki.jos.004853

      Abstract: Over the last four decades, a critical problem in real-time systems has been to improve the efficiency of the decision algorithm for rate-monotonic (RM) scheduling. Researchers have recently extended the decision problem to a generalized optimal design problem: how to adjust each task's execution time within its corresponding interval such that (1) the system is schedulable and (2) certain system performance measures (e.g., CPU utilization) are optimized. All existing methods solve this problem by formulating it as a generalized constrained optimization problem (GCOP); however, these methods run very slowly and cannot be applied to systems with large numbers of tasks. This paper proposes a new method, called tree-like linear programming search (LPS). First, the problem is transformed into a GCOP. Next, the GCOP is partitioned into several linear programming sub-problems. Then a linear programming search tree is constructed, and the linear programming nodes are solved by depth-first search to obtain the optimal solution. The experimental results illustrate that the new method saves 20%~70% of the runtime compared with existing methods. This work also relates to research on satisfiability modulo theories (SMT) and is expected to improve the efficiency of solving SMT problems.
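
      For context, the classical exact decision test for RM scheduling is response-time analysis, a fixed-point iteration per task; the LPS method described above optimizes execution times on top of such schedulability constraints. A standard sketch of the test (not the paper's algorithm):

      import math

      def rm_schedulable(tasks):
          """Exact RM decision via response-time analysis.
          tasks: list of (C, T) pairs, execution time and period; under RM,
          shorter period means higher priority."""
          tasks = sorted(tasks, key=lambda t: t[1])
          for i, (c_i, t_i) in enumerate(tasks):
              r = c_i
              while True:
                  # Interference from all higher-priority tasks within window r.
                  r_next = c_i + sum(math.ceil(r / t_j) * c_j
                                     for c_j, t_j in tasks[:i])
                  if r_next > t_i:
                      return False      # deadline (= period) missed
                  if r_next == r:
                      break             # fixed point: response time found
                  r = r_next
          return True

      # Utilization 0.81 exceeds the Liu-Layland bound for 3 tasks, yet the
      # exact test still accepts the set.
      print(rm_schedulable([(1, 4), (2, 6), (3, 13)]))  # True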
