• Volume 28, Issue 7, 2017 Table of Contents
    • Theory and Property of Γ-k Truth Degree on Axiomatic Extension of Goguen Propositional Logic System

      2017, 28(7):1629-1639. DOI: 10.13328/j.cnki.jos.005110


      Abstract: This paper first studies the axiomatic extension ∏~,Δ of the n-valued Goguen propositional logic system. Using the induced function, the Γ-k truth degree of a formula relative to a locally finite theory Γ under the k-th conjunction is defined. The MP rule, the HS rule and some related properties are also discussed. Finally, the Γ-k similarity degree and the Γ-k pseudo-metric between two formulas in ∏~,Δ are defined, and several useful properties of both, relative to a locally finite theory Γ under the k-th conjunction, are obtained.
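
      For orientation, the classical truth degree of quantitative logic, which the Γ-k notion generalizes, can be written as follows (a standard definition from n-valued quantitative logic; the Γ-restricted, k-conjunction variant is developed in the paper itself):

```latex
% Truth degree of a formula \varphi(p_1,\dots,p_m) in n-valued logic:
% the average of the induced function \bar{\varphi} over all valuations.
\tau(\varphi) \;=\; \frac{1}{n^{m}} \sum_{v \in \{0,\frac{1}{n-1},\dots,1\}^{m}} \bar{\varphi}(v)
% Similarity degree and pseudo-metric are then derived in the usual way:
\xi(\varphi,\psi) \;=\; \tau\big((\varphi\rightarrow\psi)\wedge(\psi\rightarrow\varphi)\big),
\qquad
\rho(\varphi,\psi) \;=\; 1-\xi(\varphi,\psi)
```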

    • Heuristic Search Algorithm for the Rectangular Fixed-Size Guillotine Bin Packing Problem

      2017, 28(7):1640-1654. DOI: 10.13328/j.cnki.jos.005100


      Abstract: This paper proposes a novel heuristic search algorithm, called BSDBF (big-item small-item divide-and-conquer best-fit), for the two-dimensional rectangular fixed-size guillotine bin packing problem. First, based on grouping rules, the algorithm implements a big-item/small-item divide-and-conquer strategy and an efficient group recommendation scheme, which are the key points for improving the grouping strategy. Then, the best-fit group is selected for recursive packing, and a packing solution is constructed greedily for all bins. Finally, an initial solution is obtained, and a post-processing algorithm based on item splitting is used to improve its quality. Because BSDBF contains no random factors, its solutions are reproducible, a critical characteristic that distinguishes it from other algorithms. Computational results on many benchmark problems show that BSDBF outperforms other reported algorithms.
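
      As a rough illustration of the best-fit placement step (not the authors' implementation; the scoring rule and the guillotine split below are simplified assumptions, and grouping and post-processing are omitted):

```python
# Minimal sketch of best-fit guillotine placement: each item goes into the
# free rectangle that leaves the least wasted area, and the remainder is
# split by a guillotine cut into two disjoint free rectangles.
def best_fit_pack(items, bin_w, bin_h):
    free = [(0, 0, bin_w, bin_h)]          # (x, y, w, h) free rectangles
    placed = []
    for w, h in sorted(items, key=lambda it: it[0] * it[1], reverse=True):
        fits = [r for r in free if w <= r[2] and h <= r[3]]
        if not fits:
            continue                        # a full algorithm would open a new bin
        x, y, fw, fh = min(fits, key=lambda r: r[2] * r[3] - w * h)  # best fit
        free.remove((x, y, fw, fh))
        placed.append((x, y, w, h))
        free.append((x + w, y, fw - w, fh))  # guillotine split, right part
        free.append((x, y + h, w, fh - h))   # guillotine split, top part
    return placed

print(best_fit_pack([(4, 3), (2, 2), (5, 1)], bin_w=6, bin_h=5))
```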

    • ParaC: A Domain Programming Framework of Image Processing on GPU Accelerators

      2017, 28(7):1655-1675. DOI: 10.13328/j.cnki.jos.005241


      Abstract: Image processing algorithms commonly use GPU accelerators as the main speedup solution. However, the performance gap between a naïve implementation and a highly optimized one on the same GPU is frequently an order of magnitude or more. The GPGPU platform has complicated hardware characteristics, such as large numbers of multi-dimensional, multi-level threads and a deep memory hierarchy whose levels differ in capacity, bandwidth, latency and access rights. Additionally, image processing algorithms involve complex operations, border handling rules and memory access patterns. Therefore, the parallel execution model, the organization of threads and the mapping of parallel tasks onto the device not only strongly affect scalability, scheduling, communication and synchronization, but also determine memory access efficiency. In short, optimizing algorithms on GPGPU platforms is difficult, complicated and inefficient. This paper proposes a domain-specific language, ParaC, which provides high-level program semantics through new language extensions. It captures the application's software characteristics, such as operation information, data reuse among parallel tasks and memory access patterns, and combines them with hardware platform information and a domain-knowledge-driven optimization mechanism to generate high-performance GPGPU code automatically. A source-to-source compiler then outputs standard OpenCL programs. Experimental results on test cases show that the optimized version automatically generated by ParaC achieves up to a 3.22x speedup over the hand-tuned version in the best case, while its line count is only 1.2% to 39.68% of the latter's.
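
      ParaC's syntax is not reproduced in the abstract; purely as a hypothetical illustration of the source-to-source idea, a generator might lower a high-level stencil description to an OpenCL kernel string (the description format and emitted code below are invented, not ParaC's actual output):

```python
# Hypothetical sketch of source-to-source generation for a 3x3 average stencil.
STENCIL = {"name": "blur3x3", "radius": 1}

def emit_opencl(d):
    taps = (2 * d["radius"] + 1) ** 2
    body = []
    for dy in range(-d["radius"], d["radius"] + 1):
        for dx in range(-d["radius"], d["radius"] + 1):
            # clamp() handles image borders, one of the "border rules" the
            # abstract mentions; a real compiler would specialize this.
            body.append(f"acc += src[clamp(y+({dy}),0,h-1)*w + clamp(x+({dx}),0,w-1)];")
    return (
        f"__kernel void {d['name']}(__global const float* src, __global float* dst,\n"
        "                          const int w, const int h) {\n"
        "  int x = get_global_id(0), y = get_global_id(1);\n"
        "  if (x >= w || y >= h) return;\n"
        "  float acc = 0.0f;\n  "
        + "\n  ".join(body) + "\n"
        f"  dst[y*w + x] = acc / {taps}.0f;\n}}\n"
    )

print(emit_opencl(STENCIL))
```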

    • Verification Based Decision-Making for Self-Adaptive Systems: A Model-Driven Approach

      2017, 28(7):1676-1697. DOI: 10.13328/j.cnki.jos.005257


      Abstract: Self-adaptive systems (SASs) are required to adjust their behaviors in response to changes in their operational contexts and in themselves. To implement such automatic adjustment, SASs must be endowed with the ability to monitor changes in contexts and in themselves, analyze changes in requirement satisfaction, and reason about adaptation decisions. Online decision-making needs to assure functional requirements as well as certain non-functional requirements such as reliability and performance. This paper proposes a verification-based optimal decision-making approach for SASs that assures the satisfaction of non-functional requirements. The approach models adaptation mechanisms by identifying adjustable goals and maps goal models to corresponding behavior models expressed as labeled transition systems. Taking reliability requirements as an example, it uses tagged goal models to specify the reliability of tasks. The system behavior model and reliability specifications are then integrated into discrete-time Markov chains with variable states, and adaptation candidates are characterized by combinations of different variable states. Via online verification of the related requirements, the system derives the optimal configuration decision for a given context. The feasibility and effectiveness of the approach are illustrated with a mobile information system.
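
      A minimal sketch of the kind of check such an online verifier performs: computing the probability of reaching a success state in a small discrete-time Markov chain via the standard linear fixed point (the chain and its numbers are invented for illustration):

```python
import numpy as np

# Reliability as DTMC reachability: probability of reaching the absorbing
# "done" state rather than the absorbing "fail" state.
# States: 0=start, 1=work, 2=done, 3=fail.
P = np.array([[0.0, 1.0, 0.0,  0.0],
              [0.1, 0.0, 0.85, 0.05],
              [0.0, 0.0, 1.0,  0.0],
              [0.0, 0.0, 0.0,  1.0]])

transient, target = [0, 1], 2
A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [target])].ravel()
x = np.linalg.solve(A, b)          # x[i] = Pr(reach "done" | start in state i)
print(f"reliability from start: {x[0]:.4f}")  # compare against the requirement
```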

    • Parallel Code Generation from Synchronous Specification

      2017, 28(7):1698-1712. DOI: 10.13328/j.cnki.jos.005056


      Abstract: As functional and non-functional requirements on safety-critical real-time systems keep increasing, adopting multi-core technology in these systems has become a trend. How to guarantee credibility and reliability on multi-core platforms, however, is a key problem in both academia and industry. While many theoretical and applied results exist for single-core platforms, a number of scientific problems remain open for multi-core platforms. The synchronous language SIGNAL, well suited to describing concurrent behaviors, is a formalism widely used in the functional design of safety-critical real-time systems. The SIGNAL compiler supports generating simulation code from a synchronous specification to verify and analyze the properties of the system model, but existing studies pay little attention to generating multi-platform parallel simulation code from SIGNAL specifications. This paper proposes a methodology for automatically generating parallel code from SIGNAL specifications. First, the equation dependency graph (EDG) is defined and the specification is translated into it to analyze global data dependency relations. Then the EDG is partitioned to expose the parallelism of the specification. Finally, together with the clock relations, the parallel tasks are mapped onto OpenMP structures. A case study illustrates the proposed methodology.
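
      A sketch of the partitioning idea (not the paper's algorithm): equations whose dependencies are already satisfied at the same topological level can run in parallel, e.g. as one OpenMP parallel region per level. The graph below is invented for illustration:

```python
from collections import defaultdict

# Group equations of a dependency graph into topological levels; equations
# within a level are mutually independent and thus parallelizable.
deps = {"e1": [], "e2": [], "e3": ["e1", "e2"], "e4": ["e1"], "e5": ["e3", "e4"]}

def parallel_levels(deps):
    level = {}
    def depth(eq):
        if eq not in level:
            level[eq] = 1 + max((depth(d) for d in deps[eq]), default=-1)
        return level[eq]
    groups = defaultdict(list)
    for eq in deps:
        groups[depth(eq)].append(eq)
    return [groups[k] for k in sorted(groups)]

print(parallel_levels(deps))   # [['e1', 'e2'], ['e3', 'e4'], ['e5']]
```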

    • Evidence-Based Software Process Trustworthiness Model and Evaluation Method

      2017, 28(7):1713-1731. DOI: 10.13328/j.cnki.jos.005102


      Abstract: Today's software is required to be more trustworthy due to its ever more important role in society. However, there is still a lack of systematic and objective criteria for evaluating software trustworthiness. Existing research focuses on how to obtain evidence, under the assumption that a system is more trustworthy if the evidence comes from third-party tests or from the feedback of past users. Although such studies contribute to the objectivity of trustworthiness, the process-oriented nature of system trust is not well addressed. In this regard, the sufficiency and necessity of evidence related to the software process, as well as the coverage ratio of the necessary development process, are critical. This paper attempts to establish confidence in a software product from the trustworthiness of its development process. Based on the software development process, software trustworthiness is determined by three aspects: process entities, behaviors and products. A software process trustworthiness model is proposed that includes 37 trustworthiness principles, 182 items of process entity and behavior evidence, and 108 items of artifact evidence. Based on this model, an evaluation method for process trustworthiness is also developed.
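
      Purely as an illustration of the coverage-ratio notion mentioned above (the weights and aggregation rule below are assumptions, not the paper's evaluation method):

```python
# Coverage-style score over the two evidence checklists of the model.
evidence = {
    "process": {"required": 182, "satisfied": 150},
    "artifact": {"required": 108, "satisfied": 95},
}

def coverage(ev):
    return {k: v["satisfied"] / v["required"] for k, v in ev.items()}

scores = coverage(evidence)
overall = sum(scores.values()) / len(scores)   # naive unweighted average
print(scores, f"overall={overall:.3f}")
```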

    • Kernel Code Reuse Attack Detection Technique for Linux

      2017, 28(7):1732-1745. DOI: 10.13328/j.cnki.jos.005058


      Abstract: Recently, code reuse attacks and corresponding defenses have been a hot area in security research. Kernel-level code reuse attacks use kernel code to bypass traditional defensive mechanisms, yet existing detection and defense methods mainly focus on user-level code reuse attacks and ignore those at the kernel level. To detect kernel-level code reuse attacks effectively, a detection method based on fine-grained control-flow integrity (CFI) is proposed. First, CFI constraint rules are constructed according to the principles of code reuse attacks and the control flows of normal programs. Then, a detection model based on a state machine and the CFI constraint rules is developed. Next, CFI label-checking instructions are instrumented via a GCC plugin. Furthermore, the CFI constraint rules are verified in a hypervisor, strengthening the security of the method. Experimental results show that the method can effectively detect kernel-level code reuse attacks, and performance evaluations indicate that the performance penalty it induces is less than 60%.
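
      A toy model of the label-based checking idea (labels and allowed edges below are invented; the real checks run inline in instrumented kernel code, with rule verification in the hypervisor):

```python
# An indirect control transfer is allowed only if (source label, target label)
# is a whitelisted edge of the CFI policy; anything else signals a gadget chain.
ALLOWED_EDGES = {("sys_read", "vfs_read"), ("vfs_read", "ext4_read"),
                 ("sys_write", "vfs_write")}

def check_transfer(trace):
    state = trace[0]
    for target in trace[1:]:
        if (state, target) not in ALLOWED_EDGES:
            return f"CFI violation: {state} -> {target}"
        state = target
    return "trace conforms to CFI policy"

print(check_transfer(["sys_read", "vfs_read", "ext4_read"]))
print(check_transfer(["sys_read", "vfs_write"]))   # gadget-style deviation
```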

    • Characterizing Uncertainty of Software Reliability Growth Model

      2017, 28(7):1746-1758. DOI: 10.13328/j.cnki.jos.005108


      Abstract: More and more software systems are developed to provide great flexibility to customers, but this also introduces great uncertainty into software development. The fault detection rate (FDR) within the fault detection process fluctuates irregularly and is usually modeled as white noise. White noise is Markovian, but non-Markovian behavior is the rule and Markovian behavior the exception; in many cases the white-noise idealization is insufficient, as real fluctuations are correlated (non-Markovian) noise. This study proposes a novel model to quantify the uncertainties associated with the debugging process. Based on the non-homogeneous Poisson process (NHPP) model of the software fault detection process, the environmental uncertainties are treated collectively as a noise of arbitrary distribution and correlation structure. In a number of comparisons with existing methods, the new model fits observation data more closely. In addition to the mean number of detected faults, this work provides formulas for its cumulative distribution function (CDF) and probability density function (PDF), thus capturing the full statistical information of the debugging process.
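
      For reference, the baseline NHPP model that the proposed model perturbs with correlated noise (standard definitions; the noise term's distribution and correlation structure are the paper's contribution and are not reproduced here):

```latex
% NHPP: the number of faults detected by time t, N(t), satisfies
P\{N(t)=k\} \;=\; \frac{m(t)^{k}}{k!}\, e^{-m(t)}, \qquad
m(t) \;=\; \int_{0}^{t} \lambda(s)\, \mathrm{d}s,
% where \lambda(t) is the fault detection rate (FDR). Schematically, the
% deterministic rate is replaced by a randomly perturbed one,
\tilde{\lambda}(t) \;=\; \lambda(t) + \varepsilon(t),
% with \varepsilon(t) a correlated (non-Markovian) noise process.
```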

    • Greedy Default Logic

      2017, 28(7):1759-1772. DOI: 10.13328/j.cnki.jos.005066


      Abstract: A greedy default logic is proposed in this paper, in which pseudo-extensions are defined with the aim of keeping as much information from the defaults as possible. The paper proves that a GD-extension of a default theory (T,Δ) in the greedy default logic may fail to be an extension of the default theory, and vice versa. An extension of the default theory may be a subset of a pseudo-extension of the default theory (T,Δ) in the greedy default logic. Hence, default logic, the greedy default logic and the greedier default logic are different logics.
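
      For readers unfamiliar with the notation, a Reiter-style default and the classic fixed-point condition for an extension look as follows (standard default-logic background; the modified condition defining GD-extensions is given in the paper itself):

```latex
% A default rule: if \alpha is provable and each \beta_i is consistent with
% the current beliefs, conclude \gamma.
\delta \;=\; \frac{\alpha \,:\, \beta_1,\dots,\beta_n}{\gamma}
% E is an extension of (T,\Delta) iff E = \Gamma(E), where \Gamma(S) is the
% least deductively closed set containing T that is closed under every default
% whose justifications are consistent with S (Reiter's fixed point).
```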

    • Evolutionary Community Detection in Dynamic Networks

      2017, 28(7):1773-1789. DOI: 10.13328/j.cnki.jos.005114


      Abstract: The number of communities and temporal smoothness have always been the main limitations in evolutionary community detection for dynamic networks. In this paper, a new multi-objective approach based on the label propagation algorithm (LDMGA) is proposed. Employing the idea of multi-objective genetic algorithms, evolutionary clustering is transformed into a multi-objective optimization problem, which not only improves clustering quality but also minimizes clustering drift from one time step to the next. Population initialization based on label propagation improves the clustering quality of the initial individuals, and applying label propagation in the mutation step enhances both clustering quality and convergence rate. At the same time, the combination of the multi-objective genetic algorithm and label propagation makes the algorithm more scalable, with running time increasing linearly in the number of nodes or edges. Experiments on synthetic and real datasets show that the proposed algorithm consistently provides good clustering quality and scalability.
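
      A minimal sketch of the basic label propagation step that LDMGA uses for initialization and mutation (simplified; the paper embeds this inside a multi-objective genetic algorithm, which is not shown):

```python
import random
from collections import Counter

# Each node repeatedly adopts the most frequent label among its neighbors;
# dense groups converge to a shared label, i.e. a community.
def label_propagation(adj, rounds=5, seed=0):
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # start: every node is its own community
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            labels[v] = rng.choice([l for l, c in counts.items() if c == best])
    return labels

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))   # the two triangles typically get two labels
```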

    • Multiple Feedback Control Model and Algorithm for Energy Efficient Virtual Network Embedding

      2017, 28(7):1790-1814. DOI: 10.13328/j.cnki.jos.005062


      Abstract: Network virtualization will be an enabler for intelligent, energy-aware network deployment. Since virtual network requests arrive dynamically and stay in the network for arbitrary periods before departing, substrate resources are allocated and recycled dynamically, which influences the range and number of active resources. The current network mapping not only determines the range and number of active resources, but also influences subsequent virtual network mappings. To address these problems, this paper applies feedback control theory to investigate the relationships among virtual network embeddings and the impact of the current mapping on the active resources of the substrate network. A novel multiple-feedback control model and an algorithm are proposed for energy-efficient virtual network embedding. In this model, a main feedback controller manages the number of hibernating links in the substrate network, eliminating the deviation between the numbers of actively and passively hibernating links and thereby removing interference with the minimum set of active substrate resources. In addition, a local feedback controller for mapping virtual nodes and links is designed to reduce the number of actively hibernating substrate links, so that the minimum set of substrate resources for each virtual network can be found. With this model, a smaller set of substrate nodes and links can be found for virtual network requests, which increases the number of passively hibernating nodes and links and decreases the energy consumption of the substrate network. Simulation results demonstrate the effectiveness of the proposed algorithm. The proposed model and method cut down the number of hibernating nodes and links of the substrate network and significantly reduce its energy consumption in the non-saturated state. Moreover, they improve the acceptance ratio and revenue of virtual networks in saturated environments with cyclical traffic fluctuations.
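
      A schematic of the main feedback loop only (the gains, dynamics and numbers below are illustrative assumptions; the paper's controller and its interaction with the local loop are more elaborate):

```python
# A proportional controller drives the deviation between actively and
# passively hibernating links toward zero by re-mapping links each step.
def control_hibernation(active_hib, passive_hib, kp=0.3, steps=6):
    for t in range(steps):
        error = active_hib - passive_hib        # deviation to be eliminated
        adjustment = kp * error                 # proportional control action
        active_hib -= adjustment
        passive_hib += adjustment
        print(f"step {t}: active={active_hib:.1f} passive={passive_hib:.1f}")

control_hibernation(active_hib=40.0, passive_hib=10.0)  # error decays geometrically
```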

    • Algorithm for Large Scale IP Network Multiple Link Congestion Inference

      2017, 28(7):1815-1834. DOI: 10.13328/j.cnki.jos.005111


      Abstract: Existing congested-link inference algorithms only infer the set of shared links using smallest-set-cover methods; when a congested path contains more than one congested link, inference performance degrades noticeably. To address this problem, a Lagrangian relaxation sub-gradient algorithm based on Bayesian maximum a posteriori estimation (LRSBMAP) is proposed. Considering the impact of link coverage on inference performance, as well as the cost of probe deployment and additional end-to-end active measurements, the paper proposes a preliminary selection method for transceiver nodes based on optimally choosing a degree threshold value (DTV) parameter of the IP network. By introducing an optimization coefficient ρ, cost and link coverage can be balanced to ensure the performance of the inference algorithm. In addition, exploiting the sparsity of the coefficient matrix in the equations for the links' prior congestion probabilities, a preconditioned conjugate gradient method based on symmetric successive over-relaxation (PCG_SSOR) is proposed to obtain approximate unique solutions, avoiding solution failures in large-scale IP networks under multiple-link congestion. Experiments demonstrate that the proposed algorithms have high accuracy and robustness.
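
      For illustration, a standard preconditioned conjugate gradient skeleton (for brevity this uses a Jacobi diagonal preconditioner as a stand-in for the paper's SSOR variant; the iteration is identical, only the preconditioner M differs):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    M_inv = 1.0 / np.diag(A)                # Jacobi preconditioner (stand-in)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])      # small SPD test matrix
print(pcg(A, np.array([1.0, 2.0])))         # ~[0.0909, 0.6364]
```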

    • Copy-Move Forgery Detection for Pre-Printing Image Using Digital Halftoning Technology

      2017, 28(7):1835-1846. DOI: 10.13328/j.cnki.jos.005067


      Abstract: A pre-printing image is a digital print proof validated by the copy editor. This type of image is stored in binary screened form after being processed by a raster image processor (RIP), and once the digital print proof is finalized it has indisputable legal standing. Looking at the overall printing workflow, however, pre-printing images still face the risk of malicious tampering during storage and transmission. Most existing copy-move forgery detection methods rely on high-dimensional feature vectors and thus have high computational complexity, making them inapplicable to color-separated binary screened proofs. This paper proposes a new approach to copy-move forgery detection based on digital halftoning technology. The original image is binarized via digital halftoning in the CMYK color space, similarly to a raster image processor. The halftone image is then tiled into overlapping blocks, and local screen density features of the overlapping blocks in the CMYK channels are extracted for copy-move forgery detection. Experimental results demonstrate that the proposed method detects copy-move forgery regions precisely and is robust against rotation and small-scale zooming of the forged regions.
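
      A sketch of the block-matching idea (single channel and an exact-match rule for brevity; the paper works per CMYK channel with a more robust matching scheme):

```python
import numpy as np

# The "screen density" of a binary halftone block is simply its ink coverage
# (mean of ones). Blocks with (near-)equal densities are duplicate candidates.
def density_map(halftone, block=8, step=4):
    h, w = halftone.shape
    feats = {}
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            d = halftone[y:y + block, x:x + block].mean()
            feats.setdefault(round(float(d), 3), []).append((y, x))
    return {d: pos for d, pos in feats.items() if len(pos) > 1}  # candidates

rng = np.random.default_rng(0)
img = (rng.random((32, 32)) < 0.4).astype(np.uint8)
img[16:24, 16:24] = img[0:8, 0:8]           # simulate a copy-move forgery
print(density_map(img))                      # (0,0) and (16,16) should co-occur
```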

    • Image Classification Based on Bag of Visual Words Model with Triangle Constraint

      2017, 28(7):1847-1861. DOI: 10.13328/j.cnki.jos.005069


      Abstract: The bag-of-visual-words model is widely used in image classification and image retrieval. In the traditional bag-of-words model, the statistics of visual words ignore spatial information and object shape information, weakening the discriminative power of image features. This paper proposes an improved bag-of-words method that combines salient region extraction with a visual-word topological structure, so that it not only produces more representative visual words to a certain extent, but also resists the disturbance of complex background information and position changes. First, the salient regions of the training images are extracted and the bag-of-visual-words model is built on them. Second, to describe image characteristics more accurately and withstand changes in location and the influence of background information, a visual-word topological structure and Delaunay triangulation are employed to integrate global and local information. Simulation experiments comparing the method with the traditional bag-of-words model and other models demonstrate that the proposed method achieves higher classification accuracy.
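
      A sketch of the topological-structure idea (keypoints and word assignments below are random stand-ins; the paper's feature construction is richer than this pair histogram):

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate keypoint locations and count co-occurring visual-word pairs
# along Delaunay edges, yielding a spatial co-occurrence feature.
rng = np.random.default_rng(1)
points = rng.random((20, 2)) * 100           # keypoint coordinates
words = rng.integers(0, 5, size=20)          # visual-word id per keypoint

tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:                # each triangle contributes 3 edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

pair_hist = {}
for a, b in edges:
    key = tuple(sorted((int(words[a]), int(words[b]))))
    pair_hist[key] = pair_hist.get(key, 0) + 1
print(pair_hist)
```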

    • Enhanced Deep Automatic Image Annotation Based on Data Equalization

      2017, 28(7):1862-1880. DOI: 10.13328/j.cnki.jos.005112


      Abstract: Automatic image annotation is a challenging research problem involving large numbers of tags and diverse features. Since image annotation based on traditional shallow machine learning algorithms has low efficiency and is difficult to apply to complex classification tasks, this paper proposes an automatic image annotation algorithm based on a stacked auto-encoder (SAE) to improve both the efficiency and the effectiveness of annotation. Two types of strategies are proposed to address the main problem of unbalanced data in image annotation. For the annotation model itself, to improve the annotation of low-frequency tags, a balanced stacked auto-encoder (B-SAE) that strengthens training on low-frequency tags is proposed; based on it, a robust balanced stacked auto-encoder algorithm (RB-SAE) increases annotation stability through group-wise enhanced training in sub-B-SAE models. This strategy ensures that the model itself handles unbalanced data well. For the annotation process, taking the unknown image as the starting point, a local balanced dataset for the unknown image is constructed, and the high- or low-frequency attribute of the image is determined to select the annotation path: a local semantic propagation algorithm (SP) annotates low-frequency images and the RB-SAE algorithm annotates high-frequency images. Together these form the attribute discrimination annotation (ADA) framework, which improves overall annotation performance and ensures that the annotation process handles unbalanced data well. Experimental results on three public datasets show that the proposed model improves on previous models across many metrics.
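
      A sketch of the data-equalization step only (the exact grouping used by B-SAE/RB-SAE is more elaborate; this shows plain oversampling of rare tags so each tag contributes comparably to training):

```python
import random
from collections import Counter

def balance(samples, target_per_tag=None, seed=0):
    rng = random.Random(seed)
    by_tag = {}
    for feats, tag in samples:
        by_tag.setdefault(tag, []).append((feats, tag))
    quota = target_per_tag or max(len(v) for v in by_tag.values())
    balanced = []
    for tag, group in by_tag.items():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=quota - len(group)))  # oversample
    return balanced

data = [([0.1], "sky")] * 9 + [([0.9], "tiger")] * 1
print(Counter(tag for _, tag in balance(data)))  # Counter({'sky': 9, 'tiger': 9})
```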

    • Runtime Model Based Approach to Managing Hybrid Clouds

      2017, 28(7):1881-1897. DOI: 10.13328/j.cnki.jos.005113


      Abstract: With the development of cloud computing, cloud platforms of different types and purposes keep emerging. Hybrid clouds are needed to manage cross-domain computing and storage resources in a unified manner, in order to satisfy management requirements such as legacy system integration and dynamic resource scaling. However, differences in management interfaces and mechanisms between cloud platforms cause great difficulty and complexity in constructing hybrid clouds. This paper presents a runtime-model-based approach to managing hybrid clouds. First, the manageability of each cloud platform is abstracted as a runtime model based on its management interfaces. Second, a unified model of cloud software architecture is derived from the domain knowledge of current cloud platforms. Third, synchronization between the unified model and the clouds' runtime models is ensured through model transformation. All management logic can thus be carried out by executing programs on the unified model, which reduces the difficulty and complexity of hybrid cloud construction. An experiment on a real-world hybrid cloud consisting of CloudStack and Amazon EC2 demonstrates the feasibility, effectiveness and benefits of the new approach.
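
      A sketch of the unified-model idea as an adapter layer (class and method names below are hypothetical; real CloudStack and EC2 operations go through their respective management APIs, which are not reproduced here):

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    @abstractmethod
    def start_vm(self, image: str) -> str: ...

class CloudStackAdapter(CloudAdapter):
    def start_vm(self, image: str) -> str:
        return f"cloudstack-vm-for-{image}"      # placeholder for an API call

class EC2Adapter(CloudAdapter):
    def start_vm(self, image: str) -> str:
        return f"ec2-instance-for-{image}"       # placeholder for an API call

def scale_out(unified_model: list, image: str) -> list:
    # Management logic is written once against the unified model; adapters
    # keep each platform's runtime model in sync with it.
    return [cloud.start_vm(image) for cloud in unified_model]

print(scale_out([CloudStackAdapter(), EC2Adapter()], "web-server"))
```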

    • Research on Task Allocation Based Trust Degree for Social Network of Syncretic System

      2017, 28(7):1898-1925. DOI: 10.13328/j.cnki.jos.005109


      Abstract: In multi-agent social networks, traditional task allocation models generally adopt an allocation mechanism oriented directly to task performers. They neither consider the huge impact of the social network structure on allocation performance, nor thoroughly study allocation mechanisms for unreliable societies. To address these problems, this paper first designs a collaborative organization model following a hierarchical, layered methodology, and then proposes a community-oriented task allocation model for software-hardware syncretic systems. In developing the model, the paper devises a community trust evaluation mechanism based on direct trust and community reputation, a community node selection mechanism based on the trust degree and physical capability of a community, a load-balancing mechanism for task allocation within a community, and a task redistribution strategy based on the context of community resources. Experimental results show that the proposed model achieves better allocation performance and robustness than other classical models, and validate that the load-balancing allocation mechanism and the redistribution strategy not only effectively improve allocation performance but also reduce the communication density of the social network.
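
      A sketch of the trust-plus-capability selection idea (the weighting scheme below is an illustrative assumption; the paper's evaluation and selection mechanisms are more detailed):

```python
# Blend direct trust from past interactions with community reputation, then
# rank candidate communities by trust degree weighted by physical capability.
def trust_degree(direct_trust, reputation, alpha=0.6):
    return alpha * direct_trust + (1 - alpha) * reputation

communities = {
    "c1": {"direct": 0.9, "reputation": 0.7,  "capability": 0.8},
    "c2": {"direct": 0.6, "reputation": 0.95, "capability": 0.9},
}

def select_community(cands):
    score = lambda c: trust_degree(c["direct"], c["reputation"]) * c["capability"]
    return max(cands, key=lambda name: score(cands[name]))

print(select_community(communities))   # 'c2' wins on combined score
```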
