• Volume 26, Issue 7, 2015: Table of Contents
    • Clustering Search Algorithm

      2015, 26(7):1557-1573. DOI: 10.13328/j.cnki.jos.004613

      Abstract (3756) HTML (1021) PDF 1.33 M (5508) Comment (0) Favorites

      Abstract:A novel evolutionary optimization method, the clustering search algorithm (CSA), is presented. In CSA, a virtual cluster group is constructed among individuals in order to dynamically adjust the operating state of the simulated evolutionary system and improve the search efficiency of the population. After introducing the basic model of CSA, this paper presents CSA/DE, a new CSA that blends evolutionary search operators with a differential computing mechanism for solving numerical optimization problems. In simulations, 6 classical multidimensional functions and 6 challenging composition functions are selected to test the performance of CSA/DE. The experimental results show that CSA/DE is an efficient and reliable search optimization algorithm for multidimensional continuous problems. The work of this paper verifies the feasibility and validity of CSA. Meanwhile, this research demonstrates that, based on CSA, multiple heterogeneous search mechanisms can be merged into one algorithm to achieve greater pertinence and harmony in the search process, thus providing a viable way to design high-performance optimization algorithms.

    • Research of Global Ranking Based Many-Objective Optimization

      2015, 26(7):1574-1583. DOI: 10.13328/j.cnki.jos.004612

      Abstract (3489) HTML (1095) PDF 732.13 K (5889) Comment (0) Favorites

      Abstract:Many-objective optimization problems (MOPs), with more than four objectives, are among the most difficult problems in the field of evolutionary multi-objective optimization. In fact, existing multi-objective evolutionary algorithms (MOEAs) cannot fulfill the engineering requirements of convergence, diversity, and stability. In this paper, a new kind of many-objective evolutionary algorithm is proposed. The algorithm adopts a global ranking technique that favors convergence by improving selection pressure without needing the user's preference or objective information, avoiding the loss of rationality and credibility caused by relaxed Pareto domination relations. In addition, a new global density estimation method based on the harmonic average distance is presented. Finally, a new elitist selection strategy is designed. Simulation results on the DTLZ{1,2,4,5} test problems with 4 to 30 objectives show that the proposed algorithm consistently provides good convergence as the number of objectives increases, outperforming five state-of-the-art MOEAs.
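      The harmonic-average-distance density estimate named in this abstract can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact formula: each individual's crowding value is the harmonic mean of its distances to its k nearest neighbours, which is dominated by the closest neighbours and therefore penalises crowded regions sharply.

```python
import numpy as np

def harmonic_average_distance(point, others, k):
    """Density estimate for one individual: the harmonic mean of its
    distances to its k nearest neighbours (assumed strictly positive).
    Smaller values indicate a more crowded region."""
    d = np.sort(np.linalg.norm(others - point, axis=1))[:k]
    return k / np.sum(1.0 / d)

# Toy objective vectors: the first two points crowd each other.
pop = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0]])
dists = [harmonic_average_distance(pop[i], np.delete(pop, i, axis=0), k=2)
         for i in range(len(pop))]
```

      In an elitist selection step, individuals with larger harmonic average distance (less crowded) would be preferred among equally ranked candidates.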

    • Research on 3D Hexagonal Torus Based on Cayley Graph

      2015, 26(7):1584-1600. DOI: 10.13328/j.cnki.jos.004588

      Abstract (3831) HTML (0) PDF 1.13 M (4786) Comment (0) Favorites

      Abstract:The 3D hexagonal torus is presented as a natural extension of the hexagonal torus. Hexagonal tori are proved to be a type of Cayley graph. This paper develops a new type of 3D hexagonal torus based on Cayley graphs, together with an addressing scheme for this type of network. Based on this addressing scheme, the distance formula between any two nodes is derived, and a distributed optimal routing algorithm is developed: it can be executed on any node in the network to construct a shortest path between any pair of nodes. A broadcasting algorithm is also presented based on coset graph theory. Upper and lower bounds on the network diameter are also derived.

    • Co-Evolutionary Algorithm for Web Service Matching

      2015, 26(7):1601-1614. DOI: 10.13328/j.cnki.jos.004698

      Abstract (3439) HTML (990) PDF 1.24 M (5099) Comment (0) Favorites

      Abstract:Service matching is a principal step in Web service discovery. At present, the narrow concept of atomic Web service matching, the high time complexity of current matching algorithms, and the difficulty of expressing the Web service matching problem for intelligent optimization algorithms are the main obstacles in Web service matching. To address these problems, this article introduces the concept of compound service matching by extending the concept of atomic service matching, and formulates the compound matching problem mathematically through a fitness function and constraints. An encoding of Web service matching solutions for intelligent optimization algorithms is also proposed. Based on the co-evolutionary combination of particle swarm optimization (PSO) and simulated annealing (SA), the study puts forward a co-evolutionary algorithm (PSO-SA) for the compound Web service matching problem. According to the experimental results, PSO-SA achieves better matching precision than other optimization algorithms within the iteration limit on matching problems of various dimensions. PSO-SA also adapts well to compound service matching and improves the quality of Web service discovery results.

    • Title Recognition of Maximal-Length Noun Phrase Based on Bilingual Co-Training

      2015, 26(7):1615-1625. DOI: 10.13328/j.cnki.jos.004630

      Abstract (3338) HTML (1014) PDF 796.54 K (4594) Comment (0) Favorites

      Abstract:This article addresses the weak cross-domain performance of bilingual maximal-length noun phrase recognition. A bilingual noun phrase recognition algorithm based on semi-supervised learning is proposed. The approach makes full use of both English and Chinese features in a unified framework, treating the two language corpora as different views of one dataset. Instances with the highest confidence scores are selected and merged, then added to the labeled data set to retrain the classifier. Experimental results on the test sets show the effectiveness of the proposed approach, which outperforms the baseline by 4.52% in the cross-domain setting and by 3.08% in the similar-domain setting.
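      The bilingual co-training loop described here can be sketched generically. Everything below is an illustrative toy: the single-feature "classifiers" and the numeric data stand in for real English-view and Chinese-view phrase recognizers; only the control flow (train per view, add the most confident unlabeled instance, retrain) reflects the described scheme.

```python
def train(labeled, view):
    """Toy single-feature classifier for one view: nearest class mean.
    Returns a predict(x) -> (label, confidence) function, where the
    confidence is the margin between the two nearest class means."""
    feats = {}
    for x, y in labeled:
        feats.setdefault(y, []).append(x[view])
    centers = {y: sum(v) / len(v) for y, v in feats.items()}
    def predict(x):
        ranked = sorted(centers, key=lambda y: abs(x[view] - centers[y]))
        best, worst = ranked[0], ranked[-1]
        return best, abs(x[view] - centers[worst]) - abs(x[view] - centers[best])
    return predict

def co_train(labeled, unlabeled, rounds=3):
    """Each round, each view's classifier labels the unlabeled instance
    it is most confident about; that instance joins the labeled set and
    the classifiers are retrained on the grown set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        for view in (0, 1):   # e.g. English-side view, Chinese-side view
            if not unlabeled:
                break
            clf = train(labeled, view)
            pick = max(unlabeled, key=lambda x: clf(x)[1])
            labeled.append((pick, clf(pick)[0]))
            unlabeled.remove(pick)
    return labeled

# Each instance is (english_feature, chinese_feature); class 0 near 0, class 1 near 1.
seed = [((0.0, 0.1), 0), ((1.0, 0.9), 1)]
pool = [(0.1, 0.0), (0.9, 1.0), (0.45, 0.55)]
result = co_train(seed, pool)
```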

    • Study on User Tags in Weibo

      2015, 26(7):1626-1637. DOI: 10.13328/j.cnki.jos.004655

      Abstract (4477) HTML (995) PDF 872.85 K (8677) Comment (0) Favorites

      Abstract:Weibo allows users to add text tags to their profiles that describe their personality and interests. This tag information can be very useful for user profiling in applications such as personalized recommendation, expert finding, and social influence measurement. This paper first studies the characteristics of users' tagging behavior and tag content based on large-scale data. By applying a topic model to users' Weibo posts, it finds that the more tags two users have in common, the more similar their Weibo posts are, and vice versa. It also finds that users connected to each other have more similar tags and Weibo posts. Based on this observation, this study uses tags and Weibo posts separately to predict user connections on real-world data. The experimental results show that the tag-based approach is significantly better than the post-based approach, validating the effectiveness of user tags in describing user interests.
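      The tag-based connection prediction studied here reduces to comparing tag sets. A minimal sketch, with an illustrative Jaccard similarity and a made-up threshold (the paper's actual similarity measure and decision rule are not specified in this abstract):

```python
def jaccard(a, b):
    """Similarity of two users' tag sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_link(u, v, tags, threshold=0.3):
    """Predict that two users are connected when their tag sets are
    similar enough. The threshold value is purely illustrative."""
    return jaccard(tags[u], tags[v]) >= threshold

tags = {
    "alice": {"machine learning", "python", "nlp"},
    "bob":   {"machine learning", "nlp", "deep learning"},
    "carol": {"cooking", "travel"},
}
```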

    • Quantitative Threat Situational Assessment Based on Contextual Information

      2015, 26(7):1638-1649. DOI: 10.13328/j.cnki.jos.004624

      Abstract (3464) HTML (1206) PDF 858.06 K (5554) Comment (0) Favorites

      Abstract:Traditional network threat situational assessment is based on raw alerts; however, its lack of access to contextual information compromises the accuracy of the assessment. This paper proposes a method to quantitatively assess the network threat situation based not only on alerts but also on contextual information. The new method first verifies alerts against contextual information to determine the success probability of events, then analyzes the loss caused by events according to their risk and the value of the corresponding assets, and finally quantitatively assesses the network threat situation from the success probability and loss of events. Case studies show that the proposed method can evaluate network threat situations accurately.
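      The quantification described above (success probability combined with loss) can be sketched in a few lines. The loss model below, asset value scaled by a normalised risk level, and the additive aggregation are illustrative assumptions, not the paper's exact formulas:

```python
def event_threat(success_prob, asset_value, risk_level):
    """Threat contribution of one verified event: the probability the
    attack succeeded times the loss it would cause. The loss model
    (asset value * normalised risk level) is an assumed stand-in."""
    loss = asset_value * risk_level
    return success_prob * loss

def network_threat(events):
    """Aggregate situation value: sum of per-event threats (assumed)."""
    return sum(event_threat(p, v, r) for p, v, r in events)

# Two alerts: one likely successful against a valuable asset,
# one almost certainly failed against a low-value asset.
events = [(0.9, 100.0, 0.8), (0.05, 10.0, 0.5)]
situation = network_threat(events)
```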

    • Automatic Term Alignment Based on Advanced Multi-Strategy and Giza++ Integration

      2015, 26(7):1650-1661. DOI: 10.13328/j.cnki.jos.004615

      Abstract (3472) HTML (1399) PDF 796.84 K (4964) Comment (0) Favorites

      Abstract:The quality of cross-phylum term alignment depends on the quality of term extraction and the alignment method. This paper proposes an automatic term alignment method based on advanced multi-strategy and GIZA++ (AGiza) integration. By analyzing the properties of term extraction in existing methods from the literature, rules on the first and last part of speech of strings are designed to increase the recall rate. Methods applied to increase the precision of term extraction include: (1) an independence filter; (2) a stop-word filter; and (3) recognition of the co-occurrence of terms in sentence pairs. The following steps are also implemented to increase alignment quality: (1) design a degree of independence correspondence based on the degree of independence; (2) construct a degree of stop-word correspondence based on the degree of stop-word usage; (3) propose a degree of semantic correspondence computed from the seed pairs' correspondence and word pairs' similarity based on the additivity of probability; (4) construct the alignment correspondence degree of the first and last parts between term pairs in order to discard term pairs whose value is zero; (5) present the part-of-speech similarity degree between term pairs, considering the patterns that define the morphosyntactic structures of terms; and (6) obtain the value g from GIZA++. The term-aligned degree a is computed from these six attributes of term pairs, based on the multiplication of probabilities, after analyzing their correlations. Term pairs are then extracted by selecting the candidate term pairs whose a exceeds the term-aligned threshold, which is chosen so that the loss of recall is less than 1%. Simulation results on Chinese-English term alignment show that AGiza can extract cross-phylum term pairs effectively. Furthermore, it outperforms GIZA++, the Dice coefficient, the F2 coefficient, the log-likelihood ratio, K-VEC, and DKVEC.

    • Logarithmic and Exponential Morphological Associative Memories

      2015, 26(7):1662-1674. DOI: 10.13328/j.cnki.jos.004620

      Abstract (4988) HTML (977) PDF 895.88 K (6339) Comment (0) Favorites

      Abstract:A novel morphological associative memory method, abbreviated as LEMAM, is constructed using logarithmic and exponential operators. Theoretical analysis shows that auto-LEMAM (abbreviated as ALEMAM), which has unlimited storage capacity, one-step recall, and a certain ability to resist erosive or dilative noise, guarantees perfect recall for either perfect inputs or inputs within a certain range of noise. Hetero-LEMAM (abbreviated as HLEMAM) does not guarantee perfect recall, even without input noise; however, when certain conditions are met, HLEMAM can also achieve perfect recall. Contrast experiments show that, in some cases, LEMAM produces better results. On balance, LEMAM enriches the theory and practice of morphological associative memories and can serve as a new neural computational model for research and application.

    • Ontology-Based Intelligent Information Retrieval System

      2015, 26(7):1675-1687. DOI: 10.13328/j.cnki.jos.004622

      Abstract (4189) HTML (1385) PDF 830.07 K (7822) Comment (0) Favorites

      Abstract:Recently, ontology-based intelligent information retrieval systems, aiming to further improve the retrieval performance and intelligence by using ontology, have become one of the hottest topics in the domain of intelligent information retrieval systems. This paper presents an overview of the field of ontology-based intelligent information retrieval systems from a process-oriented perspective, including the system framework, ontology knowledge acquisition and use, key technologies, and evaluation. The prospects for future development and suggestions for possible extensions of the ontology-based intelligent information retrieval systems are also discussed.

    • Interior Penalty Rule Based Evolutionary Algorithm for Constrained Optimization

      2015, 26(7):1688-1699. DOI: 10.13328/j.cnki.jos.004623

      Abstract (3642) HTML (1120) PDF 853.84 K (5370) Comment (0) Favorites

      Abstract:In order to incorporate constraints into the evaluation of solutions, a set of interior penalty rules is proposed to improve the efficiency of evolutionary algorithms in solving constrained optimization problems. Under these rules, interior penalty functions are used to evaluate feasible solutions, while constraint violations are used to evaluate infeasible solutions. In addition, an interior-penalty-rule-based evolution strategy algorithm is derived to solve constrained optimization problems. The theoretical validity of the rules is analyzed via the success rate of a (1+1) evolutionary algorithm. The proposed approach is tested on 13 benchmark problems. The results indicate that it is competitive with two existing state-of-the-art techniques.
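      The evaluation rule described above (penalty function for feasible solutions, constraint violation for infeasible ones) can be sketched as a comparison key. The log-barrier form of the interior penalty below is a common textbook choice assumed for illustration; the paper's exact penalty functions may differ:

```python
import math

def evaluate(x, f, gs, r=1e-3):
    """Ranking key under an interior-penalty-style rule (a sketch):
    feasible solutions (all g_i(x) <= 0) are scored by the objective
    plus a small log-barrier term that keeps search off the constraint
    boundary; infeasible solutions are scored only by their total
    violation and always rank after every feasible solution."""
    g = [gi(x) for gi in gs]
    violation = sum(max(0.0, v) for v in g)
    if violation == 0.0:                         # feasible
        barrier = -sum(math.log(-v) for v in g if v < 0)
        return (0, f(x) + r * barrier)
    return (1, violation)                        # infeasible

# Minimise f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x * x
gs = [lambda x: 1.0 - x]
candidates = [0.5, 1.5, 2.0]
best = min(candidates, key=lambda x: evaluate(x, f, gs))
```

      In a (1+1) evolution strategy, parent and offspring would simply be compared by this key, so a feasible offspring always replaces an infeasible parent.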

    • Study on the Downlink Performance of Roadside Unit in Vehicular Ad-Hoc Networks

      2015, 26(7):1700-1710. DOI: 10.13328/j.cnki.jos.004616

      Abstract (3361) HTML (1125) PDF 865.58 K (7319) Comment (0) Favorites

      Abstract:VANET (vehicular ad hoc network) research based on 802.11p/WAVE is a comprehensive and systematic field of study involving many subjects such as intelligent transportation systems, wireless communication, and self-organizing systems. A roadside unit (RSU) works as an IEEE 802.11p access point (AP) that offers services to clients in traveling vehicles for anytime, anywhere connection to the Internet, making the modeling of RSU downlink performance a key challenge. By introducing the vehicle density probability on a freeway, the channel access contention scenarios of the CSMA/DCF-based MAC layer, well adapted to time-varying vehicle arrivals, are analyzed in detail. On that basis, a throughput performance model of VANET, covering both the downlink and the uplink through the RSU for traveling vehicles, is presented. To address the intermittent connectivity caused by the RSU's inability to cover the entire highway, a VANET cooperative downloading scheme (VCoDS) is constructed to extend the RSU's downlink throughput. Simulations verify the effectiveness of the proposed VCoDS scheme.

    • Effective Algorithm for Placement of Directional Wireless Chargers

      2015, 26(7):1711-1729. DOI: 10.13328/j.cnki.jos.004618

      Abstract (3748) HTML (939) PDF 1.42 M (5879) Comment (0) Favorites

      Abstract:Traditional sensor nodes are powered by batteries, and the limited battery capacity constrains the lifetime of wireless sensor networks. Wireless power transfer technology allows energy to be transferred from a charger to sensor nodes wirelessly, and thus solves this problem completely. One fundamental issue in wireless rechargeable sensor networks is the wireless charger placement problem, i.e., how to place the chargers so as to maximize the overall charging utility of the network. Existing work mainly focuses on the deployment of omnidirectional chargers confined to positions such as the vertices of triangles or lattice points of a grid, and inevitably has its limitations. This study considers the general placement problem in which the charging area of a charger is a sector and the charger can be deployed at any position in the field with arbitrary orientation. First, a charging model for directional chargers is constructed based on trace data. Then, a series of novel techniques is proposed to transform the problem and develop an effective algorithm, CDG (charger deployment greedy), with approximation ratio (1-1/e)/(1+ε). The simulation results demonstrate the effectiveness of the CDG algorithm: compared with two random baseline algorithms, CDG achieves performance gains of nearly 300% and 100%, respectively.
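      The greedy core that placement algorithms of this kind build on can be sketched generically. This is not CDG itself (which adds problem-specific transformations); it is the plain greedy skeleton, assuming a monotone submodular utility, which is what yields constant-factor guarantees of the (1-1/e) family:

```python
def greedy_placement(candidates, utility, budget):
    """Repeatedly pick the candidate (position, orientation) whose
    addition yields the largest marginal gain in total utility.
    `utility` maps a list of chosen placements to a number and is
    assumed monotone submodular."""
    chosen = []
    remaining = list(candidates)
    for _ in range(budget):
        gains = [(utility(chosen + [c]) - utility(chosen), c) for c in remaining]
        gain, pick = max(gains, key=lambda t: t[0])
        if gain <= 0:
            break
        chosen.append(pick)
        remaining.remove(pick)
    return chosen

# Toy coverage utility: each candidate charger covers a set of sensors.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}}
utility = lambda chosen: len(set().union(*[coverage[c] for c in chosen]))
picked = greedy_placement(coverage, utility, budget=2)
```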

    • Prediction Based Relay Selection Method in Opportunistic Vehicular Networks

      2015, 26(7):1730-1741. DOI: 10.13328/j.cnki.jos.004629

      Abstract (3491) HTML (969) PDF 867.60 K (4300) Comment (0) Favorites

      Abstract:To accommodate the dynamic connectivity and networking conditions in DTNs, most existing routing schemes infer future contact opportunities to select message relays. However, such a relay may have only an incidental effect on opportunistic data delivery. In order to gain a deeper understanding of the intrinsic uncertainty of relay efficiency, this paper conducts an empirical investigation of relay selection schemes. First, it introduces a more effective opportunistic relay selection strategy based on the estimation of residual message delay, which utilizes pairwise contact records and the elapsed time since the last contact. Next, based on the opportunistic vehicular network extracted from large-scale realistic vehicle traces collected in urban areas, it investigates the efficiency of relay selection from an empirical viewpoint. The questions this study focuses on are: What is the probability that a selected relay reduces the end-to-end delay? How much latency can be saved by a properly selected relay? Such an empirical study is significant for protocol design and application deployment in future opportunistic vehicular networks.

    • Method of DDoS Countermeasure Selection Based on Multi-Attribute Decision Making

      2015, 26(7):1742-1756. DOI: 10.13328/j.cnki.jos.004673

      Abstract (3662) HTML (1014) PDF 1.45 M (5747) Comment (0) Favorites

      Abstract:DDoS is one of the biggest threats on the network. Selecting the proper countermeasure will notably improve the security level of the target network and/or target system, but existing security evaluation methods do not provide sufficient support for countermeasure selection. To solve this problem, this paper builds a DDoS countermeasure selection model (DCSM) and then proposes a DDoS countermeasure selection method. The method not only applies multi-attribute decision making theory to evaluate multi-dimensional metrics, but also uses a historical-attack-preference-based method and the entropy method to calculate weights from both the attack and defense perspectives, thus reducing the subjective factors in conventional methods. The correctness and applicability of the method are validated by experiments.

    • Threshold Multi-Secret Visual Cryptography Scheme with Perfect Recovery

      2015, 26(7):1757-1771. DOI: 10.13328/j.cnki.jos.004611

      Abstract (3602) HTML (1026) PDF 2.30 M (4604) Comment (0) Favorites

      Abstract:To address the issue of information loss when recovering multiple secret images under a threshold structure, this paper develops a definition of threshold multi-secret visual cryptography with perfect recovery, which can accommodate various corresponding relationships between the threshold value and the number of secrets. Based on this definition, a single-threshold multi-secret visual cryptography scheme with upper and lower thresholds is constructed, and a multiple-threshold scheme is then proposed by designing a rotation-rule fusion algorithm and a region-merging algorithm. Furthermore, the scheme's effectiveness is proved theoretically and verified by experiments.

    • Development Approach Based on Extensible User Interface Description Language

      2015, 26(7):1772-1784. DOI: 10.13328/j.cnki.jos.004584

      Abstract (3375) HTML (981) PDF 1.46 M (4947) Comment (0) Favorites

      Abstract:The division of roles in user interface development is becoming more and more complex, and the tools used in the development process vary greatly. These problems increase communication costs and decrease efficiency. This article introduces a development method based on a user interface description language. First, a new user interface description language, E-UIDL, is proposed. Then, a series of supporting tools based on E-UIDL is designed. Next, the development process for user interfaces based on E-UIDL and the related tools is presented. Finally, the feasibility of the proposed method is validated.

    • Realistic Cloth Simulation Based on Geometrical Measurement and Deformation

      2015, 26(7):1785-1799. DOI: 10.13328/j.cnki.jos.004614

      Abstract (3471) HTML (1512) PDF 8.09 M (4573) Comment (0) Favorites

      Abstract:This paper presents an efficient and expressive cloth simulation method based on geometrical measurement and mesh deformation for obtaining realistic cloth shapes with various fabric materials. Geometrical measurement captures the geometric material properties, including recovery, stretching, and bending, of different real cloth behaviors. In accordance with these three geometric properties, three energy terms related to vertex position, edge length, and dihedral angle are embedded into an energy optimization model based on differential mesh deformation. The corresponding weights for the three energy terms are learned from data measured on real cloth. The energy optimization is solved numerically in the least-squares sense. Using the learned weight settings and numerical optimization, realistic cloth behaviors can be simulated. Experimental results show that the presented method can effectively produce realistic cloth effects for various materials.

    • Feature Selection with Enhanced Sparsity for Web Image Annotation

      2015, 26(7):1800-1811. DOI: 10.13328/j.cnki.jos.004687

      Abstract (3781) HTML (1001) PDF 866.04 K (5368) Comment (0) Favorites

      Abstract:Faced with the explosive growth of web images, Web image annotation has become a critical research issue in recent years. Sparse feature selection plays an important role in improving the efficiency and performance of Web image annotation. In this paper, a feature selection framework with enhanced sparsity is proposed for Web image annotation. The new framework, termed semi-supervised sparse feature selection based on the l2,1/2-matrix norm with shared subspace learning (SFSLS), selects the most sparse and discriminative features by utilizing the l2,1/2-matrix norm and obtains the correlation between different features via shared subspace learning. In addition, SFSLS uses graph-Laplacian-based semi-supervised learning to exploit labeled and unlabeled data simultaneously. An efficient iterative algorithm is designed to optimize the objective function. SFSLS is compared with other feature selection algorithms on two Web image datasets, and the results indicate that it is suitable for large-scale Web image annotation.

    • Local Linear Coding Based on Riemannian Kernel

      2015, 26(7):1812-1823. DOI: 10.13328/j.cnki.jos.004714

      Abstract (3378) HTML (1035) PDF 1017.22 K (5345) Comment (0) Favorites

      Abstract:Recent research has shown that better recognition performance can be attained in many computer vision tasks by representing symmetric positive definite matrices as points on Riemannian manifolds. However, most existing algorithms only approximate the Riemannian manifold locally by its tangent space and cannot effectively capture the distribution of samples. Inspired by kernel methods, a novel method called local linear coding based on a Riemannian kernel (LLCRK) is proposed and applied successfully to vision classification problems. First, with the aid of a recently introduced Riemannian kernel, symmetric positive definite matrices are mapped into a reproducing kernel Hilbert space, and a mathematical model of sparse coding and Riemannian dictionary learning is constructed using local linear coding theory. Second, an efficient LLCRK algorithm for dictionary learning is presented based on convex optimization methods. Finally, an iterative updating algorithm is constructed to optimize the objective function, and test samples are classified by a nearest neighbor classifier. Experimental results on three visual classification data sets demonstrate that the proposed algorithm achieves considerable improvement in discrimination accuracy.

    • Ripple-RAID: A High-Performance and Energy-Efficient RAID for Continuous Data Storage

      2015, 26(7):1824-1839. DOI: 10.13328/j.cnki.jos.004606

      Abstract (3344) HTML (1093) PDF 1.09 M (5325) Comment (0) Favorites

      Abstract:Applications such as video surveillance, backup, and archiving have inherent workload characteristics and I/O access patterns, and require specialized optimization for storage energy saving. Partial parallelism in RAID is beneficial to energy saving but generally forces the RAID to perform small writes, which heavily degrades performance. A high-performance and energy-efficient RAID, Ripple-RAID, is proposed for these applications. A new partially parallel data layout is presented, and by comprehensively employing strategies such as address mapping, out-of-place update, progressive pipeline-based parity generation, and segmented data recovery, Ripple-RAID obtains the energy efficiency of partial parallelism while eliminating the small writes it incurs, and still tolerates a single disk failure. When the write workload is 80% sequential and the transfer request size is 512KB, the write performance of Ripple-RAID is 3.9 times that of S-RAID 5, 1.9 times that of Hibernator and MAID, and 0.49 times that of PARAID and eRAID 5; meanwhile its energy consumption is 20% less than S-RAID 5, 33% less than Hibernator and MAID, 70% less than eRAID 5, and 72% less than PARAID.

    • Power Estimation Model on Multi-Core Platforms

      2015, 26(7):1840-1852. DOI: 10.13328/j.cnki.jos.004706

      Abstract (3181) HTML (1398) PDF 904.72 K (8029) Comment (0) Favorites

      Abstract:Accurate power consumption estimation can provide significant guidance for OS scheduling and software/hardware power efficiency optimization. Previous research has indicated that power consumption can be estimated by monitoring related hardware events inside the CPU, such as instructions retired and cache accesses. However, such hardware-event-based models are often inaccurate, with errors exceeding 5%. This study first analyzes the hardware events provided by the CPU, then chooses a set of events closely related to power consumption, and finally applies stepwise multiple linear regression to build a run-time estimation model. The model is application-independent and can be directly transferred to platforms that support SMT. It is verified with the two benchmark suites PARSEC and SPLASH2, yielding estimation errors of 3.01% and 1.99%, respectively. To address the high time cost of modeling, an optimization scheme based on two-step clustering is also presented. The proposed estimation model can serve as a foundation for intelligent power-aware systems that dynamically balance power assignment and smooth peak power consumption at run time.
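      The regression step at the heart of such a model can be sketched on synthetic data. The event set, weights, and idle power below are invented for illustration; a real model would regress measured power against measured counter rates, and the stepwise procedure would additionally select which events to keep:

```python
import numpy as np

# Synthetic hardware-event counts (e.g. instructions retired, cache
# accesses, ...) and a hypothetical linear power model to recover.
rng = np.random.default_rng(0)
events = rng.random((200, 3))            # rows: samples, cols: event rates
true_w = np.array([12.0, 5.0, 2.5])      # hypothetical per-event weights (W)
idle = 30.0                              # hypothetical static/idle power (W)
power = events @ true_w + idle + rng.normal(0, 0.1, 200)

# Ordinary least squares with an intercept term: one fit of the kind a
# stepwise procedure performs after choosing an event subset.
X = np.column_stack([events, np.ones(len(events))])
w, *_ = np.linalg.lstsq(X, power, rcond=None)
predicted = X @ w
error = np.mean(np.abs(predicted - power) / power)   # mean relative error
```

      The fitted intercept approximates the static power and the remaining coefficients the per-event dynamic contributions, which is what makes the model cheap to evaluate at run time.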

Contact
  • 《Journal of Software》
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
Technical support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063