• Volume 24, Issue 10, 2013 Table of Contents
    • Hybrid Immune Algorithm with EDA for Multi-Objective Optimization

      2013, 24(10):2251-2266. DOI: 10.3724/SP.J.1001.2013.04341

      Abstract: The estimation of distribution algorithm (EDA) is a type of evolutionary computation that reproduces offspring by modeling and sampling the probability distribution of the evolving population. In this paper, the idea of EDA is introduced into an immune multi-objective optimization algorithm to form a hybrid algorithm, termed HIAEDA (hybrid immune algorithm with EDA for multi-objective optimization), for solving complex multi-objective optimization problems (MOPs). HIAEDA combines two reproduction strategies: a recombination- and mutation-based immune clonal selection operator, which searches locally around the parent population and opens up new search areas, and an EDA-based modeling and sampling operator, which learns variable linkages and strengthens the algorithm's ability to solve complex problems. An analysis of the search behavior of the two operators shows that their functions are complementary. The convergence of HIAEDA is proved using finite Markov chain theory. Experimental results on benchmark and real-world problems show that HIAEDA outperforms NSGA-II and the EDA-based RM-MEDA in both convergence and diversity, especially on complex MOPs with nonlinear relationships among decision variables.
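
To make the EDA-based modeling-and-sampling operator concrete, here is a minimal sketch (not the authors' implementation): fit a multivariate Gaussian to the selected parents, so the full covariance captures linear variable linkages, and sample offspring from it.

```python
import numpy as np

def eda_reproduce(parents: np.ndarray, n_offspring: int, rng=None) -> np.ndarray:
    """parents: (n, d) array of selected decision vectors."""
    rng = rng or np.random.default_rng(0)
    mean = parents.mean(axis=0)
    # A full covariance matrix captures linear linkages between variables;
    # the small ridge keeps it positive definite.
    cov = np.cov(parents, rowvar=False) + 1e-8 * np.eye(parents.shape[1])
    return rng.multivariate_normal(mean, cov, size=n_offspring)

parents = np.random.default_rng(1).normal(size=(50, 3))
offspring = eda_reproduce(parents, n_offspring=20)
print(offspring.shape)  # (20, 3)
```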

    • Algorithm for Locating All Local Minima of Multidimensional Function

      2013, 24(10):2267-2274. DOI: 10.3724/SP.J.1001.2013.04337

      Abstract: This paper focuses on locating all local minima of box-constrained, non-linear optimization problems. A new algorithm based on the Multistart method is proposed. A quality measure called the G-measure is constructed to characterize how the local minima of a multidimensional, continuous, differentiable function are distributed inside the bounded domain; the distribution is measured in three facets: gradient, convexity and concavity, and rate of decline. The feasible region is divided into several small regions, and each is assigned a set of initial points in proportion to its G-measure, so that more initial points are allocated to regions containing more local minima. A condition for judging whether an initial point is effective is introduced to reduce the number of runs of the local optimization technique, and an approximate computing method is constructed to reduce the computational complexity of the G-measure. Several benchmarks with large numbers of local minima are chosen, and the performance of the new method is compared with that of Multistart and MinFinder. Experimental results show that the proposed method achieves better search efficiency.
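
A minimal sketch of measure-guided Multistart, assuming a crude stand-in for the G-measure (the paper combines gradient, convexity/concavity, and rate of decline; here the standard deviation of sampled function values serves as a rough ruggedness proxy):

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def region_measure(f, lo, hi, n=64, rng=None):
    rng = rng or np.random.default_rng(0)
    pts = rng.uniform(lo, hi, size=(n, len(lo)))
    vals = np.array([f(p) for p in pts])
    return vals.std() + 1e-12            # proxy: rugged regions score higher

# Split [-5, 5]^2 into 4 sub-boxes along the first axis, allocate a budget of
# 40 starts in proportion to each region's measure, then run local searches.
lo, hi, budget = np.array([-5.0, -5.0]), np.array([5.0, 5.0]), 40
edges = np.linspace(lo[0], hi[0], 5)
regions = [(np.array([a, lo[1]]), np.array([b, hi[1]]))
           for a, b in zip(edges[:-1], edges[1:])]
m = np.array([region_measure(rastrigin, a, b) for a, b in regions])
alloc = np.maximum(1, np.round(budget * m / m.sum()).astype(int))

rng, minima = np.random.default_rng(1), []
for (a, b), k in zip(regions, alloc):
    for x0 in rng.uniform(a, b, size=(k, 2)):
        res = minimize(rastrigin, x0, bounds=list(zip(a, b)))
        # keep only minima not already found (simple distance-based dedup)
        if not any(np.linalg.norm(res.x - m_) < 1e-3 for m_ in minima):
            minima.append(res.x)
print(f"found {len(minima)} distinct local minima")
```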

    • Fast Addition Formulae for Montgomery Ladder Scalar Multiplication on Hyperelliptic Curves

      2013, 24(10):2275-2288. DOI: 10.3724/SP.J.1001.2013.04370

      Abstract: Compared with elliptic curve (EC) cryptosystems, hyperelliptic curve (HEC) cryptosystems offer the same level of security with shorter key sizes. Scalar multiplication is the key operation in cryptosystems built on HECs and ECs, and the Montgomery ladder is an efficient and important algorithm for implementing scalar multiplication while defending against side-channel attacks. While the Montgomery ladder on elliptic curves has been improved in recent years, there has been little progress on hyperelliptic curves. Lange proposed a way to design faster addition formulae on hyperelliptic curves, but it did not lead to a practical solution. This paper improves the addition of divisor classes for the first time to implement a faster Montgomery ladder, applying new techniques to improve the formulae in various coordinate systems. Analysis and experimental results show that the new formulae are faster than previous ones: over fields of characteristic two and on Type II curves, they are 4%~8.4% faster than those known before, and the Montgomery ladder implemented in this paper is secure against side-channel attacks.
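
The Montgomery ladder itself is independent of the underlying group, so the sketch below shows its structure with modular multiplication standing in for the divisor-class addition that the paper's new formulae accelerate. Both branches perform one addition and one doubling per scalar bit, which underlies the algorithm's side-channel resistance.

```python
def montgomery_ladder(k: int, g, op, identity):
    """Compute k*g (additive notation) via the ladder; op is the group law."""
    r0, r1 = identity, g
    for bit in bin(k)[2:]:               # scan scalar bits, most significant first
        if bit == '0':
            r1 = op(r0, r1)              # R1 <- R0 + R1
            r0 = op(r0, r0)              # R0 <- 2*R0
        else:
            r0 = op(r0, r1)              # R0 <- R0 + R1
            r1 = op(r1, r1)              # R1 <- 2*R1
    return r0

# Demo in the multiplicative group mod 101: "scalar multiplication" becomes
# exponentiation, so the result must equal pow(5, 77, 101).
p = 101
assert montgomery_ladder(77, 5, lambda a, b: a * b % p, 1) == pow(5, 77, p)
```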

    • Highly Scalable MHP Analysis Algorithm

      2013, 24(10):2289-2299. DOI: 10.3724/SP.J.1001.2013.04372

      Abstract: May-happen-in-parallel (MHP) analysis determines whether a pair of statements in a parallel program can be executed concurrently; it plays an important part in many parallel analyses. This paper proposes a novel MHP analysis algorithm for Java programs. Compared with existing MHP algorithms, the new algorithm discards the unnecessary assumption that a child thread can only be join-synchronized by its parent thread, and processes start-synchronization and join-synchronization independently in a decoupled way. This makes the processing logic of the algorithm more concise and more complete than that of existing algorithms. When computing dominator information, the new algorithm is more scalable because it does not need to construct a global flow graph for the program by inlining, as existing algorithms do. The new MHP analysis is used to filter out false warnings reported by a static data race detection tool. Experimental results on 14 Java programs show that the time for computing dominator information with the new MHP analysis is much shorter than with the existing algorithm.
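
A toy illustration of the decoupled start/join reasoning (our own simplification, not the paper's algorithm): statements are identified as (thread, index) pairs, and a child thread may be joined by a thread other than its parent, which is exactly the assumption the new algorithm drops.

```python
# Each thread records who start()ed it and who join()ed it (or None).
THREADS = {
    "main":   {"start": None,        "join": None},
    "child":  {"start": ("main", 2), "join": ("waiter", 1)},  # joined by a sibling!
    "waiter": {"start": ("main", 0), "join": None},
}

def happens_before(a, b):
    (ta, ia), (tb, ib) = a, b
    if ta == tb:
        return ia < ib
    s = THREADS[tb]["start"]          # edge into b's thread via start()
    if s and ((s[0] == ta and ia <= s[1]) or happens_before(a, s)):
        return True
    j = THREADS[ta]["join"]           # edge out of a's thread via join()
    if j and ((j[0] == tb and j[1] <= ib) or happens_before(j, b)):
        return True
    return False

def may_happen_in_parallel(a, b):
    return not happens_before(a, b) and not happens_before(b, a)

print(may_happen_in_parallel(("main", 4), ("child", 1)))    # True
print(may_happen_in_parallel(("child", 1), ("waiter", 2)))  # False: joined at waiter stmt 1
```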

    • Word Sense Disambiguation Based on Dependency Fitness with Automatic Knowledge Acquisition

      2013, 24(10):2300-2311. DOI: 10.3724/SP.J.1001.2013.04373

      Abstract: A word sense disambiguation (WSD) method based on dependency fitness is proposed to address the knowledge acquisition bottleneck in the development of WSD techniques. The method achieves automatic knowledge acquisition by taking full advantage of dependency parsing. First, a large-scale corpus is parsed to obtain dependency cells, whose statistical information is used to build a dependency knowledge base (DKB). Then, each ambiguous sentence is parsed to obtain the dependency constraint set (DCS) of its ambiguous words, and for each sense of an ambiguous word, sense representative words (SRW) are obtained through WordNet. Finally, based on the DKB, the dependency fitness of each sense's SRW on the DCS is computed to select the correct sense. Evaluation is performed on the coarse-grained English all-words task dataset of SemEval 2007. Compared with unsupervised and knowledge-based methods that do not use any sense-annotated corpus, the proposed method yields state-of-the-art performance with an F1-measure of 74.53%.
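
A sketch of the sense-selection step under a toy hand-built knowledge base (the paper builds its DKB from a large parsed corpus and draws representative words from WordNet):

```python
from collections import Counter

# DKB: counts of (head_lemma, relation, dependent_lemma) dependency cells.
DKB = Counter({
    ("eat",   "dobj", "apple"):  120,
    ("found", "dobj", "apple"):    3,   # the company sense is rarely "eaten"
    ("eat",   "dobj", "company"):  1,
    ("found", "dobj", "company"): 95,
})

def dependency_fitness(representative_words, constraints):
    """Sum DKB support for a sense's representative words in the slots
    imposed by the ambiguous sentence's dependency constraints."""
    return sum(DKB[(head, rel, w)]
               for head, rel in constraints
               for w in representative_words)

# "She ate an apple": the ambiguous word fills the dobj slot of "eat".
constraints = [("eat", "dobj")]
senses = {"fruit":   ["apple", "fruit"],
          "company": ["company", "firm"]}
best = max(senses, key=lambda s: dependency_fitness(senses[s], constraints))
print(best)  # fruit
```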

    • Support Vector Regression for Large Domain Adaptation

      2013, 24(10):2312-2326. DOI: 10.3724/SP.J.1001.2013.04375

      Abstract: Incomplete data collection in regression analysis leads to low prediction performance, which raises the issue of domain adaptation. It is well known that support vector regression (SVR) is equivalent to a center-constrained minimum enclosing ball (CC-MEB) problem. To address how knowledge can be transferred effectively between two fields, new theorems reveal that the difference between two probability distributions from two similar domains depends only on the centers of the two domains' minimum enclosing balls. Based on these results, a fast adaptive core vector regression (A-CVR) algorithm is proposed for large-scale domain adaptation. The algorithm uses the center of the source domain's CC-MEB to calibrate the center of the target domain's CC-MEB, improving regression performance in the target domain. Experimental results show that the proposed domain adaptation algorithm can make up for a lack of target data and greatly improve regression performance in the target domain.
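
A minimal sketch of the center-calibration idea, with the ball centers crudely approximated by sample means (illustrative only; A-CVR works with the actual CC-MEB centers and a core-vector approximation):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
delta = np.array([0.8, -0.3])                 # unknown shift between domains
g = lambda X: np.sin(X[:, 0]) + X[:, 1]       # shared underlying relation

X_src = rng.normal(0, 1, (400, 2));        y_src = g(X_src)        # abundant
X_tgt = rng.normal(0, 1, (15, 2)) + delta; y_tgt = g(X_tgt - delta)  # scarce, shifted

# Calibration: estimate each domain's ball center (here, crudely, the sample
# mean) and translate the scarce target data so the two centers coincide.
shift = X_src.mean(axis=0) - X_tgt.mean(axis=0)        # ~ -delta
model = SVR(C=10.0).fit(np.vstack([X_src, X_tgt + shift]),
                        np.concatenate([y_src, y_tgt]))

x_query = np.array([[1.0, 0.5]])              # query posed in the target domain
print(model.predict(x_query + shift))         # calibrate the query the same way
```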

    • Decomposed Planning Guided by Landmark Counting Heuristic

      2013, 24(10):2327-2339. DOI: 10.3724/SP.J.1001.2013.04383

      Abstract: Landmarks capture features of the solution space of planning tasks precisely. In this paper, a decomposed planning method guided by landmark information is proposed. The method executes an enforced hill-climbing procedure guided by the landmark-counting heuristic towards the goal, searching for a plan while achieving the landmarks one after another. Each decrease in the landmark-counting heuristic triggers a task decomposition: whenever the search encounters a state with a lower estimate, a hill-climbing fragment is extracted. This search-and-extract procedure is repeated until the landmark-counting estimate decreases to zero, and all the extracted fragments are then connected into the final plan. Experimental results show that the decomposed planning method guided by the landmark-counting heuristic uses landmark information in a more flexible way, usually cutting down the search space dramatically and finding a plan much faster.
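
A compact sketch of enforced hill-climbing under a landmark-counting heuristic, on a hypothetical four-action task (the actions and landmarks below are invented for illustration; real planners extract landmarks from the task automatically):

```python
from collections import deque

ACTIONS = {                        # name: (preconditions, add effects)
    "get_key":   (set(),      {"key"}),
    "open_door": ({"key"},    {"door"}),
    "enter":     ({"door"},   {"inside"}),
    "grab_gem":  ({"inside"}, {"gem"}),
}
LANDMARKS = {"key", "door", "inside", "gem"}   # facts every plan must achieve

def h(state):                                  # landmark-counting heuristic
    return len(LANDMARKS - state)

def successors(state):
    for name, (pre, add) in ACTIONS.items():
        if pre <= state:
            yield name, frozenset(state | add)

def enforced_hill_climbing(state):
    plan = []
    while h(state) > 0:
        # breadth-first search for any state with a strictly better h value
        frontier, seen = deque([(state, [])]), {state}
        while frontier:
            s, frag = frontier.popleft()
            if h(s) < h(state):
                plan += frag                   # extract one fragment
                state = s
                break
            for a, s2 in successors(s):
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, frag + [a]))
        else:
            raise RuntimeError("EHC dead end")
    return plan

print(enforced_hill_climbing(frozenset()))
# ['get_key', 'open_door', 'enter', 'grab_gem']
```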

    • Theoretical Analysis on Structured Learning with Noisy Data and its Applications

      2013, 24(10):2340-2353. DOI: 10.3724/SP.J.1001.2013.04393

      Abstract: The performance of supervised machine learning can be badly affected by noise in labeled data, as indicated by existing, well-studied theories on learning with noisy data. However, these theories only cover two-class classification problems. This paper studies the relation between noisy examples and their effects on structured learning. First, the paper finds that label noise is amplified in structured learning problems, leading to a higher noise rate in the training procedure than in the labeled data itself. Existing theories do not consider this noise amplification in structured learning and thus underestimate the complexity of the learning problems. This paper provides a new theory on learning structured predictions from noisy data. Based on the theory, the concept of the "effective size of training data" is proposed to describe the quality of noisy training data sets in practice. The paper also analyzes the situations in which structured learning models fall back to lower-order ones in applications. Experimental results confirm the correctness of these theories as well as their practical value in cross-lingual projection and co-training.
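
A back-of-the-envelope illustration of the noise-amplification effect (our own toy calculation, not the paper's formulas): if each of the L labels inside a structure is wrong independently with probability p, the whole structure is clean only with probability (1-p)^L, so the structure-level noise rate grows quickly with L and the effective amount of clean training data shrinks.

```python
def structure_noise_rate(p: float, L: int) -> float:
    # probability that a length-L structure contains at least one wrong label
    return 1 - (1 - p) ** L

def effective_size(n_examples: int, p: float, L: int) -> float:
    # expected number of entirely clean structures in the training set
    return n_examples * (1 - p) ** L

for L in (1, 5, 10, 20):
    print(f"L={L:2d}  structure noise={structure_noise_rate(0.05, L):.1%}  "
          f"effective size of 10k examples={effective_size(10_000, 0.05, L):,.0f}")
# L= 1: 5.0% noise, ~9,500 clean; L=20: 64.2% noise, only ~3,585 clean
```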

    • Convergence Proof for Central Force Optimization Algorithm and Application in Neural Networks

      2013, 24(10):2354-2365. DOI: 10.3724/SP.J.1001.2013.04391

      Abstract: Central force optimization (CFO) is a deterministic multi-dimensional search metaheuristic based on the metaphor of gravitational kinematics: it explores a decision space by "flying" a group of probes whose trajectories are governed by Newton's laws. Based on an in-depth study of probe movement under the equations of gravitational motion, this paper uses celestial mechanics to deduce the movement formulas, establishes the relationship between the CFO algorithm and celestial mechanics, and analyzes CFO's convergence through the mathematical analysis of celestial mechanics. It concludes that regardless of how the initial probes are distributed, all probes converge deterministically toward the optimal solution found in the CFO space. To test CFO's effectiveness, a hybrid CFO-BP algorithm is proposed for the joint optimization of the structure and parameters (weights and biases) of a three-layer feedforward artificial neural network (ANN). Experimental results show that the proposed hybrid CFO-BP algorithm outperforms other algorithms in convergence speed and accuracy.
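
A sketch of the deterministic CFO probe update in the standard Formato-style formulation (parameter values here are illustrative): each probe accelerates toward probes with better fitness, and because the current best probe feels no pull, the best fitness found never decreases.

```python
import numpy as np

def cfo_step(R, fitness, lo, hi, G=0.5, alpha=1.0, beta=2.0, dt=1.0):
    M = fitness(R)                            # probe "masses" = fitness values
    A = np.zeros_like(R)
    for j in range(len(R)):
        for k in range(len(R)):
            if k == j or M[k] <= M[j]:        # unit step: only better probes pull
                continue
            d = R[k] - R[j]
            A[j] += G * (M[k] - M[j]) ** alpha * d / np.linalg.norm(d) ** beta
    # deterministic kinematic update, clamped back into the box
    return np.clip(R + 0.5 * A * dt**2, lo, hi)

# Maximize f(x, y) = -((x-1)^2 + (y-1)^2) over [-5, 5]^2.
fitness = lambda R: -np.sum((R - 1.0) ** 2, axis=1)
R = np.linspace(-5.0, 5.0, 8)[:, None].repeat(2, axis=1)  # probes on the diagonal
for _ in range(50):
    R = cfo_step(R, fitness, -5.0, 5.0)
print(R[np.argmax(fitness(R))])               # best probe, near the optimum (1, 1)
```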

    • Web News Oriented Event Multi-Elements Retrieval

      2013, 24(10):2366-2378. DOI: 10.3724/SP.J.1001.2013.04382

      Abstract: To meet the demand for effectively acquiring event information, a method of Web news-oriented event multi-element retrieval is studied through analyzing the characteristics of Web news and the event multi-element retrieval process. First, a model of Web news-oriented event multi-element retrieval is proposed. Second, event multi-element query terms are formally defined using BNF (Backus-Naur form). Finally, a method of computing the relevance between query terms and a document is proposed that incorporates the importance of the event action element, the Web news title, and the distance between event terms and constraint terms. Sixteen event query topics are created for the experiments. With the proposed method, the paper evaluates P@n on the Baidu search engine, obtaining an average P@10 of 0.85 and an average P@20 of 0.83; it also evaluates F-measure on a manually labeled corpus with the same method, obtaining an average F-measure of 0.74. The results show that the proposed method offers effective performance.
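
A hypothetical relevance scorer in the spirit of the method (the paper's actual weighting formula is not reproduced here): combine a title hit for the event action term with proximity between the action and the constraint terms in the body text.

```python
def relevance(doc_title, doc_body, action, constraints,
              w_title=2.0, w_prox=1.0):
    tokens = doc_body.lower().split()
    # the event action appearing in the title is a strong signal
    score = w_title if action in doc_title.lower() else 0.0
    if action in tokens:
        a = tokens.index(action)
        # constraint terms score higher the closer they sit to the action
        for c in constraints:
            if c in tokens:
                score += w_prox / (1 + abs(tokens.index(c) - a))
    return score

title = "Earthquake strikes coastal city"
body = "a strong earthquake struck the coastal city early on tuesday"
print(relevance(title, body, "earthquake", ["city", "tuesday"]))  # 2.325
```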

    • Cage Generation Based on Visual Hull

      2013, 24(10):2379-2390. DOI: 10.3724/SP.J.1001.2013.04413

      Abstract: Cages, adopted as proxy geometry, play an important role in processing high-precision, complex models in computer animation and geometric modeling. Current cage generation algorithms depend heavily on the representation and complexity of the shape, and thus cannot be applied universally to various models. To decouple cage generation from the representation and complexity of the shape, this paper proposes a cage generation algorithm based on the visual hull from computer vision. The algorithm generates a tight, coarse cage by inversely simulating the visual hull construction procedure. The results demonstrate the usability and efficiency of the proposed algorithm.

    • Coarse-to-Fine Hair Segmentation Method

      2013, 24(10):2391-2404. DOI: 10.3724/SP.J.1001.2013.04423

      Abstract: Segmenting hair regions from human images facilitates many tasks such as hair analysis and hair style trend forecasting. However, hair segmentation is quite challenging due to the large within-class pattern diversity and between-class confusion resulting from complex illumination and similar appearance. To address these problems, this paper proposes a novel coarse-to-fine hair segmentation method. First, the recently published "active segmentation with fixation" (ASF) method is used to coarsely define a candidate region with high recall (but possibly low precision) of hair pixels, excluding a considerable part of the background that is easily confused with hair. Then the graph cuts (GC) method is applied to the candidate region to perform more precise segmentation by incorporating image-specific hair information. Specifically, a Bayesian method is employed to select reliable hair and background regions (seeds) among those over-segmented by mean shift. An SVM classifier is then learned online from these seeds and used to predict hair/background likelihood probabilities, which initialize the GC algorithm. Experimental results demonstrate that the approach outperforms existing hair segmentation methods. To validate its generality, the paper extends the method and achieves good results on public databases of the horse, car and aeroplane classes.

    • Image Semantic Annotation Method for Weakly Labeled Dataset

      2013, 24(10):2405-2418. DOI: 10.3724/SP.J.1001.2013.04424

      Abstract: Automatic semantic annotation, which annotates images with semantic labels, has received much research interest. Although it has been studied for years, image annotation is still far from practical. The effectiveness of traditional image annotation techniques relies heavily on the availability of a sufficiently large set of correct, complete and balanced labeled samples, which typically come from users in an interactive manual process. In real-world environments, however, image labels are often incomplete, noisy and imbalanced. This paper investigates the usefulness of weakly labeled information and proposes an image annotation method for weakly labeled datasets. First, missing labels are automatically filled in by a transductive method that incorporates label correlation and semantic sparsity, along with the consistency of visual and semantic similarity. Then an approximately balanced semantic neighborhood is constructed: a distance metric learning method for large-margin nearest neighbors embedded in multiple labels is applied, so that the neighbors retrieved under this metric appear in the same semantic subspace, and a locally semantically consistent neighborhood is obtained by local nonnegative sparse coding. Meanwhile, an iterative denoising method for label inference is proposed to simultaneously handle noise and annotate images under the guidance of semantic nearest neighbors and contextual information. Experimental results demonstrate the effectiveness and capability of the proposed method.

    • Real-Time Approach for Dynamic Liquid Simulation Using Semi-Lagrangian

      2013, 24(10):2419-2431. DOI: 10.3724/SP.J.1001.2013.04436

      Abstract: In recent years, the simulation of natural phenomena has attracted substantial research attention in the computer graphics and virtual reality communities, with the main goal of obtaining realistic natural phenomena efficiently. This paper summarizes the main research achievements in liquid simulation and proposes a real-time liquid simulation method based on the semi-Lagrangian scheme, followed by a surface reconstruction method applied to the simulated results. In this approach, the Navier-Stokes equations are first discretized in both the spatial and temporal dimensions. Second, by solving the constructed Poisson equation, the numerical solution for each time step is obtained. Third, particle movements are driven by the solved velocity field in order to simulate the dynamics of water. By applying surface tracking and the Marching Cubes algorithm, the water surface is finally extracted. Experimental results show that the simulation speed of this method is high enough for real-time use, and satisfactory visual effects are obtained, making it suitable for computer games, movie making and virtual medical simulation.
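
The core semi-Lagrangian advection step on a 2D grid, as a standard textbook sketch of the named technique (not the authors' code): trace each grid point backward through the velocity field and bilinearly interpolate the advected quantity there. Its unconditional stability is what permits the large, real-time-friendly time steps.

```python
import numpy as np

def advect(q, u, v, dt, h=1.0):
    """q: scalar field (ny, nx); u, v: velocity components on the same grid."""
    ny, nx = q.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # backtrace: where did the fluid now at (i, j) come from one step ago?
    x = np.clip(i - dt * u / h, 0, nx - 1.001)
    y = np.clip(j - dt * v / h, 0, ny - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    sx, sy = x - x0, y - y0
    # bilinear interpolation of q at the backtraced positions
    return ((1 - sy) * ((1 - sx) * q[y0, x0]     + sx * q[y0, x0 + 1]) +
            sy       * ((1 - sx) * q[y0 + 1, x0] + sx * q[y0 + 1, x0 + 1]))

q = np.zeros((64, 64)); q[28:36, 8:16] = 1.0       # a square blob of "dye"
u = np.full_like(q, 1.0); v = np.zeros_like(q)     # uniform rightward flow
for _ in range(40):
    q = advect(q, u, v, dt=0.5)
print(np.unravel_index(q.argmax(), q.shape))       # blob has drifted right
```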

    • Data Structure Directed Thread Partitioning Method and Execution Model

      2013, 24(10):2432-2459. DOI: 10.3724/SP.J.1001.2013.04353

      Abstract: While parallelizing a program, to ensure correct results, a parallelizing compiler can only adopt a conservative policy: if it cannot accurately determine whether two pieces of code will conflict when executed in parallel, it does not permit them to execute in parallel. Although this approach ensures correctness, it also limits the amount of parallelism that can be exploited. Speculative multithreading (SpMT) approaches have gained wide popularity because they extract more parallelism by allowing two pieces of potentially conflicting code to execute in parallel, ensuring correctness by recovering from conflicts. However, the divide-along-the-control-flow-path method employed by traditional SpMT approaches is unsuitable for programs in which operations on different data structures are interleaved along the control flow path: if such a program is linearly divided into threads along the control flow path, operations on the same data structure are divided among different threads, which are then prone to conflict when executed concurrently. To effectively parallelize these programs, this paper proposes a data-structure-centric approach, in which the objects of a program are organized into groups and the operations on the objects within the same group are dispatched to the same thread for execution, thereby reducing the likelihood of conflicts on the same data structure.
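
A minimal sketch of data-structure-centric dispatch (illustrative; the paper realizes this inside a speculative-multithreading compiler and runtime, not with Python queues): operations on objects in the same group always go to the same worker, so two workers never operate on the same data structure concurrently.

```python
import threading, queue

N_WORKERS = 4
queues = [queue.Queue() for _ in range(N_WORKERS)]

def dispatch(obj, op):
    # group id -> worker: every operation on one object goes to one thread
    queues[id(obj) % N_WORKERS].put((obj, op))

def worker(q):
    while True:
        item = q.get()
        if item is None:                 # sentinel: shut down
            return
        obj, op = item
        op(obj)                          # this thread owns the object's group

threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads: t.start()

counters = [{"n": 0} for _ in range(8)]
for _ in range(1000):
    for c in counters:
        dispatch(c, lambda o: o.update(n=o["n"] + 1))
for q in queues: q.put(None)
for t in threads: t.join()
print([c["n"] for c in counters])        # each is exactly 1000: no lost updates
```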

    • Peak Power Management Method for Heterogeneous Parallel System

      2013, 24(10):2460-2472. DOI: 10.3724/SP.J.1001.2013.04357

      Abstract: High power consumption has become one of the top concerns in high-performance computing. Many recent studies focus on optimizing system performance within a given power budget, but most existing solutions target homogeneous systems and do not consider the differences in power consumption and processing speed between heterogeneous processors, so they cannot be adapted effectively to accelerator-based heterogeneous parallel systems. This paper first summarizes the execution model of modern accelerator-based parallel systems, and then introduces a power control framework consisting of two power management hierarchies: a system-level power controller and a heterogeneous processing engine power controller, from top to bottom. In the lower-level controller, targeting OpenMP-like parallel loops, the paper theoretically analyzes the conditions for maximum performance given a power budget for heterogeneous processors, and on this basis provides a power-constrained parallel loop scheduling algorithm that coordinates parallel loop partitioning and voltage/frequency scaling across heterogeneous processors to achieve optimal performance under a system power budget. In the upper-level controller, the paper establishes evaluation metrics for heterogeneous processing engines to allocate the power budget, in order to maintain fairness between concurrent applications and improve whole-system efficiency. Finally, the proposed method is evaluated on a typical CPU-GPU system.
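
A toy version of the lower-level controller's decision (the speeds and power figures below are made-up numbers): for each frequency pair within the power budget, split the loop so both engines finish together, and keep the fastest feasible configuration.

```python
from itertools import product

N = 10_000                                   # loop iterations to distribute
cpu_levels = [(1.0, 35.0), (2.0, 65.0)]      # (iterations/ms, watts) per DVFS level
gpu_levels = [(4.0, 90.0), (8.0, 160.0)]
POWER_BUDGET = 220.0                         # watts for both engines combined

best = None
for (sc, pc), (sg, pg) in product(cpu_levels, gpu_levels):
    if pc + pg > POWER_BUDGET:               # infeasible frequency pair
        continue
    # split so both engines finish together: n_cpu/sc == (N - n_cpu)/sg
    n_cpu = round(N * sc / (sc + sg))
    t = max(n_cpu / sc, (N - n_cpu) / sg)    # makespan for this configuration
    if best is None or t < best[0]:
        best = (t, n_cpu, sc, sg)

t, n_cpu, sc, sg = best
print(f"CPU@{sc} it/ms gets {n_cpu} iters, GPU@{sg} gets {N - n_cpu}; "
      f"finish in {t:.1f} ms under {POWER_BUDGET} W")
```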

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: CNY 70