• Volume 30, Issue 4, 2019 Table of Contents
    • Special Issue's Articles
    • Visual Scene Description and Its Performance Evaluation

      2019, 30(4):867-883. DOI: 10.13328/j.cnki.jos.005665


      Abstract:As a cross-domain research topic related to computer vision, multimedia, artificial intelligence, and natural language processing, the task of visual scene description is to automatically produce one or more sentences that describe the content of a visual scene from an image or a video snippet. The richness of the content in a visual scene and the diversity of natural language expression make visual scene description a challenging task. This paper reviews the generation methods and performance evaluation of recently developed visual scene description approaches. Specifically, the research object and main tasks of visual scene description are first defined; the relationships between visual scene description and multi-modal retrieval, cross-modal learning, scene classification, visual relationship detection, and other related technologies are then discussed. Next, the main methods and research progress of visual scene description are summarized in three categories, and the growing set of benchmark datasets is discussed. In addition, some widely used evaluation metrics and the corresponding challenges for visual scene description are discussed. Finally, some potential future applications are suggested.

    • Cross-media Deep Fine-grained Correlation Learning

      2019, 30(4):884-895. DOI: 10.13328/j.cnki.jos.005664


      Abstract:With the rapid development of the Internet and multimedia technology, data on the Internet has expanded from text alone to image, video, audio, 3D model, and other media types, which makes cross-media retrieval a new trend of information retrieval. However, the "heterogeneity gap" leads to inconsistent representations of different media types, and it is hard to measure the similarity between data of any two kinds of media, which makes it quite challenging to realize cross-media retrieval across multiple media types. With recent advances in deep learning, it becomes possible to break the boundaries between different media types with the strong learning ability of deep neural networks. However, most existing deep learning based methods mainly focus on the pairwise correlation between two media types, such as image and text, and are difficult to extend to the multi-media scenario. To address this problem, the Deep Fine-grained Correlation Learning (DFCL) approach is proposed, which supports cross-media retrieval with up to five media types (image, video, text, audio, and 3D model). First, a cross-media recurrent neural network is proposed to jointly model the fine-grained information of up to five media types, which can fully exploit the internal details and context information of different media types. Second, a cross-media joint correlation loss is proposed, which combines distribution alignment and semantic alignment to exploit both intra-media and inter-media fine-grained correlation; it can further enhance the semantic discrimination capability through semantic category information, aiming to effectively improve the accuracy of cross-media retrieval. Extensive experiments are conducted on 2 cross-media datasets, namely the PKU XMedia and PKU XMediaNet datasets, which contain up to five media types. The experimental results verify the effectiveness of the proposed approach.

    • Automatic Makeup with Region Sensitive Generative Adversarial Networks

      2019, 30(4):896-913. DOI: 10.13328/j.cnki.jos.005666


      Abstract:Automatic makeup refers to the editing and synthesis of face makeup through computer algorithms. It belongs to the field of face image analysis and plays an important role in interactive entertainment applications, image and video editing, and face recognition. However, as a face editing problem, it is still difficult to ensure that the editing result is natural and satisfies the editing requirements. Automatic makeup still faces several difficulties: precisely controlling the editing area is hard, the image consistency before and after editing is poor, and the image quality is insufficient. In response to these difficulties, this study proposes a mask-controlled automatic makeup generative adversarial network. Through a masking method, this network can focus its editing on the makeup area, restrict the areas that do not require editing, and preserve key information. At the same time, it can separately edit the eye shadow, lips, cheeks, and other local areas of the face to apply makeup to specific regions and enrich the makeup functionality. In addition, this network can be trained jointly on multiple datasets: besides the makeup dataset, it can also use other face datasets as auxiliary data to enhance the model's generalization ability and obtain a more natural makeup result. Finally, based on a variety of evaluation methods, comprehensive qualitative and quantitative experiments are carried out, the results are compared with other methods, and the performance of the proposed method is comprehensively evaluated.

    • Multi-object Classification of Remote Sensing Image Based on Affine-invariant Supervised Discrete Hashing

      2019, 30(4):914-926. DOI: 10.13328/j.cnki.jos.005661


      Abstract:The multi-object classification of remote sensing images has been a challenging task. Firstly, due to the complexity of the data and the high storage requirements, traditional classification methods find it difficult to achieve both classification accuracy and speed. Secondly, because of the affine transformation introduced by the remote sensing imaging process, real-time object interpretation is difficult to realize. To solve these problems, a multi-object classification method for remote sensing images is proposed based on affine-invariant discrete hashing (AIDH). This method uses supervised discrete hashing, with its advantages of low storage and high efficiency, combined with an affine-invariant factor to construct affine-invariant discrete hashing. By constraining affine-transformed samples with the same semantic information to a similar binary code space, the method improves classification precision. Experiments show that on the NWPU VHR-10 and RSDO datasets, the proposed method is more efficient than classical hashing and classification methods while its accuracy remains guaranteed.
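
      An illustrative sketch of hash-code based classification only (not the paper's AIDH construction): once compact binary codes and their labels are available, a query can be assigned the label of its nearest training code under Hamming distance, which is what gives hashing its low storage and high efficiency. The function name and array shapes below are assumptions.

          import numpy as np

          def hamming_classify(query_code, train_codes, train_labels):
              """query_code: (B,) 0/1 array; train_codes: (N, B); train_labels: (N,)."""
              dists = (train_codes != query_code).sum(axis=1)   # Hamming distance to every sample
              return train_labels[np.argmin(dists)]             # label of the nearest binary code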

    • Improved Deep Correlation Filters via Conditional Random Field

      2019, 30(4):927-940. DOI: 10.13328/j.cnki.jos.005662


      Abstract:Object tracking is one of the most important tasks in numerous applications of computer vision. It is challenging because target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter, and occlusion. Therefore, it is important to build a robust object appearance model for visual tracking. Discriminative correlation filters (DCF) with deep convolutional features have achieved favorable performance in recent tracking benchmarks. The object in each frame can be detected from the corresponding response map, which means the desired response map should reach its highest value at the location of the object. In this scenario, considering the continuous nature of the response values, the problem can naturally be formulated as continuous conditional random field (CRF) learning. Moreover, the integral of the partition function can be calculated in closed form, so the log-likelihood maximization can be solved exactly. Therefore, a conditional random field based robust object tracking algorithm is proposed here to improve deep correlation filters, and an end-to-end deep convolutional neural network is designed for estimating response maps from input images by integrating the unary and pairwise potentials of the continuous CRF into a tracking model. By combining the initial response map and the similarity matrix, which are obtained through the unary and pairwise potentials respectively, a smoother and more accurate response map can be achieved, which improves tracking robustness. The proposed approach is evaluated against 9 state-of-the-art trackers on the OTB-2013 and OTB-2015 benchmarks. The extensive experiments demonstrate that the proposed algorithm is 3% and 3.5% higher than the baseline methods in the success plot, and 6.1% and 4.8% higher than the baselines in the precision plot, on the OTB-2013 and OTB-2015 benchmarks respectively.
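
      A minimal sketch of the response-map refinement idea described above (not the paper's end-to-end network): an initial unary response is blended with a version smoothed by a pairwise similarity matrix built from per-location features. The feature shapes, the Gaussian similarity, and the blending weight alpha are assumptions for illustration.

          import numpy as np

          def refine_response(response, features, alpha=0.5, sigma=1.0):
              """response: (H*W,) initial response; features: (H*W, D) per-location features."""
              d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
              S = np.exp(-d2 / (2 * sigma ** 2))                  # pairwise similarity matrix
              S /= S.sum(axis=1, keepdims=True)                   # row-normalize into a smoother
              return (1 - alpha) * response + alpha * (S @ response)   # unary + pairwise blend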

    • Deep Residual Network in Wavelet Domain for Image Super-resolution

      2019, 30(4):941-953. DOI: 10.13328/j.cnki.jos.005663


      Abstract:Single image super resolution (SISR) refers to the reconstruction of a high-resolution image from a single low-resolution image. Traditional neural network methods typically perform super-resolution reconstruction in the spatial domain of an image, but these methods often ignore important details in the reconstruction process. In view of the fact that the wavelet transform can separate the "rough" and "detail" features of image content, this study proposes a wavelet-based deep residual network (DRWSR). Different from traditional convolutional neural networks, which derive the high-resolution image (HR) directly, this method uses a multi-stage learning strategy to first infer the wavelet coefficients corresponding to the high-resolution image and then reconstruct the super-resolution image (SR). In order to obtain more information, the method uses a flexible and scalable deep neural network with a residual-nested-residual structure. In addition, the proposed neural network model is optimized by combining loss functions in the image space and the wavelet domain. The proposed method is evaluated on Set5, Set14, BSD100, Urban100, and other datasets. The experimental results show that the visual quality and peak signal-to-noise ratio (PSNR) of the proposed method are better than those of related image super-resolution methods.
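
      A small sketch of the wavelet separation the method builds on, using the PyWavelets library on a stand-in array: a 2-D DWT splits an image into a "rough" approximation band and three detail bands, and the inverse transform reconstructs the image from those coefficients. A DRWSR-style network would instead predict the high-resolution coefficients from the low-resolution input; the array below is only a placeholder.

          import numpy as np
          import pywt

          img = np.random.rand(64, 64)                    # stand-in for a grayscale image
          cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')       # approximation + detail subbands
          rec = pywt.idwt2((cA, (cH, cV, cD)), 'haar')    # reconstruction from the coefficients
          assert np.allclose(rec, img)                    # the Haar DWT is exactly invertible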

    • Instance Segmentation with Separable Convolutions and Multi-level Features

      2019, 30(4):954-961. DOI: 10.13328/j.cnki.jos.005667


      Abstract:Instance segmentation is a challenging task, for it requires not only the bounding box of each instance but also its precise segmentation mask. The recently proposed fully convolutional instance-aware semantic segmentation (FCIS) does a good job of combining detection and segmentation, but FCIS cannot make use of low-level features, which have proved useful in both detection and segmentation. Based on FCIS, a new model is proposed which refines the instance masks with features of all levels. In the proposed method, large-kernel separable convolutions are employed in the detection branch to obtain more accurate bounding boxes. Simultaneously, a segmentation module containing a boundary refinement operation is designed to get more precise masks. Moreover, the low-level, medium-level, and high-level features of ResNet-101 are combined into new features of four different levels, each of which is used to generate a mask for an instance. These masks are summed and refined to produce the final, most accurate one. With these three improvements, the proposed approach significantly outperforms the FCIS baseline, providing a 4.9% increase in mAPr@0.5 and a 5.8% increase in mAPr@0.7 on PASCAL VOC.
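
      An illustrative PyTorch sketch of a large-kernel separable convolution block of the kind referred to above: a k x 1 convolution followed by a 1 x k convolution (plus the symmetric branch) approximates a dense k x k kernel at much lower cost. The channel counts and kernel size are assumptions, not the paper's configuration.

          import torch
          import torch.nn as nn

          class LargeKernelSeparable(nn.Module):
              def __init__(self, in_ch, out_ch, k=15):
                  super().__init__()
                  p = k // 2
                  self.left = nn.Sequential(nn.Conv2d(in_ch, out_ch, (k, 1), padding=(p, 0)),
                                            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, p)))
                  self.right = nn.Sequential(nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, p)),
                                             nn.Conv2d(out_ch, out_ch, (k, 1), padding=(p, 0)))

              def forward(self, x):
                  return self.left(x) + self.right(x)      # sum of the two separable branches

          y = LargeKernelSeparable(256, 21)(torch.randn(1, 256, 32, 32))   # -> (1, 21, 32, 32)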

    • Review Articles
    • Code Clone Detection: A Literature Review

      2019, 30(4):962-980. DOI: 10.13328/j.cnki.jos.005711


      Abstract:Code clone refers to two or more duplicate or similar code fragments existing in a software system. Code cloning is a common phenomenon during software development that can facilitate development and has positive impacts on a software system. However, research shows that code clones can also harm the development and maintenance of a software system, including but not limited to decreased stability, redundancy in the source code repository, and propagation of software defects. Research on code clones is one of the most active areas in software engineering, and various detection techniques have been proposed to automatically detect code clones in software systems and thereby help improve software quality. There are many achievements in this area, and these techniques can be categorized into text-based, lexis-based, syntax-based, and semantic-based approaches. Current techniques have obtained effective results in text-based clone detection, but challenges remain in detecting other types of code clones. More advanced and unified theoretical and technical guidelines are needed to improve code clone detection techniques. Therefore, this paper presents a literature review of code clone detection, especially from the perspective of source code representation. In summary, the contributions of this study are: (1) current code clone detection techniques are summarized and classified from the perspective of code representation; (2) the model validation and performance measures used in model evaluation are summarized; and (3) the key issues of code clone research are summarized from three aspects: scientific, practical, and technical difficulties. Possible solutions to these problems and future directions of the research are elaborated, focusing on data annotation, characterization methods, model construction, and engineering practice.

    • Parallelizing Compilation Framework for Heterogeneous Many-core Processors

      2019, 30(4):981-1001. DOI: 10.13328/j.cnki.jos.005370


      Abstract:Heterogeneous many-core processors have become an important trend in high-performance computing, but their sophisticated architecture significantly complicates programming. To solve this problem, this study proposes a parallelizing compilation framework for heterogeneous many-core processors based on the open source Open64 compiler, automating the transformation from a sequential program to heterogeneous parallel code. The framework mainly comprises a work scheduling module that identifies parallelizable regions and achieves multi-dimensional parallelization recognition for nested loops; a data mapping module that maps data between the main memory and the SPM and realizes array boundary analysis and pointer range analysis; a transmission optimizing module that implements optimizations by merging, hoisting, and packaging data transmissions and by transposing arrays; and a performance estimation module that adopts a dynamic-static hybrid method to analyze benefits based on a cost model for the SW26010. The compilation framework is implemented on top of Sunway SW26010 processors, and experimental evaluations are conducted on numerous benchmarks. The experimental results show that the proposed framework can parallelize these applications and obtain a promising performance improvement on heterogeneous many-core platforms.

    • Feature Learning of Weight-distribution for Diagnosis of Alzheimer's Disease

      2019, 30(4):1002-1014. DOI: 10.13328/j.cnki.jos.005371


      Abstract:In the field of medical image analysis using machine learning, a key challenge is the lack of training samples. In order to address this problem, a weight-distribution based Lasso (least absolute shrinkage and selection operator) feature learning model is proposed and applied to the early diagnosis of Alzheimer's Disease (AD). Specifically, the proposed diagnosis method consists of two components: weight-distribution based Lasso feature selection (WDL) and a large margin distribution machine (LDM) for classification. Firstly, in order to capture the data distribution information among multimodal features, the WDL feature selection model is built, improving on the conventional Lasso model by adding a weight-distribution regularization term. Secondly, in order to achieve better generalization and classification accuracy while keeping complementary information among multimodal features, the LDM algorithm is used to train the classifier. To evaluate the effectiveness of the proposed learning model, 202 subjects with multimodal features from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database were employed. Experimental results on the ADNI database show that the method can recognize AD from Normal Controls (NC) with 97.5% accuracy, recognize Mild Cognitive Impairment (MCI) from NC with 83.1% accuracy, and recognize progressive MCI (pMCI) patients from stable MCI (sMCI) ones with 84.8% accuracy. These results demonstrate that the method can significantly improve the performance of early AD diagnosis and achieve feature ranking in terms of discrimination via the optimized weight vector.
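
      A sketch of the overall pipeline shape only, with conventional stand-ins (a plain Lasso instead of the WDL model and a linear SVM instead of the LDM classifier); the weight-distribution regularizer itself is not reproduced, and the alpha value and function name are assumptions.

          import numpy as np
          from sklearn.linear_model import Lasso
          from sklearn.svm import LinearSVC

          def select_and_classify(X_train, y_train, X_test, alpha=0.05):
              lasso = Lasso(alpha=alpha).fit(X_train, y_train)       # sparse weights rank features
              selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-8)  # keep non-zero-weight features
              clf = LinearSVC().fit(X_train[:, selected], y_train)   # stand-in for the LDM classifier
              return clf.predict(X_test[:, selected]), selected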

    • Joint Chinese Event Extraction Based on Multi-task Learning

      2019, 30(4):1015-1030. DOI: 10.13328/j.cnki.jos.005380


      Abstract:Event extraction aims to extract interesting, structured information from unstructured text. Most Chinese event extraction methods use a pipeline model which first identifies the event trigger words and then identifies the event arguments. Such a model is prone to cascading errors, and the information contained in the downstream task cannot be fed back to the upstream task. In this study, event extraction is treated as a sequence labeling task, and a multi-task learning, CRF-enhanced Chinese event extraction model is proposed. Two extensions are made to the CRF based event extraction model: (1) a separate training strategy to solve the multi-label problem for an event argument in the joint model (i.e., when an event scope includes multiple events, the same entity tends to play different roles in different events); (2) considering that the event arguments of sub-events under the same class are highly correlated, a multi-task learning approach is proposed to jointly learn sub-events, which can alleviate corpus sparsity to some extent. The experimental results on the ACE 2005 Chinese corpus show the effectiveness of the proposed method.

    • Sales Forecasting Based on Multi-dimensional Grey Model and Neural Network

      2019, 30(4):1031-1044. DOI: 10.13328/j.cnki.jos.005510


      Abstract:Accurate sales forecasting is important to fashion enterprises selling products such as apparel and accessories, handbags, and wallets. However, it is a challenging problem since consumer demand can be influenced by many factors. In this paper, sales are forecasted based on an improved multidimensional grey model (IGM(1,N)) and an artificial neural network (ANN), where the multi-dimensional grey model is used to model the sales data while the neural network is used to correct the errors. The advantage of the proposed hybrid model is that it considers the relation between the sales and the factors that influence customer demand. The performance of the proposed hybrid model is evaluated with sales data from Ali-TianMao, and the experimental results demonstrate that the proposed hybrid model is superior to existing sales forecasting models.
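
      A sketch of the hybrid pattern only, using a single-variable GM(1,1) grey model as a simplified stand-in for the paper's multidimensional IGM(1,N), with a small neural network correcting the in-sample residuals; the sales series and network size below are made up.

          import numpy as np
          from sklearn.neural_network import MLPRegressor

          def gm11_forecast(x0, horizon=1):
              x0 = np.asarray(x0, dtype=float)
              x1 = np.cumsum(x0)                                 # accumulated generating operation
              z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
              B = np.column_stack([-z1, np.ones_like(z1)])
              a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey model coefficients
              k = np.arange(len(x0) + horizon)
              x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
              return np.concatenate([[x0[0]], np.diff(x1_hat)])  # fitted values plus forecasts

          sales = np.array([102., 110., 118., 131., 142., 155., 168.])   # made-up sales series
          fit = gm11_forecast(sales, horizon=1)
          mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
          mlp.fit(np.arange(len(sales)).reshape(-1, 1), sales - fit[:len(sales)])
          forecast = fit[-1] + mlp.predict([[len(sales)]])[0]    # grey forecast + learned correction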

    • Community Detection Algorithm Based on Node Embedding Vector Representation

      2019, 30(4):1045-1061. DOI: 10.13328/j.cnki.jos.005387


      Abstract:Community detection is of great theoretical and practical importance in complex network research. Based on the principle of distributed word vectors, a community detection algorithm based on node embedding vectors (CDNEV) is proposed in this study. In order to construct distributed vectors for network nodes, a heuristic random walk model is put forward. The node sequences obtained by the heuristic random walk model are used as the context of the nodes, and the distributed vectors of the nodes are learned with the SkipGram model. Local central nodes are then selected as the initial centers of the K-Means clustering algorithm, all nodes in the network are clustered with K-Means based on their distributed vectors, and the community structure is obtained from the clustering result. Comprehensive experiments are conducted on the real complex networks and artificial networks used by other state-of-the-art algorithms, and typical community detection algorithms are selected for comparison. On real networks, the F1 value of the CDNEV algorithm is increased by 19% on average; on artificial networks, the F1 value is increased by 15%. The experimental results demonstrate that both the accuracy and the efficiency of the CDNEV algorithm outperform other state-of-the-art algorithms.
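
      An illustrative sketch of the embedding-then-clustering pipeline (not CDNEV itself): uniform random walks stand in for the heuristic walk, SkipGram (gensim Word2Vec with sg=1) learns the node vectors, and K-Means groups them into communities. The example graph, walk settings, and vector dimension are assumptions.

          import random
          import networkx as nx
          from gensim.models import Word2Vec
          from sklearn.cluster import KMeans

          def random_walks(G, num_walks=10, walk_len=20):
              walks = []
              for _ in range(num_walks):
                  for node in G.nodes():
                      walk = [node]
                      while len(walk) < walk_len:
                          nbrs = list(G.neighbors(walk[-1]))
                          if not nbrs:
                              break
                          walk.append(random.choice(nbrs))       # uniform next-step choice
                      walks.append([str(n) for n in walk])
              return walks

          G = nx.karate_club_graph()
          model = Word2Vec(random_walks(G), vector_size=32, window=5, sg=1, min_count=1)
          vectors = [model.wv[str(n)] for n in G.nodes()]
          labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)   # community assignments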

    • Study on Keyword Retrieval Based on Keyword Density for XML Data

      2019, 30(4):1062-1077. DOI: 10.13328/j.cnki.jos.005390


      Abstract:Keyword search offers a friendly user experience and has been widely used in the field of text information retrieval. Keyword search on XML data is currently a hot research topic. XML keyword search methods based on query semantics have two problems: (1) a large number of query fragments unrelated to the user's query intention are returned; (2) fragments consistent with the user's query intention are missed. Aiming at these problems, two rules concerning user query intention and LCA correlation are proposed on the basis of the two (horizontal and vertical) dimensions of the LCA. The edge density and path density of the LCA are defined according to the two rules, a comprehensive scoring formula for LCA nodes is established, and finally the TopLCA-K algorithm is designed to rank LCAs. To improve the efficiency of the algorithm, a center location index is designed. Experimental results show that the nodes returned by this method are more in line with the needs of users.
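
      A small sketch of the LCA computation such methods rest on, assuming XML nodes are labeled with Dewey codes: the LCA of two nodes is the longest common prefix of their labels. The paper's edge-density and path-density scoring is not reproduced here.

          def dewey_lca(a, b):
              """a, b: Dewey labels such as '1.2.3.1' and '1.2.4'."""
              prefix = []
              for x, y in zip(a.split('.'), b.split('.')):
                  if x != y:
                      break
                  prefix.append(x)
              return '.'.join(prefix)

          print(dewey_lca('1.2.3.1', '1.2.4'))   # -> '1.2'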

    • Distributed Mining of Frequent Co-occurrence Patterns across Multiple Data Streams

      2019, 30(4):1078-1093. DOI: 10.13328/j.cnki.jos.005419


      Abstract:A frequent co-occurrence pattern across multiple data streams refers to a set of objects occurring in one data stream within a short time span, with the same set of objects appearing in multiple data streams in the same fashion within another user-specified time span. Several real applications, such as discovering groups of cars that travel together using a city surveillance system, finding people that hang out together based on their check-in data, and mining hot topics by discovering groups of frequently co-occurring keywords in social network data, can be abstracted as this problem. Because data streams typically have tremendous volumes and high arrival rates, existing algorithms designed for a centralized setting cannot mine frequent co-occurrence patterns from large-scale streaming data with limited computing resources. To address this problem, FCP-DM, a distributed algorithm to mine frequent co-occurrence patterns from a large number of data streams, is proposed. This algorithm first divides the data streams into segments and then constructs a multilevel mining model in the distributed environment. This model utilizes multiple computing nodes to process massive volumes of data streams in parallel and discover frequent co-occurrence patterns in real time. Finally, extensive experiments are conducted to fully evaluate the performance of the proposed algorithm.

    • Genomic Privacy Preserving Framework for SNP Linkage Disequilibrium

      2019, 30(4):1094-1105. DOI: 10.13328/j.cnki.jos.005367


      Abstract:The cost of sequencing is decreasing substantially with the rapid development of human genome sequencing technologies, and the generated genome data are supporting various applications. However, genome-wide association studies between single nucleotide polymorphisms (SNPs) and diseases may lead to further privacy breaches when SNP linkage disequilibrium is taken into account, because the sensitive information related to SNPs includes individual identity, phenotype, and kinship. To this end, a matrix differential privacy preserving framework is proposed based on the correlation coefficient of SNP linkage disequilibrium. This framework can preserve the privacy of genome data and of SNP linkage disequilibrium while ensuring a certain level of genome data utility, and it achieves a trade-off between genome data privacy and utility for SNP linkage disequilibrium in genome-wide association studies. Furthermore, the proposed framework plays an important role in promoting genomic privacy preservation under SNP linkage disequilibrium.

    • Privacy Preserving Algorithms of Uncertain Graphs in Social Networks

      2019, 30(4):1106-1120. DOI: 10.13328/j.cnki.jos.005368


      Abstract:The rapid popularization of social network platforms is causing growing concern among users about personal privacy disclosure in social networks. Because social networks have large numbers of users with complicated relationships, traditional privacy preserving methods cannot be applied directly to social network privacy protection. Graph modification techniques are a family of privacy preserving methods proposed for social network data, and the uncertain graph is one such method, which converts a deterministic graph into a probabilistic graph. This study focuses on the edge probability assignment algorithm of the uncertain graph and proposes an edge probability assignment algorithm based on differential privacy. The algorithm provides double privacy protection and is suitable for social networks with high privacy requirements. Meanwhile, a different edge probability assignment algorithm based on triadic closure is presented, which achieves privacy preservation while maintaining high data utility and is suitable for simple social networks. Analysis and comparison show that the edge probability assignment algorithm based on differential privacy achieves stronger privacy preservation than the obfuscation algorithm, while the algorithm based on triadic closure has higher data utility. Finally, in order to measure the distortion of the network structure, a data utility measure based on network structure entropy is proposed, which can measure the similarity between the uncertain graph and the original structure.

    • Analysis of Inter Vehicles Communication Process and Performance in VANETs Based on Platoon

      2019, 30(4):1121-1135. DOI: 10.13328/j.cnki.jos.005374


      Abstract:Stable driving of an autonomous vehicle platoon is ensured by reliable real-time information transmission between vehicles. Aiming at the platoon architecture that uses dedicated short range communication (DSRC) technology in VANETs, an analysis method for vehicle-to-vehicle communication network performance is proposed. It mainly studies the intra-platoon communication process between autonomous vehicles and the inter-platoon communication process in multi-platoon marshalling. It uses a finite-length M/G/1/K queue model to analyze the queuing process of packets arriving at the media access control (MAC) layer, which is described through the steady-state distribution of buffer states under different network loads. Subsequently, it derives the communication characteristics for different vehicle locations through a Markov model analysis method that considers the idle state of the buffer. The results show that various factors, including the network data flow, channel conditions, MAC layer buffer queuing process, channel contention process, and platoon parameters, have a substantial influence on the packet transmission delay and packet loss probability of inter-vehicle communication, and that the delay of DSRC-based vehicle-to-vehicle communication satisfies the string stability requirement of the platoon.

    • Performance Analysis Model of Heterogeneous Traffic Sources under IEEE 802.11 DCF

      2019, 30(4):1136-1147. DOI: 10.13328/j.cnki.jos.005381


      Abstract:Most IEEE 802.11 DCF analysis models concentrate solely on the performance of homogeneous traffic sources (i.e., with the same arrival rate); only a small number of studies focus on heterogeneous mixed-service networks (i.e., saturated or nonsaturated). In current research, the analyses of nonsaturated operation and backoff freezing are not accurate. This study proposes a new and improved bi-dimensional Markov chain model, combined with an M/G/1 queuing model, to analyze the performance of the DCF mechanism under heterogeneous traffic sources. Moreover, it extends existing models to take into account previously ignored MAC layer factors such as backoff freezing and a limited number of retries. Solving the steady state of this model allows the calculation of three important parameters: per-station and network throughput, mean delay, and transmission packet loss. Through theoretical simulation and analysis, it is shown that the model analyzes the performance of the DCF mechanism well while taking actual application scenarios into account, and thus performs better than other models under heterogeneous traffic sources.

    • Circular Features Description: Effective Method for Leaf Image Retrieval and Classification

      2019, 30(4):1148-1163. DOI: 10.13328/j.cnki.jos.005389


      Abstract:Leaf image recognition is a significant application of computer vision, and its key issue is how to effectively describe leaf images. A method called circular features description is proposed. In this method, a circle centered at a contour point is placed on the image plane, and the central angle, the spatial distribution of the region points, and gray-level statistics are derived from its intersection with the leaf contour and region to describe the contour, region, and gray features of the leaf image. By varying the size of the circle, a coarse-to-fine descriptor is obtained, and a local multiscale arrangement is developed in which the range of circle radii and the scale values taken at each contour point are determined by the distances of the remaining contour points to that point. The proposed method naturally integrates the contour, region, and grayscale information of the leaf image and is invariant to similarity transforms of the leaf image. Experiments are conducted on public test datasets, and the results show higher accuracy than state-of-the-art methods.

    • Review Articles
    • Heterogeneity-aware Scheduling Research on Performance Asymmetric Multicore Processors

      2019, 30(4):1164-1190. DOI: 10.13328/j.cnki.jos.005811


      Abstract:To meet the diverse needs of applications, heterogeneous multicore processors have appeared and entered the market. Their processing cores have different microarchitectures or instruction set architectures (ISAs), providing special features such as instruction level parallelism (ILP) and memory level parallelism (MLP), and these cores work together to meet the optimization objectives of the entire computing system, such as high performance, low power consumption, or energy efficiency. However, mainstream scheduling technology is designed for traditional homogeneous processor architectures, without considering the differences in processing capability among cores. Scheduling technologies that can perceive the heterogeneous characteristics of the hardware and make more suitable matching decisions between applications and hardware resources are therefore worth exploring. This paper systematically summarizes recent research on heterogeneous scheduling and analyzes the scheduling challenges and techniques for performance asymmetric multicore processors from the following aspects: optimization objectives, analysis models, scheduling decisions, and algorithm evaluation. Finally, future work is discussed from the perspective of software and hardware integration.

    • Optimized Fault Tolerance as Services Provisioning for Cloud Applications

      2019, 30(4):1191-1202. DOI: 10.13328/j.cnki.jos.005372


      Abstract:It is important to provide efficient and continuously available fault tolerance services for cloud applications to ensure their reliable execution. This study adopts the fault-tolerance-as-a-service scheme and proposes an optimized fault tolerance service provisioning method. The fault tolerance requirements for cloud applications are specified in terms of certain aspects of cloud service components, such as reliability and response time. Based on the major fault tolerance technologies, i.e., replication, checkpointing, and N-version programming (NVP), and with consideration of the dynamic switching overhead among fault tolerance services, a novel method is proposed to compute the optimal solution of feasible fault tolerance service provisioning under the fault-tolerance-as-a-service scheme. Two analysis scenarios are considered, namely whether the cloud infrastructure resources used to support the fault tolerance services are sufficient or not. The experimental results show that the proposed method reduces the fault tolerance service expenses of cloud application systems, reduces the cost of the cloud infrastructure resources supporting fault tolerance services, and improves the capacity of fault tolerance service providers to provide efficient and reliable fault tolerance as a service for cloud application systems.

Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal Code: 100190
  • Phone: 010-62562563
  • E-mail: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic Price: 70 RMB