• Volume 31, Issue 7, 2020 Table of Contents
    • Preface

      2020, 31(7):1931-1932. DOI: 10.13328/j.cnki.jos.005936

    • Special Issue's Articles
    • Adversarial Training Triplet Network for Fine-grained Sketch Based Image Retrieval

      2020, 31(7):1933-1942. DOI: 10.13328/j.cnki.jos.005934

      Abstract: Sketch-based image retrieval uses a hand-drawn sketch as the query. Fine-grained, or intra-category, sketch-based retrieval was proposed in 2014 and quickly attracted attention. Triplet networks are widely used for fine-grained retrieval and achieve promising performance; however, they are difficult to train, converging slowly and over-fitting easily in some situations. Inspired by adversarial training, this study proposes SketchCycleGAN to improve the efficiency of triplet-network training. Instead of pre-training the networks on other databases, the proposed method mines the information inside the target database through adversarial training, which simplifies the training procedure while yielding better performance. Experiments on widely used fine-grained sketch-based retrieval databases show that the proposed method outperforms other state-of-the-art methods.
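
      The core of the triplet-network training discussed above is a triplet ranking objective that pulls a sketch embedding toward its matching photo and away from a non-matching one. The following is a minimal, hedged sketch of that objective in PyTorch; the toy encoder and all dimensions are illustrative assumptions and do not reproduce the paper's SketchCycleGAN design.

```python
# Minimal sketch of the triplet objective underlying triplet-network training.
# The encoder below is a stand-in, not the paper's SketchCycleGAN architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy embedding network standing in for the sketch/photo branches."""
    def __init__(self, dim_in=512, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_out))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)  # unit-length embeddings

encoder = Encoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# anchor = sketch features, positive = matching photo, negative = non-matching photo
anchor, positive, negative = (torch.randn(8, 512) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()  # pulls matching pairs together, pushes mismatched pairs apart
```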

    • Multi-scale Generative Adversarial Network for Person Re-identification under Occlusion

      2020, 31(7):1943-1958. DOI: 10.13328/j.cnki.jos.005932

      Abstract: Person re-identification (ReID) refers to the task of retrieving a given probe pedestrian image from a large-scale gallery collected by multiple non-overlapping cameras, and is a specific instance of image retrieval. With the development of deep learning, the performance of person ReID has improved significantly. However, in practical applications, person ReID often suffers from occlusion (such as background occlusion and pedestrian occlusion). An occluded image not only loses part of the target information but also introduces additional interference, which makes it difficult for a deep neural network to learn robust feature representations and seriously degrades ReID performance. Recently, generative adversarial networks (GANs) have shown powerful image-generation ability on various computer vision tasks. Inspired by GANs, a person ReID method under occlusion based on a multi-scale GAN is proposed. First, paired occluded and unoccluded images are used to train a multi-scale generator and a discriminator: the multi-scale generator restores the lost information in randomly occluded areas and generates high-quality reconstructed images, while the discriminator distinguishes real images from generated ones. Then, the trained multi-scale generator is used to generate de-occluded images; adding these to the original training set increases the diversity of training samples. Finally, a classification network is trained on the augmented training set, which effectively improves the generalization capability of the trained model on the testing set. Experimental results on several challenging person ReID datasets demonstrate the effectiveness of the proposed method.
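
      As a rough illustration of the paired training described above, the hedged sketch below trains a toy single-scale generator to restore an occluded image (an L1 reconstruction term) while a discriminator separates real from generated images (an adversarial term). All networks, sizes, and loss weights are assumptions for illustration and do not reproduce the paper's multi-scale architecture.

```python
# Hedged sketch of paired GAN training: G learns to restore occluded regions,
# D learns to tell real images from generated ones. Toy single-scale networks only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))                        # occluded -> restored
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 32 * 32, 1))              # real/fake score
opt_g = torch.optim.Adam(G.parameters(), 2e-4)
opt_d = torch.optim.Adam(D.parameters(), 2e-4)
bce = nn.BCEWithLogitsLoss()

occluded, clean = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)     # a paired mini-batch

# Discriminator step: real images labelled 1, generated images labelled 0.
fake = G(occluded).detach()
loss_d = bce(D(clean), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator and reconstruct the unoccluded image.
fake = G(occluded)
loss_g = bce(D(fake), torch.ones(4, 1)) + 10.0 * nn.functional.l1_loss(fake, clean)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```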

    • Posture Prior Driven Double-branch Network Model for Accurate Human Parsing

      2020, 31(7):1959-1968. DOI: 10.13328/j.cnki.jos.005933

      Abstract: Human parsing aims to segment a human image into multiple parts with fine-grained semantics and provides a more detailed understanding of image content. When the body posture is complicated, existing human parsing methods tend to misjudge limb components, and their segmentation of small targets is not accurate enough. To solve these problems, a posture-prior-driven double-branch network is proposed for accurate human parsing. The model first uses a backbone network to extract features of the human image, and then uses the pose prior predicted by a human pose estimation model as attention information to form a multi-scale feature representation driven by the prior human body structure. The multi-scale features are fed separately into a fully convolutional parsing branch and a detection-based parsing branch: the fully convolutional branch produces global segmentation results, while the detection branch focuses on detecting and segmenting small-scale targets. The segmentation results of the two branches are fused to obtain a more accurate final parsing result. Experimental results verify the effectiveness of the proposed algorithm: it achieves 52.19% mIoU on the LIP dataset and 68.29% mIoU on the ATR dataset, improving human parsing accuracy and producing more accurate segmentation of limb components and small targets.
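
      A minimal sketch of the posture-prior idea, under the assumption that the prior is a spatial heatmap used as attention over backbone features; the shapes and the fusion rule are illustrative, not the paper's exact formulation.

```python
# Using a pose-estimation heatmap as spatial attention over backbone features.
import torch

features = torch.randn(1, 256, 32, 32)       # backbone feature map
pose_heatmap = torch.rand(1, 1, 32, 32)      # prior from a pose estimator, values in [0, 1]

attended = features * (1.0 + pose_heatmap)   # emphasize body-part regions, keep the rest unchanged
```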

    • Video Memorability Prediction Based on Global and Local Information

      2020, 31(7):1969-1979. DOI: 10.13328/j.cnki.jos.005935

      Abstract: The memorability of a video is a metric describing how memorable the video is. Memorable videos are of great value, and automatically predicting the memorability of large numbers of videos can serve various applications, including digital content recommendation, advertisement design, and education systems. This study proposes a framework based on global and local information to predict video memorability. The framework consists of three components: global context representation, spatial layout, and local object attention. Experimental results show that the global context representation and local object attention are highly effective, and the spatial layout also contributes substantially to the prediction. The proposed model improves on the baseline of the MediaEval 2018 Media Memorability Prediction Task.
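
      The sketch below illustrates one plausible late-fusion reading of the three components named above, concatenating their feature vectors and regressing a memorability score; the feature dimensions and regressor are hypothetical stand-ins, not the paper's model.

```python
# Late fusion of global context, spatial layout, and local object features into one score.
import torch
import torch.nn as nn

global_ctx, layout, local_obj = torch.randn(1, 512), torch.randn(1, 64), torch.randn(1, 256)
regressor = nn.Sequential(nn.Linear(512 + 64 + 256, 128), nn.ReLU(),
                          nn.Linear(128, 1), nn.Sigmoid())
memorability = regressor(torch.cat([global_ctx, layout, local_obj], dim=1))  # score in (0, 1)
```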

    • Measurement and Optimization of Browser Cache Performance for Mobile Web Applications

      2020, 31(7):1980-1996. DOI: 10.13328/j.cnki.jos.005971

      Abstract: With the rapid development of the mobile Internet, users increasingly access Web applications through mobile devices. Browsers provide the runtime support, such as computation and rendering, for these applications. The browser cache lets Web applications obtain reusable resources directly from local storage rather than downloading them from the network; it not only improves loading speed but also reduces network traffic and battery consumption, ensuring a good experience for mobile Web users. In recent years, both industry and academia have paid attention to optimizing the browser cache performance of mobile Web applications. However, most existing work focuses on overall cache performance at the network level and does not fully consider the impact of user access behavior and application evolution. To address this issue, this study designs a proactive measurement experiment that simulates user access behavior and collects the resources of mobile Web applications. The results reveal a large gap between the ideal and actual performance of the browser cache and identify three main root causes of the gap: resource aliases, heuristic caching strategies, and conservative cache time configuration. Based on these findings, this study proposes two optimization methods, at the application layer and the platform layer respectively, and implements the corresponding prototype systems. Evaluation results show that the two methods save network traffic by 8%~51% and 4%~58% on average, respectively, with small system overhead.
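
      One of the root causes named above, heuristic caching, can be illustrated with the common rule of thumb (suggested, for example, in RFC 7234) that a response without an explicit lifetime stays fresh for a fraction, often 10%, of the time since it was last modified. The sketch below shows that computation; exact behavior varies by browser.

```python
# Heuristic freshness: estimate a cache lifetime when no max-age/Expires is given.
from datetime import datetime, timedelta

def heuristic_freshness(date: datetime, last_modified: datetime,
                        fraction: float = 0.10) -> timedelta:
    """Return an estimated freshness lifetime as a fraction of the Last-Modified age."""
    return (date - last_modified) * fraction

lifetime = heuristic_freshness(datetime(2020, 7, 1), datetime(2020, 6, 1))
print(lifetime)  # ~3 days: a recently modified resource gets only a short cache lifetime
```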

    • Review Articles
    • Survey of State-of-the-art Log-based Failure Diagnosis

      2020, 31(7):1997-2018. DOI: 10.13328/j.cnki.jos.006045

      Abstract: Log-based failure diagnosis refers to the intelligent analysis of system runtime logs to automatically discover system anomalies and diagnose failures. This technology is one of the key techniques of artificial intelligence for IT operations (AIOps) and has become a research hotspot in both academia and industry. This study first analyzes the log-based failure diagnosis process and summarizes a research framework together with four key technologies in the field: log processing and feature extraction, anomaly detection, failure prediction, and fault diagnosis. Next, a systematic review is conducted of recent achievements by scholars at home and abroad in these four technical areas. Finally, the different technologies in this field are summarized under the research framework, and possible challenges for future research are discussed.

    • Survey of State-of-the-art Distributed Tracing Technology

      2020, 31(7):2019-2039. DOI: 10.13328/j.cnki.jos.006047

      Abstract: As distributed computing and distributed systems are widely applied in various areas, improving the efficiency of system operations to guarantee the stability and reliability of the services these systems provide has gained massive momentum in both academia and industry. However, system operation tasks face tough challenges due to the large scale, intricate structures and dependencies, continuous updates, and concurrent service requests of distributed systems. Previous component-, node-, process-, or thread-centric monitoring and tracing methods are insufficient to support operation tasks such as fault diagnosis, performance optimization, and system understanding in a distributed system. To address this issue, distributed tracing was proposed: it identifies all the events belonging to the same request and causally correlates these events, precisely depicting, at a fine granularity, the behavior of a distributed system in a service-request- or workflow-centric way, which is critical to improving the efficiency of system operations. This paper presents a comprehensive survey of existing research on and applications of distributed tracing technology. A research framework is proposed, and existing achievements are compared and analyzed within it from four perspectives: acquiring tracing data, identifying the events of the same request, determining the causal relationships among these events, and representing the request execution path. The research on applying distributed tracing to operation tasks such as fault diagnosis and performance optimization is then briefly introduced. Finally, the data dependency, generality, and evaluation-metric issues of distributed tracing are discussed, and a perspective on future research directions is presented.

    • Survey of Software Vulnerability Mining Methods Based on Machine Learning

      2020, 31(7):2040-2061. DOI: 10.13328/j.cnki.jos.006055

      Abstract: The increasing complexity of software applications brings great challenges to software security. Due to the growth of software scale and the diversity of vulnerability forms, the high false-positive and false-negative rates of traditional vulnerability mining methods cannot meet the requirements of software security analysis. In recent years, with the rise of the artificial intelligence industry, a large number of machine learning methods have been tried for software vulnerability mining. This paper first summarizes the latest research results on applying machine learning to vulnerability mining and outlines the technical characteristics and workflow. Then, starting from the extraction of core raw data features, existing research is classified according to the code representation form and compared systematically. Finally, based on this summary, the challenges of machine-learning-based software vulnerability mining are discussed and the development trends of the field are proposed.

    • Survey on Automatic Term Extraction Research

      2020, 31(7):2062-2094. DOI: 10.13328/j.cnki.jos.006040

      Abstract: Automatic term extraction aims to extract domain-related words or phrases from document collections. It is a core basic problem and research hotspot in fields such as ontology construction, text summarization, and knowledge graphs. With the rise of unstructured text analysis in big data, automatic term extraction has received further attention from researchers and has recently produced rich research results. Taking the term ranking algorithm as the main thread, this study surveys the basic theories, techniques, current research, and the advantages and disadvantages of automatic term extraction methods. First, the formal definition and solution framework of the automatic term extraction problem are outlined. Then, based on the basic linguistic information and relational structure information available from shallow parsing, the latest results are classified, and the research progress and major challenges of existing methods are summarized systematically. Finally, available data resources are listed, evaluation approaches are analyzed, and possible future research trends are predicted.

    • Survey of Machine Reading Comprehension Based on Neural Network

      2020, 31(7):2095-2126. DOI: 10.13328/j.cnki.jos.006048

      Abstract: The task of machine reading comprehension is to make a machine understand natural language text and correctly answer questions about it. Due to the limited scale of early datasets, most early machine reading comprehension methods were built on manual features and traditional machine learning. In recent years, with the development of knowledge bases and crowdsourcing, researchers have proposed high-quality large-scale datasets, bringing new opportunities for neural network models and machine reading comprehension. This survey provides an exhaustive review of the state-of-the-art research on neural-network-based machine reading comprehension. First, an overview of machine reading comprehension is given, including its development process, problem formulation, and evaluation metrics. Then, a comprehensive review is conducted of the techniques in the most popular neural reading comprehension framework, covering the embedding layer, encoder layer, interaction layer, and output layer, as well as the recent BERT pre-training model and its advantages. After that, the paper summarizes recent progress on machine reading comprehension datasets and neural reading comprehension models, and compares and analyzes the most representative datasets and models in detail. Finally, the research challenges and future directions of machine reading comprehension are presented.

    • Survey on Privacy Preserving Techniques for Machine Learning

      2020, 31(7):2127-2156. DOI: 10.13328/j.cnki.jos.006052

      Abstract: Machine learning has become a core technology in areas such as big data, the Internet of Things, and cloud computing. Training machine learning models requires large amounts of data, which is often collected by crowdsourcing and contains a great deal of private data, including personally identifiable information (such as phone numbers and ID numbers) and sensitive information (such as financial and health-care data). How to protect these data at low cost and with high efficiency is an important problem. This paper first introduces the concept of machine learning, explains the various definitions of privacy in machine learning, and describes the kinds of privacy threats encountered in machine learning; it then elaborates on the working principles and distinguishing features of the mainstream privacy-protection technologies. The research achievements in machine learning privacy protection are summarized according to differential privacy, homomorphic encryption, and secure multi-party computation, respectively. On this basis, the paper compares the main advantages and disadvantages of the different privacy-preserving mechanisms for machine learning. Finally, the development trend of privacy preservation for machine learning is projected, and possible research directions in this field are proposed.

    • Easy Way for Multilayer Gradient Supplies

      2020, 31(7):2157-2168. DOI: 10.13328/j.cnki.jos.005822

      Abstract: Deep learning allows computational models composed of multiple processing layers to learn representations of data at multiple levels of abstraction, and has dramatically improved the state of the art in speech recognition, visual object recognition, natural language processing, and many other domains. However, because of the large number of layers and the large parameter scale, deep learning often suffers from vanishing gradients, convergence to poor local optima, overfitting, and similar problems. Using ensemble learning, this study proposes a novel deep sharing ensemble network. By jointly training many independent output layers at each hidden layer and injecting gradients through them, the network reduces the vanishing-gradient phenomenon, and by ensembling the multiple outputs it achieves better generalization performance.
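
      A hedged sketch of the core mechanism described above: every hidden layer gets its own output head, the per-head losses are summed so that gradients are injected at each depth, and the heads are averaged at inference as an ensemble. Layer sizes and head design are illustrative assumptions, not the paper's exact network.

```python
# Deep sharing ensemble idea: one output head per hidden layer, summed losses, averaged outputs.
import torch
import torch.nn as nn

class SharedEnsembleNet(nn.Module):
    def __init__(self, dim_in=784, hidden=(256, 128, 64), n_classes=10):
        super().__init__()
        self.blocks, self.heads = nn.ModuleList(), nn.ModuleList()
        prev = dim_in
        for h in hidden:
            self.blocks.append(nn.Sequential(nn.Linear(prev, h), nn.ReLU()))
            self.heads.append(nn.Linear(h, n_classes))   # independent output layer per hidden layer
            prev = h

    def forward(self, x):
        outputs = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            outputs.append(head(x))
        return outputs                                    # one logit vector per hidden layer

model = SharedEnsembleNet()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
outputs = model(x)
loss = sum(nn.functional.cross_entropy(o, y) for o in outputs)   # gradient injected at every depth
loss.backward()
prediction = torch.stack(outputs).mean(dim=0).argmax(dim=1)      # ensemble of the multiple outputs
```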

    • Frequent Episode Mining Algorithm Compatible with Various Support Definitions

      2020, 31(7):2169-2183. DOI: 10.13328/j.cnki.jos.005851

      Abstract: Frequent episodes hidden in an event sequence describe the behavioral regularities of users or systems. Existing algorithms yield good results for mining frequent episodes under their respective definitions of support, but each of them finds it difficult or impossible to mine frequent episodes directly when the definition of support changes. To meet users' need for changeable support definitions, an algorithm called FEM-DFS (frequent episode mining-depth first search) is proposed. After scanning the event sequence in one pass, FEM-DFS finds frequent episodes in a depth-first-search fashion, stores them in a shared prefix/suffix tree, and compresses the search space of frequent episodes by exploiting monotonicity, prefix monotonicity, or suffix monotonicity. Experimental evaluation demonstrates the effectiveness of the proposed algorithm.
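
      As a small illustration of what a "definition of support" means here, the sketch below counts one common support measure for a serial episode, the number of non-overlapped occurrences, with a greedy left-to-right scan. It does not reproduce FEM-DFS; the algorithm's point is precisely that such a support function can be swapped while the mining procedure stays the same.

```python
# One support definition for a serial episode: number of non-overlapped occurrences.
def non_overlapped_support(sequence, episode):
    """Count non-overlapped occurrences of `episode` (an ordered tuple of event types)."""
    count, pos = 0, 0
    for event in sequence:
        if event == episode[pos]:
            pos += 1
            if pos == len(episode):      # one complete occurrence matched
                count += 1
                pos = 0                  # restart so occurrences do not overlap
    return count

events = list("ABCABACB")
print(non_overlapped_support(events, ("A", "B")))  # 3
```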

    • Review Articles
    • Survey on Deep Learning Applications in Software Defined Networking Research

      2020, 31(7):2184-2204. DOI: 10.13328/j.cnki.jos.006039

      Abstract: Software defined networking (SDN), which separates data forwarding from control, is a complete rethinking of the traditional network architecture, introducing new opportunities and challenges for all aspects of network research. As traditional research methods hit bottlenecks in SDN, deep-learning-based methods have been introduced into SDN research, yielding plenty of achievements in real-time intelligent network management and control and promoting the further development of SDN research. This study investigates the factors promoting the introduction of deep learning into SDN, such as deep learning development platforms, training datasets, and intelligent SDN architectures; it systematically introduces deep learning applications in SDN research fields such as intelligent routing, intrusion detection, and traffic perception, and analyzes the features and shortcomings of those applications in detail. Finally, future research directions and trends of SDN are discussed.

    • Survey on Technology of Security Enhancement for DNS

      2020, 31(7):2205-2220. DOI: 10.13328/j.cnki.jos.006046

      Abstract: As a vital infrastructure of the Internet, DNS provides name resolution services for Internet applications. Major Internet incidents in recent years indicate that DNS faces serious security threats. The vulnerabilities of DNS can be divided into three categories: protocol design vulnerabilities, technology implementation vulnerabilities, and architecture vulnerabilities. In view of these vulnerabilities, the latest research achievements on DNS security enhancement are summarized, covering protocol design, system implementation, DNS monitoring, and DNS decentralization. Possible future research hotspots and challenges are also discussed.

    • Research on Load Balancing in Data Center Networks

      2020, 31(7):2221-2244. DOI: 10.13328/j.cnki.jos.006050

      Abstract: Data center networks are an important infrastructure of the modern Internet and cloud computing, and achieving load balancing in them is critical for guaranteeing high throughput and improving service experience. This paper first analyzes the differences between data center networks and the traditional Internet, and summarizes the features of data center networks that facilitate the design of load balancing schemes. Then, the challenges of designing load balancing schemes in data center networks are analyzed from the perspectives of complexity and diversity. Existing load balancing schemes are classified into four types according to the kind of modification they require: schemes based on the network layer, the transport layer, the application layer, and synthetic designs. The advantages and disadvantages of these schemes are detailed, and they are evaluated in terms of control structure, load-balancing granularity, congestion-sensing mechanism, load-balancing strategy, scalability, and difficulty of deployment. Finally, all the load balancing schemes are summarized and some feasible future directions are presented.

    • Survey of Automatic Ultrasonographic Analysis for Thyroid and Breast

      2020, 31(7):2245-2282. DOI: 10.13328/j.cnki.jos.006037

      Abstract: Ultrasonography is the first choice for imaging examination and preoperative evaluation of thyroid and breast cancer. However, the ultrasonic characteristics of benign and malignant nodules commonly overlap, so diagnosis relies heavily on the operator's experience rather than on quantitative and stable methods. In recent years, computer-based medical image analysis has developed rapidly and achieved a series of landmark breakthroughs, providing effective decision support for imaging diagnosis. This work reviews the research progress of computer vision and image recognition technologies on thyroid and breast ultrasound images, taking the key technologies involved in automatic diagnosis of ultrasound images as its main line. The major algorithms of recent years are summarized and analyzed, including ultrasound image preprocessing, lesion localization and segmentation, and feature extraction and classification, together with a multi-dimensional analysis of algorithms, datasets, and evaluation methods. Finally, existing problems in the automatic analysis of these two kinds of ultrasound images are discussed, and research trends and development directions in the field are outlined.
