• Volume 30, Issue S2, 2019: Table of Contents
    • Large-pose Face Alignment Based on Deep Learning

      2019, 30(S2):1-8.

      Abstract: To address the low accuracy of large-pose face alignment algorithms, this paper designs and implements a new hierarchical, parallel, multi-scale Inception-ResNet network for large-pose face alignment. First, a four-stage Hourglass network model is constructed; the model takes images directly as input and performs face alignment end to end. Second, the network internally uses preset parameters for sampling and feature extraction. Finally, the corresponding facial feature points are output directly as two-dimensional coordinates drawn on the image at the scale of the face. The proposed method is tested on the AFLW2000-3D dataset. Experimental results show that the normalized mean error of this method is 4.41% on arbitrary unconstrained two-dimensional face images. Compared with traditional methods, the frontal face images output by this method have high visual quality and fidelity.
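The landmark-decoding step of a heatmap-based Hourglass model can be sketched as follows. This is an illustrative sketch only, not the paper's code; the function names, array shapes, and the argmax decoding rule are assumptions about a typical heatmap pipeline:

```python
import numpy as np

def heatmaps_to_landmarks(heatmaps):
    """Decode per-landmark heatmaps (as produced by a stacked Hourglass
    network) into 2-D coordinates by taking the argmax of each map.

    heatmaps: array of shape (num_landmarks, H, W)
    returns:  int array of shape (num_landmarks, 2) with (x, y) coordinates
    """
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

def normalized_mean_error(pred, gt, face_size):
    """Normalized mean error: mean landmark distance divided by a
    face-size normalizer, the kind of metric reported on AFLW2000-3D."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return dists.mean() / face_size
```

In practice the heatmaps would come from the trained network; here they are treated as given arrays.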

    • Research on Key Object Extraction and Classification in Asynchronous Data Stream

      2019, 30(S2):9-16.

      Abstract: Inspired by biological vision, the event camera has attracted wide attention from researchers: it breaks the conventional data-acquisition paradigm of computer vision, directly addresses the pain points of RGB images, and offers advantages that 2D image sensors cannot match, including removal of redundant information, fast sensing, high dynamic range, and low power consumption. However, its asynchronous event data cannot be applied directly to existing computer vision processing pipelines. Therefore, this paper classifies the data stream using a key-event-based classification method. The method detects corner events that carry important information and extracts features only from those corner events. While retaining the important features of the event stream and condensing its feature extraction, the computation spent on other events is effectively reduced. The method is validated by recognizing preset gestures, achieving an accuracy of 97.86%.
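The key-event selection idea can be sketched as a filter over the asynchronous event stream. This is a crude illustrative stand-in, not the paper's detector: real event-corner detectors (e.g. eHarris or eFAST) use a stronger criterion than the simple recent-neighbor count assumed here, and the event format, window, and thresholds are assumptions:

```python
import numpy as np

def filter_corner_events(events, sensor_shape, window=0.01, min_recent=3):
    """Keep only events whose 3x3 neighborhood on the surface of active
    events (SAE, latest timestamp per pixel) has enough recent activity.
    Features would then be extracted only at these key events, so the
    cost of processing all other events is avoided.

    events: iterable of (x, y, t) tuples, timestamps in seconds, sorted by t
    """
    h, w = sensor_shape
    sae = np.full((h, w), -np.inf)       # timestamp of last event per pixel
    corners = []
    for x, y, t in events:
        patch = sae[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        if np.count_nonzero(patch > t - window) >= min_recent:
            corners.append((x, y, t))
        sae[y, x] = t
    return corners
```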

    • Group Sparse Coding Denoising Algorithm for Complex-Valued Images Based on K-means Clustering

      2019, 30(S2):17-24.

      Abstract: Sparse coding has been widely used in complex-valued image denoising. The more recently proposed block sparse coding has advantages in noise filtering and reduction because it can fully exploit the similarity of patches within the same block. This paper studies a K-means-clustering-based group sparse denoising algorithm for complex-valued images. By improving the clustering algorithm, the effectiveness of K-means grouping for the block sparse coding algorithm is verified. An online complex dictionary training algorithm is used to obtain the coding dictionary quickly, and the sparse coding of block images is realized with a group orthogonal matching pursuit algorithm. By encouraging similarity among the codes within each block, the coding of noise in the block is effectively suppressed and the denoising of complex-valued images is improved. To verify the effectiveness of the proposed algorithm, the denoising of simulated and real interferometric synthetic aperture radar (InSAR) images is quantitatively analyzed, showing that the proposed algorithm improves peak signal-to-noise ratio (PSNR) over previous block sparse coding algorithms. Finally, a real InSAR image is denoised, further verifying the algorithm's ability to remove real noise.
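The two building blocks named in the abstract, K-means grouping of patches and orthogonal matching pursuit, can be sketched as follows. This is a minimal real-valued sketch, not the paper's improved clustering or its complex-valued group OMP; the patch representation and all parameters are assumptions:

```python
import numpy as np

def kmeans_group(patches, k, iters=20, seed=0):
    """Group image patches by plain K-means on their pixel vectors
    (a simple stand-in for the paper's improved clustering step)."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((patches[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(axis=0)
    return labels

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedily pick dictionary atoms and
    re-fit the coefficients by least squares at each step."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In the paper's setting, patches in one K-means group would be coded jointly so their sparse codes share atoms, which suppresses codes that only fit noise.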

    • Distant Speech Recognition Based on Knowledge Distillation and Generative Adversarial Network

      2019, 30(S2):25-34.

      Abstract: To further exploit near-field speech data to improve far-field speech recognition, this paper proposes an approach that integrates knowledge distillation with a generative adversarial network. A multi-task learning structure is first proposed to jointly train the acoustic model with feature mapping. To enhance acoustic modeling, the acoustic model trained on far-field data (the student model) is guided by an acoustic model trained on near-field data (the teacher model); this training makes the student mimic the teacher's behavior by minimizing the Kullback-Leibler divergence between their outputs. To improve speech enhancement, an additional discriminator network is introduced to distinguish enhanced features from real clean ones; through this adversarial multi-task training, the distribution of the enhanced features is pushed towards that of the clean features. Evaluated on AMI single-distant-microphone data, the method achieves a 5.6% relative non-overlapped and a 4.7% relative overlapped word error rate (WER) reduction over the baseline model; on AMI multi-channel distant-microphone data, a 6.2% relative non-overlapped and a 4.1% relative overlapped WER reduction; on TIMIT, a 7.2% WER reduction. To better demonstrate the effect of the generative adversarial network on speech enhancement, the enhanced features are visualized, further verifying the effectiveness of the method.
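The two loss terms described above, the distillation KL term and the generator-side adversarial term, can be sketched as follows. This is an illustrative sketch over raw numpy arrays, not the paper's implementation; the temperature value and function names are assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened posteriors: the term
    that makes the far-field (student) model mimic the near-field
    (teacher) model. T=2.0 is a hypothetical choice for illustration."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

def adversarial_feature_loss(disc_scores_on_enhanced):
    """Generator-side loss that pushes enhanced far-field features towards
    the clean-feature distribution: the enhancement front end is trained
    so the discriminator scores enhanced features as 'real' (label 1)."""
    s = np.clip(disc_scores_on_enhanced, 1e-7, 1 - 1e-7)
    return float(-np.log(s).mean())
```

In the multi-task setup these terms would be weighted and summed with the standard cross-entropy ASR loss.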

    • Eye Tracking Data and Text Visual Analytics for Reading Assistance

      2019, 30(S2):35-47.

      Abstract: This paper studies visual attention behavior during reading. Several visualizations, including eye-movement heatmaps, doughnut charts, node-link graphs, and word clouds, were designed to present eye-movement data and text themes for analyzing reading behavior and document structure. A prototype visual reading-assistance system was developed to record the eye-movement data of expert users (such as teachers), whose visualizations can then be shared with novice users (such as students). A user study showed that the experimental group's average scores on objective and subjective questions increased by 31.8% and 55.0% respectively, while total reading and answering time decreased by 9.7%. The system thus effectively helps readers improve reading efficiency, quickly grasp the key points of an article, and better understand its content, demonstrating its effectiveness and feasibility.
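The eye-movement heatmap mentioned above is typically built by accumulating duration-weighted Gaussians at fixation points; the following is an illustrative sketch of that standard construction, not the system's code, with the fixation format and sigma chosen as assumptions:

```python
import numpy as np

def fixation_heatmap(fixations, page_shape, sigma=20.0):
    """Accumulate fixation points into a Gaussian attention map, the
    basis of an eye-movement heatmap; each fixation is weighted by its
    duration, and the map is normalized to [0, 1] for display.

    fixations: iterable of (x, y, duration) tuples in page coordinates
    """
    h, w = page_shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()
    return heat
```

Overlaying this map on the document image shows which passages an expert reader attended to most.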

    • Scenario Design Tool Based on Hybrid Input

      2019, 30(S2):48-61.

      Abstract: The pen-based user interface, relying on touch technology, is one of the Post-WIMP interfaces. It discards the physical keyboard and mouse, changing the method of human-computer interaction to some extent. Although sketch-drawing and sketch-recognition software keeps emerging, there is no mature development tool for pen-based interface design. Based on the PGIS interaction paradigm and the scenario design method, this paper develops a tool named SDT that allows hybrid input of graphics and sketches built on pen-based interaction primitives. First, following the software-engineering principle of high cohesion and low coupling, a "Separation-Fusion" design method is proposed, and the overall architecture of the system is laid out accordingly. Second, the essential technologies are elaborated from three aspects: the user-interface description language; pen-based interaction primitives and their single-instance management; and hybrid input. Third, a complete example application is built with SDT, making the usability and feasibility of the system more convincing. Finally, the advantages and effectiveness of the tool are verified by two evaluation experiments.
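The hybrid-input idea, routing the same pen stream either to structured graphics or to free-hand sketching, can be sketched as a small dispatcher. This is a toy illustration only; the class and method names are hypothetical and do not reflect SDT's actual API:

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list                         # [(x, y), ...] sampled pen positions

class HybridInputCanvas:
    """Route each pen stroke either to a structured-graphics primitive or
    keep it as a free-hand sketch, depending on the active input mode."""

    def __init__(self):
        self.mode = "sketch"             # or "graphic"
        self.sketches = []
        self.graphics = []

    def handle_stroke(self, stroke):
        if self.mode == "graphic":
            # snap the stroke to a primitive, here its bounding rectangle
            xs = [p[0] for p in stroke.points]
            ys = [p[1] for p in stroke.points]
            self.graphics.append(("rect", min(xs), min(ys), max(xs), max(ys)))
        else:
            self.sketches.append(stroke)
```

A real tool would replace the bounding-box rule with sketch recognition and feed both channels into the scenario description.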


Contact Information
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825, CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal code: 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
