• Volume 31, Issue 11, 2020 Table of Contents
    • Optimized Density Peaks Clustering Algorithm Based on Dissimilarity Measure

      2020, 31(11):3321-3333. DOI: 10.13328/j.cnki.jos.005813


      Abstract: Clustering by fast search and find of density peaks (DPC) is an efficient algorithm that quickly identifies cluster centers based on local density and relative distance. DPC uses a decision graph to find density peaks as cluster centers; it does not require the number of clusters to be specified in advance and can discover clusters of arbitrary shape. However, the calculation of local density and relative distance depends on a similarity matrix built solely on simple distance metrics, so DPC performs unsatisfactorily on complex datasets, especially those with uneven density and higher dimensions. In addition, the measurement of local density is not unified, and different methods suit different datasets. Third, the cutoff distance dc considers only the global distribution of the dataset and ignores local information, so changes in dc affect the clustering results, especially on small-scale datasets. To address these shortcomings, this study proposes an optimized density peaks clustering algorithm based on a dissimilarity measure (DDPC). DDPC introduces a mass-based dissimilarity measure to compute the similarity matrix, derives the k-nearest-neighbor information of each sample from the new matrix, and then redefines the local density using this k-nearest-neighbor information. Experimental results show that DDPC outperforms the optimized FKNN-DPC and DPC-KNN clustering algorithms and performs well on datasets with uneven density and higher dimensions. At the same time, the local-density measurement is unified, which avoids the influence of dc on the clustering results in the traditional DPC algorithm.
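
      A minimal sketch of the k-nearest-neighbor local-density redefinition described above, assuming a precomputed mass-based dissimilarity matrix D; the exact dissimilarity and density formulas are the paper's, so the exponential k-NN density used here is only a common stand-in:

```python
import numpy as np

def knn_local_density(D, k):
    """Estimate each point's local density from its k nearest neighbors.

    D : (n, n) symmetric dissimilarity matrix (e.g., mass-based dissimilarity).
    Returns an (n,) array of densities; larger means a denser neighborhood.
    """
    n = D.shape[0]
    # For each point, take the k smallest dissimilarities to other points.
    knn_dist = np.sort(D + np.diag([np.inf] * n), axis=1)[:, :k]
    # Exponential kernel over the mean k-NN dissimilarity (illustrative choice).
    return np.exp(-knn_dist.mean(axis=1))

def relative_distance(D, rho):
    """DPC-style delta: distance to the nearest point of higher density."""
    n = D.shape[0]
    delta = np.zeros(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if higher.size else D[i].max()
    return delta

# Cluster centers are then the points with both large rho and large delta
# (read off the decision graph), exactly as in standard DPC.
```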

    • Dynamic Gene Regulatory Network Evolution Analysis

      2020, 31(11):3334-3350. DOI: 10.13328/j.cnki.jos.005821


      Abstract: A dynamic gene regulatory network is a complex network representing the dynamic interactions between genes in an organism. The interactions fall into two groups: activation and inhibition. Research on the evolution of dynamic gene regulatory networks can be used to predict future gene regulation relationships, providing a reference for disease diagnosis and prediction, pharmaceutical projects, and biological experiments. However, the evolution of a gene regulatory network is a huge and complex real-world system, and existing research on its evolutionary mechanism focuses only on static networks, ignoring dynamic networks as well as the types of interaction. In response to these shortcomings, a dynamic gene regulatory network evolution analysis method (DGNE) is proposed to extend the research to the field of dynamic signed networks. With the link prediction algorithm based on motif transfer probability (MT) and the symbol discrimination algorithm based on latent space characteristics included in DGNE, the evolution mechanism of dynamic gene regulatory networks can be captured dynamically and the links of the network can be predicted accurately. Experimental results show that the proposed DGNE method performs well on both simulated and real datasets.

    • Application of PES Algorithm Based on Preferred Collaborative Strategy on Integer Programming

      2020, 31(11):3351-3363. DOI: 10.13328/j.cnki.jos.005853


      Abstract: Integer programming is a kind of mathematical model widely used in scientific and applied research. Because it is an NP-hard problem, it is difficult to solve. A common approach uses swarm intelligence algorithms as the main solver, but such methods have not resolved the contradictions between exploration and exploitation, or between competition and collaboration among individuals and populations within the swarm. The pyramid-structured swarm intelligence evolution strategy (PES) is a new algorithm that can effectively resolve these two contradictions. In this study, the mechanism of the PES algorithm is analyzed in depth, a preferred collaborative strategy model is constructed, and the improved PES algorithm is extended from function optimization to solving integer programming problems. Finally, exploration and comparison experiments examine the convergence and stability of the algorithm and the quality of the global best solution. The experimental results show that the PES algorithm based on the preferred collaborative strategy can solve integer programming problems well.

    • Data-driven Modeling and Prediction of User Acceptance for Mobile Apps

      2020, 31(11):3364-3379. DOI: 10.13328/j.cnki.jos.006106


      Abstract: With the popularity of the mobile Internet and smart mobile devices in recent years, the app market has become one of the main modes of software release. In this mode, app developers have to update their apps rapidly to stay competitive. Compared with traditional software, mobile apps connect end users and developers more closely, with faster software releases and user feedback. Understanding and improving user acceptance of mobile apps has therefore become one of the main goals of developers. Meanwhile, the app-market-centered ecosystem provides a wealth of data covering different stages of the mobile app life cycle. From the view of software analytics, techniques such as machine learning and data mining can extract valuable information from data including operation logs and user behavior sequences to help developers make decisions. This article first demonstrates the necessity and feasibility of building a comprehensive model of user acceptance indicators for mobile apps from a data-driven perspective, and proposes basic indicators along three dimensions: user evaluation, operation, and usage. Furthermore, with large-scale datasets, specific indicators are defined for three user acceptance prediction tasks, and features from different stages of the mobile app life cycle are extracted. With collaborative filtering, regression models, and probability models, the predictability of the user acceptance indicators is verified, and insights from the prediction results for the mobile app development process are provided.

    • System Dynamics Simulation Modeling of Software Requirements Change Management

      2020, 31(11):3380-3403. DOI: 10.13328/j.cnki.jos.005830


      Abstract: Software requirements change frequently, which poses many threats to software projects. Effective management of requirements change determines the success or failure of a software project. System dynamics can be used to simulate the process of software requirements change management, with the aim of dynamically analyzing and predicting the causes of requirements changes and their effects on software projects; it can also assist software organizations in improving their requirements change management processes. In this study, the system dynamics method is first used to model the requirements change management process of open source software that follows agile processes. The models are then tested to find and correct errors. Next, taking the Spring Framework as an empirical case study, a system dynamics simulation of the requirements change management process of the project's 3.2.x branch is carried out. Based on the simulation results, improvements to the requirements change management process are simulated. Comparing the baseline simulation results with the improved simulation results shows that all the improvements effectively reduce the software defect rate and improve software quality. In addition, process improvement suggestions are provided based on the cost and schedule of the software project.

    • Vulnerability Mining Method Based on Code Property Graph and Attention BiLSTM

      2020, 31(11):3404-3420. DOI: 10.13328/j.cnki.jos.006061


      Abstract: With the increasingly serious information security situation, software vulnerabilities have become one of the main threats to computer security. How to accurately mine vulnerabilities in programs is a key issue in the field of information security. However, existing static vulnerability mining methods have low accuracy when mining vulnerabilities whose features are not obvious. On the one hand, rule-based methods work by matching expert-defined vulnerability patterns against the target program; the predefined patterns are rigid and limited, unable to cover detailed features, which leads to low accuracy and high false positive rates. On the other hand, learning-based methods cannot adequately model the features of source code or effectively capture the key features, so they fail to accurately mine vulnerabilities with inconspicuous features. To solve this issue, a source-code-level vulnerability mining method based on the code property graph and an attention-based BiLSTM is proposed. It first transforms the program source code into a code property graph containing semantic features and performs program slicing to remove redundant information unrelated to sensitive operations. It then encodes the code property graph into feature tensors with an encoding algorithm. After that, a neural network based on BiLSTM and the attention mechanism is trained on a large-scale feature dataset. Finally, the trained model is used to mine vulnerabilities in the target program. Experimental results show that the F1 scores of the method reach 82.8%, 77.4%, 82.5%, and 78.0% respectively on the SARD buffer error dataset, the SARD resource management error dataset, and their two subsets composed of C programs, which is significantly higher than the rule-based static mining tools Flawfinder and RATS and the learning-based program analysis model TBCNN.
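
      A minimal PyTorch sketch of the attention-based BiLSTM classification stage, assuming the code property graph has already been sliced and encoded into a fixed-length sequence of feature vectors; the layer sizes and dimensions below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    """Binary vulnerable / non-vulnerable classifier over encoded code features."""
    def __init__(self, feat_dim=64, hidden=128, num_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # scores each time step
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                              # x: (batch, seq_len, feat_dim)
        h, _ = self.bilstm(x)                          # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # (batch, seq_len, 1)
        context = (weights * h).sum(dim=1)             # attention-weighted summary
        return self.classifier(context)                # class logits

# Training then uses an ordinary cross-entropy loss over labeled feature tensors.
model = AttentionBiLSTM()
logits = model(torch.randn(8, 100, 64))                # 8 sliced samples, 100 steps each
```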

    • Function-level Data Dependence Graph and Its Application in Static Vulnerability Analysis

      2020, 31(11):3421-3435. DOI: 10.13328/j.cnki.jos.005833


      Abstract: Data flow analysis plays an important role in binary code analysis. Because constructing a traditional data dependence graph (DDG) consumes too much time and space, it severely limits the size of the code that can be analyzed. This study introduces a novel graph model, the function-level data dependence graph (FDDG), and proposes a corresponding construction method. The key insights behind FDDG lie in two points: first, FDDG focuses on the relationships between function parameters; second, FDDG treats a function as a whole and ignores the details inside the function. As a result, the size of the data dependence graph is reduced significantly, saving both time and space. The experimental results show that the time performance of the method is improved by about three orders of magnitude compared with the method in angr. As an application, FDDG is employed to analyze the vulnerabilities of embedded firmware, and a firmware vulnerability analysis prototype system called FFVA is implemented. FFVA is used to analyze firmware from real embedded devices and finds a total of 24 vulnerabilities in devices from D-Link, NETGEAR, EasyN, uniview, and so on, among which 14 are previously unknown, thus validating the effectiveness of the function-level data dependence graph in static vulnerability analysis.

    • Malware Detection Method Based on Subgraph Similarity

      2020, 31(11):3436-3447. DOI: 10.13328/j.cnki.jos.005863


      Abstract: Dynamic behavior analysis is a common method of malware detection. It uses graphs to represent a malware's system calls or resource dependencies, applies graph mining algorithms to find feature subgraphs common to known malware samples, and detects unknown programs through these features. However, such methods often rely on graph matching, which is inevitably slow, and they neglect the relationships between subgraphs; taking those relationships into account can improve detection accuracy. To solve these two problems, a subgraph-similarity malware detection method called DMBSS is proposed. It uses a data flow graph to represent the system behavior or events of a running malicious program, extracts malicious behavior feature subgraphs from the data flow graph, and uses an "inverse topology identification" algorithm to represent each feature subgraph as a string that encodes the subgraph's structural information, so that string matching replaces graph matching. A neural network is then used to compute the similarity between subgraphs by representing each subgraph structure as a high-dimensional vector, so that similar subgraphs lie closer together in the vector space. Finally, the subgraph vectors are used to construct a similarity function for malicious programs, and on this basis an SVM classifier is used to detect them. The experimental results show that, compared with other methods, DMBSS detects malicious programs faster and with higher accuracy.
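
      A small illustrative sketch of the serialization idea: emitting a feature subgraph's nodes in reverse topological order so that string comparison can stand in for graph matching. The paper's own "inverse topology identification" algorithm and its neural similarity model are not reproduced here; this only shows the general flavor of such an encoding, using networkx:

```python
import networkx as nx

def subgraph_to_string(g: nx.DiGraph) -> str:
    """Serialize an acyclic feature subgraph into a string.

    Nodes are emitted in reverse topological order together with the labels of
    their successors, so the string reflects the subgraph's structure.
    """
    order = list(reversed(list(nx.topological_sort(g))))
    parts = []
    for node in order:
        succ = sorted(g.nodes[s].get("label", str(s)) for s in g.successors(node))
        label = g.nodes[node].get("label", str(node))
        parts.append(f"{label}->({','.join(succ)})")
    return "|".join(parts)

# Example: a tiny data-flow feature subgraph (hypothetical node names).
g = nx.DiGraph()
g.add_edge("read", "buffer")
g.add_edge("buffer", "send")
print(subgraph_to_string(g))   # "send->()|buffer->(send)|read->(buffer)"
```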

    • Fault Localization Approach Using Term Frequency and Inverse Document Frequency

      2020, 31(11):3448-3460. DOI: 10.13328/j.cnki.jos.006021


      Abstract: Most existing fault localization approaches utilize statement coverage information to identify suspicious statements potentially responsible for failures. They generally use binary status information to represent statement coverage, indicating whether a statement is executed or not. However, this binary information only shows whether a statement was executed; it cannot evaluate the importance of a statement in a specific execution, which may degrade fault localization performance. To address this issue, this study proposes a fault localization approach using term frequency and inverse document frequency. Specifically, the proposed approach constructs an information model that identifies the influence of a statement in a test case and uses this model to evaluate the suspiciousness of a statement being faulty. The experiments show that the proposed approach significantly improves fault localization effectiveness.
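
      A compact sketch of the general idea, treating each test case as a "document" and each statement's executions within it as "term" occurrences; the exact information model and suspiciousness formula are the paper's, so the weighting below (tf-idf accumulated over failing tests) is only an illustrative assumption:

```python
import math
from collections import defaultdict

def tfidf_suspiciousness(executions, failed):
    """Rank statements by a tf-idf-style suspiciousness score.

    executions : dict test_id -> dict statement_id -> execution count
    failed     : set of test_ids that failed
    """
    n_tests = len(executions)
    # Document frequency: in how many tests each statement appears.
    df = defaultdict(int)
    for counts in executions.values():
        for stmt in counts:
            df[stmt] += 1

    score = defaultdict(float)
    for test_id, counts in executions.items():
        total = sum(counts.values()) or 1
        for stmt, cnt in counts.items():
            tf = cnt / total                      # weight of the statement in this run
            idf = math.log(n_tests / df[stmt])    # rarely executed statements weigh more
            if test_id in failed:                 # accumulate evidence from failing runs
                score[stmt] += tf * idf
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

# Example: statement "s3" dominates the failing run and is ranked first.
execs = {"t1": {"s1": 1, "s2": 1}, "t2": {"s1": 1, "s3": 5}}
print(tfidf_suspiciousness(execs, failed={"t2"}))
```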

    • Scenario-driven and Bottom-up Microservice Decomposition Method for Monolithic Systems

      2020, 31(11):3461-3480. DOI: 10.13328/j.cnki.jos.006031


      Abstract: As a typical form of cloud-native application, the microservice architecture has been widely used in various enterprise applications. In enterprise practice, many microservices are formed by decomposing and transforming legacy systems with a monolithic architecture. The decomposition decision, especially database decomposition, has a great impact on the quality of the resulting microservice system. At present, microservice decomposition decisions mainly depend on subjective human experience, and the whole process is costly, time-consuming, and uncertain. To solve this problem, this study proposes a scenario-driven, bottom-up microservice decomposition method for monolithic systems. The method uses scenarios to drive dynamic analysis that collects the method calls and database operations of the monolithic system, generates a database decomposition scheme by analyzing the associations among data tables, and then searches bottom-up to generate the corresponding code module decomposition scheme. Based on this method, a prototype tool, MSDecomposer, is implemented, which visualizes the decomposition process and supports feedback adjustment strategies along multiple dimensions. Case studies on several open-source software systems show that the proposed method can significantly speed up microservice decomposition decisions, reduce the decision-making burden on developers, and produce reasonable results.

    • Local Community Discovery Approach Based on Fuzzy Similarity Relation

      2020, 31(11):3481-3491. DOI: 10.13328/j.cnki.jos.005818


      Abstract: Online social media has developed rapidly in recent years, and many massive social networks have emerged. Traditional community detection methods, which require knowledge of the entire network, have difficulty handling these massive networks effectively. Local community detection can find the community of a given node through the connections among the nodes around it, without knowledge of the entire network structure, so it is of great significance in social media mining. Because the relations between pairs of nodes in real-world networks are fuzzy or uncertain, this study first describes the similarity relationship between two nodes as a fuzzy relation and defines the similarity between nodes as the membership function of that fuzzy relation. It is then proved that this fuzzy relation is a fuzzy similarity relation, and the local community is defined as the equivalence class of the given node with respect to the fuzzy similarity relation. Moreover, the local community of the given node is discovered by a maximal connected subgraph approach. The proposed algorithm is evaluated on both synthetic and real-world networks. The experimental results demonstrate that it is highly effective at finding the local community of a given node and achieves a higher F-score than other related algorithms.
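
      A minimal sketch of the overall procedure under simplifying assumptions: the membership function of the fuzzy relation is taken here as the Jaccard similarity of closed neighborhoods (the paper defines its own membership function), a lambda-cut turns the fuzzy similarity relation into a crisp graph, and the local community is read off as the connected component containing the seed node:

```python
import networkx as nx

def jaccard_membership(g, u, v):
    """Illustrative membership degree of the fuzzy relation between u and v."""
    nu, nv = set(g[u]) | {u}, set(g[v]) | {v}
    return len(nu & nv) / len(nu | nv)

def local_community(g, seed, lam=0.3):
    """Local community of `seed`: component of the lambda-cut graph containing it."""
    cut = nx.Graph()
    cut.add_nodes_from(g)
    for u, v in g.edges():
        if jaccard_membership(g, u, v) >= lam:   # keep only sufficiently similar pairs
            cut.add_edge(u, v)
    return nx.node_connected_component(cut, seed)

g = nx.karate_club_graph()
print(sorted(local_community(g, seed=0)))
```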

    • Kernel Subspace Clustering Algorithm for Categorical Data

      2020, 31(11):3492-3505. DOI: 10.13328/j.cnki.jos.005819


      Abstract: Current mainstream subspace clustering methods for categorical data depend on linear similarity measures and overlook the relationships between attributes. This study proposes an approach for clustering categorical data with a novel kernel soft feature-selection scheme. First, categorical data is projected into a high-dimensional kernel space by introducing a kernel function, and a similarity measure for categorical data in the kernel subspace is given. Based on this measure, the kernel subspace clustering objective function is derived and an optimization method is proposed to solve it. Finally, a kernel subspace clustering algorithm for categorical data is proposed; the algorithm considers the relationships between attributes and assigns each attribute a weight measuring its degree of relevance to the clusters, enabling automatic feature selection during clustering. A cluster validity index is also defined to evaluate the categorical clusters. Experimental results on synthetic and real-world datasets demonstrate that the proposed method effectively captures the nonlinear relationships among attributes and improves both the performance and the efficiency of clustering.

    • Multi-module TSK Fuzzy System Based on Training Space Reconstruction

      2020, 31(11):3506-3518. DOI: 10.13328/j.cnki.jos.005846


      Abstract: A multi-training-module Takagi-Sugeno-Kang (TSK) fuzzy classifier, H-TSK-FS, is proposed based on reconstruction of the training sample space. H-TSK-FS achieves good classification performance and high interpretability, and addresses the problem that the intermediate-layer outputs and fuzzy rules of existing hierarchical fuzzy classifiers are difficult to interpret. To achieve enhanced classification performance, H-TSK-FS is composed of several optimized zero-order TSK fuzzy classifiers. These zero-order TSK fuzzy classifiers adopt an ingenious training method: the original training samples, part of the samples from the previous layer, and the part of the decision information from all previously trained layers that best approximates the true values are projected into the training module of the current layer to constitute its input space. In this way, the training results of the previous layers guide and control the training of the current layer. Randomly selecting sample points and training features within a certain range opens up the manifold structure of the original input space and ensures better or comparable classification performance. In addition, this study focuses on datasets with a small number of samples and a small number of training features. In the design of each training unit, an extreme learning machine is used to obtain the Then-part parameters of the fuzzy rules. For each intermediate training layer, short rules are used to express knowledge; each fuzzy rule determines its input features and Gaussian membership functions by means of constraints, so as to ensure that the selected input features are highly interpretable. Experimental results on real datasets and application cases show that H-TSK-FS achieves enhanced classification performance and high interpretability.

    • Temporal Index and Query Based on Timing Partition

      2020, 31(11):3519-3539. DOI: 10.13328/j.cnki.jos.005826


      Abstract: Temporal indexing is one of the key methods for temporal data management and retrieval, and has been a research hotspot in the field of temporal data. This paper presents a temporal index technique, TPindex, based on a timing partition method. First, the temporal attributes of a massive amount of temporal data are mapped to a two-dimensional plane, and the valid-time points in this plane are sampled for timing partition. A "from top to bottom and from left to right" timing partition method divides the plane into several balanced temporal areas, and a whole-partition index is established at the same time. Once these steps are completed, temporal data can be dynamically indexed under the "one time, one set" query schema. Second, TPindex builds data structures for the data in each temporal area using a "linear order partition" algorithm based on a quasi-order relation. In addition, a disk-based "separated files model index" and a multi-threaded parallel processing technique, which can be combined, are proposed to make full use of modern hardware resources and meet the performance requirements of high-volume data, leading to better index performance. The incremental updating algorithm is also studied. Finally, simulation experiments compare the proposed approach with current representative work to verify its feasibility and validity.

    • Sampling-based Collection and Updating of Online Big Graph Data

      2020, 31(11):3540-3558. DOI: 10.13328/j.cnki.jos.005843


      Abstract: The large volume of unstructured data obtained from Web pages, social media, and knowledge bases on the Internet can be represented as an online big graph (OBG). OBG data acquisition, which includes data collection and updating, faces many challenges, such as large scale, wide distribution, heterogeneity, and rapid change, and it is the basis of massive data analysis and knowledge engineering. In this study, a method for adaptive and parallel data collection and updating is proposed based on sampling techniques. First, the HD-QMC algorithm is given for adaptive collection of OBG data by combining the branch-and-bound method with quasi-Monte Carlo sampling. Next, the EPP algorithm, based on entropy and a Poisson process, is given for efficient data updating so that the collected data reflects the dynamic changes of OBGs in real-world environments. Further, the effectiveness of the proposed algorithms is analyzed theoretically, and the various kinds of collected OBG data are represented uniformly as triples to provide an easy-to-use data foundation for OBG analysis and related studies. Finally, the proposed collection and updating algorithms are implemented with Spark, and experimental results on simulated and real-world datasets show the effectiveness and efficiency of the proposed method.
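
      A rough sketch of how a Poisson-process-style update schedule can be driven by each data source's observed change behavior; the entropy weighting and the exact EPP formulation belong to the paper, so the rate definition below is only an illustrative assumption:

```python
import numpy as np

def next_update_times(change_counts, window_hours, rng=None):
    """Sample the waiting time until the next re-crawl for each source.

    change_counts : observed number of changes per source within the window.
    window_hours  : length of the observation window.
    Sources that change more often get higher Poisson rates, hence shorter
    expected waiting times between updates.
    """
    rng = rng or np.random.default_rng()
    rates = np.asarray(change_counts, dtype=float) / window_hours   # events per hour
    rates = np.clip(rates, 1e-3, None)                              # avoid zero rates
    # For a Poisson process, inter-arrival times are exponential with mean 1/rate.
    return rng.exponential(1.0 / rates)

# Example: three OBG sources observed over a 24-hour window.
print(next_update_times([48, 6, 1], window_hours=24))
```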

    • Network Evolution Algorithm of Unmanned Aerial Vehicle Flocking Based on Two-hop Common Neighbor

      2020, 31(11):3559-3570. DOI: 10.13328/j.cnki.jos.005827


      Abstract: The disturbances faced by a UAV (unmanned aerial vehicle) flock while carrying out tasks pose a new challenge to the reliability of the flocking communication network. To this end, a two-hop common neighbor metric is proposed to reflect the heterogeneity of the network and the similarity between nodes simultaneously. Considering both the network initialization stage and the network maintenance stage, an LPTCN (link prediction based on two-hop common neighbors) network evolution algorithm is proposed. Mathematical analysis and simulation experiments verify the validity of the algorithm. The results show that the UAV flocking communication network constructed by the LPTCN algorithm has high survivability and invulnerability, and its reliability can be guaranteed under both random and deliberate attacks.
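
      A simple sketch of a two-hop common-neighbor score between two UAV nodes, assuming the metric counts nodes reachable from both endpoints within two hops; the paper's exact weighting of heterogeneity and similarity may differ:

```python
import networkx as nx

def two_hop_neighborhood(g, u):
    """All nodes within two hops of u, excluding u itself."""
    one_hop = set(g[u])
    two_hop = set()
    for w in one_hop:
        two_hop |= set(g[w])
    return (one_hop | two_hop) - {u}

def two_hop_common_neighbors(g, u, v):
    """Two-hop common-neighbor score used to rank candidate links."""
    return len(two_hop_neighborhood(g, u) & two_hop_neighborhood(g, v))

# Rank non-adjacent node pairs by the score, as a link-prediction step.
g = nx.erdos_renyi_graph(20, 0.15, seed=1)
pairs = [(u, v) for u in g for v in g if u < v and not g.has_edge(u, v)]
ranked = sorted(pairs, key=lambda p: two_hop_common_neighbors(g, *p), reverse=True)
print(ranked[:5])
```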

    • Graded Reversible Watermarking Scheme for Relational Data

      2020, 31(11):3571-3587. DOI: 10.13328/j.cnki.jos.005812


      Abstract: Reversible watermarking techniques for relational data are intended to protect copyright. They overcome the shortcomings of traditional watermarking techniques: they can not only claim the copyright of data but also recover the original data from the watermarked copy. However, existing reversible watermarking schemes for relational data cannot control the extent of data recovery. To address this problem, a graded reversible watermarking scheme for relational data is proposed in this study. A data quality grade is defined to depict the impact of watermark embedding on the usability of the data. Watermark embedding, grade detection, watermark detection, and grade enhancement algorithms are designed to achieve graded reversibility of the watermark. Before distributing the data, the data owner can predefine several data quality grades and then embed the watermark into data partitions; a unique key is used in each partition to control the position and value of the watermark information. If data users are not satisfied with the usability of the data, they can request or purchase the relevant keys from the owner to upgrade the data quality grade. The watermark in relational data at any data quality grade is sufficient to prove copyright. Flexible watermark reversion is achieved via a partitioned auxiliary data design, and a more practical mechanism is devised to efficiently handle hash table collisions, which reduces both computational and storage overhead. Experiments on the algorithms and the watermark show that the proposed scheme is feasible and robust.

    • ROP Attack Detection Approach Based on Hardware Branch Information

      2020, 31(11):3588-3602. DOI: 10.13328/j.cnki.jos.005829


      Abstract: Control flow integrity (CFI) is an effective method for defending against return-oriented programming (ROP) attacks. To address four drawbacks of current CFI approaches, namely high performance overhead, reliance on software code information, susceptibility to history flushing attacks, and susceptibility to evasion attacks, this study proposes an ROP attack detection approach based on hardware branch information, the mispredicted indirect branch checker (MIBChecker). It performs real-time ROP detection on every mispredicted indirect branch via events triggered by the performance monitoring unit, and proposes a new critical-syscall data detection approach to defend against ROP attacks that use short gadget chains. Experiments show that MIBChecker can detect gadgets without being affected by history flushing attacks, and can effectively detect common ROP attacks and evasion attacks with only 5.7% performance overhead.

    • Optimized Block-matching Motion Estimation Using Adaptive Zoom Coefficient

      2020, 31(11):3603-3620. DOI: 10.13328/j.cnki.jos.005864


      Abstract: Fast block-wise motion estimation based on the translational model solves the high computational complexity issue to some extent, but it sacrifices motion compensation quality, while higher-order motion models still suffer from computational inefficiency and unstable convergence. Through a number of experiments, it is found that about 56.21% of video blocks contain zoom motion, leading to the conclusion that zoom motion is one of the most important motion forms in video besides translational motion. Therefore, a zoom coefficient is introduced into the conventional block-wise translational model via bilinear interpolation, and the motion-compensated error is modeled as a quadratic function of the zoom coefficient. An approach is then derived, using Vieta's theorem, to compute the optimal zoom coefficient for 1D zoom motion, and it is further extended to 2D zoom motion with equal proportions. On this basis, a fast block-matching motion estimation algorithm optimized by the adaptive zoom coefficient is presented. It first uses the diamond search (DS) to compute the translational motion vector, and then determines the optimal matching block for the block to be predicted using the adaptive zoom coefficient. Experimental results on 33 standard test video sequences show that the proposed algorithm gains 0.11 dB and 0.64 dB higher motion-compensated peak signal-to-noise ratio (PSNR) than the full search (FS) and the DS based on the block-wise translational model, respectively. Its computational complexity is 96.02% lower than that of the FS and slightly higher than that of the DS. Compared with motion estimation based on the zoom model, the average PSNR of the proposed algorithm is 0.62 dB lower than that of the 3D full search, but 0.008 dB higher than that of the fast 3D diamond search, while its computational complexity amounts to only 0.11% and 3.86% of the 3D full search and the 3D diamond search, respectively. Meanwhile, the proposed algorithm realizes self-synchronization between the encoder and decoder without transmitting zoom vectors, so it does not increase the side-information overhead. Additionally, the proposed adaptive zoom coefficient computation can be combined with state-of-the-art fast block-wise motion estimation algorithms other than the diamond search to improve their motion-compensation quality.
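
      A simplified sketch of the zoom-refinement step: after a translational match is found (e.g., by diamond search), candidate zoom factors are evaluated by bilinearly resampling a scaled reference patch back to the block size and keeping the factor with the lowest SAD. The paper derives the optimal factor in closed form via Vieta's theorem; the small exhaustive candidate set below is only an illustrative stand-in:

```python
import numpy as np
import cv2

def refine_with_zoom(cur_block, ref_frame, top_left,
                     factors=(0.9, 0.95, 1.0, 1.05, 1.1)):
    """Pick the zoom factor that best matches cur_block at the translational match.

    cur_block : (B, B) block of the current frame (grayscale).
    ref_frame : reference frame (grayscale).
    top_left  : (y, x) of the best translational match in the reference frame.
    """
    B = cur_block.shape[0]
    y, x = top_left
    best = (None, np.inf)
    for s in factors:
        size = int(round(B * s))
        patch = ref_frame[y:y + size, x:x + size]
        if patch.shape != (size, size):        # zoomed patch falls off the frame
            continue
        # Bilinear resampling of the scaled reference patch back to block size.
        pred = cv2.resize(patch, (B, B), interpolation=cv2.INTER_LINEAR)
        sad = np.abs(cur_block.astype(np.float32) - pred.astype(np.float32)).sum()
        if sad < best[1]:
            best = (s, sad)
    return best   # (zoom factor, SAD of the zoom-compensated prediction)
```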

    • Feature Representation Method of Microscopic Sandstone Images Based on Convolutional Neural Network

      2020, 31(11):3621-3639. DOI: 10.13328/j.cnki.jos.005836


      Abstract: The classification of microscopic sandstone images is a basic task in geological research and is of great significance for the evaluation of oil and gas reservoirs. In the automatic classification of microscopic sandstone images, hand-crafted features have a limited ability to represent the complex and variable micro-structures of the images. In addition, since the collection and labeling of sandstone samples are costly, labeled microscopic sandstone images are usually scarce. In this study, a convolutional neural network based feature representation method for small-scale datasets, called FeRNet, is proposed to effectively capture the semantic information of microscopic sandstone images and enhance their feature representation. The FeRNet network has a simple structure, which reduces the quantity of labeled images required and prevents overfitting. To address the shortage of labeled microscopic sandstone images, image augmentation preprocessing and a CAE network-based weight initialization strategy are proposed to reduce the risk of overfitting. Experiments are designed and conducted on microscopic sandstone images collected from Tibet. The results show that when labeled microscopic sandstone images are few, both image augmentation and the CAE network can effectively improve the training of the FeRNet network, and that the FeRNet features represent microscopic sandstone images better than hand-crafted features.
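
      A minimal PyTorch sketch of a compact CNN feature extractor in the spirit described above; the actual FeRNet architecture, the CAE-based initialization, and the augmentation pipeline are the paper's, so the layer shapes and class count here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SmallSandstoneCNN(nn.Module):
    """Compact CNN: few layers to limit overfitting on a small labeled set."""
    def __init__(self, num_classes=4):       # class count is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> 64-d feature
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                     # x: (batch, 3, H, W)
        f = self.features(x).flatten(1)       # (batch, 64) image representation
        return self.classifier(f)

# The 64-d pooled vector can also serve directly as the image's feature
# representation for downstream classifiers, as the abstract describes.
model = SmallSandstoneCNN()
logits = model(torch.randn(2, 3, 224, 224))
```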

    • Weakly Supervised Image Semantic Segmentation Method Based on Object Location Cues

      2020, 31(11):3640-3656. DOI: 10.13328/j.cnki.jos.005828


      Abstract: Deep convolutional neural networks have achieved excellent performance in image semantic segmentation with strong pixel-level annotations. However, pixel-level annotation is very expensive and time-consuming. To overcome this problem, this study proposes a new weakly supervised image semantic segmentation method using only image-level annotations. The proposed method consists of three steps: (1) based on a network shared between the classification and segmentation tasks, a class-specific attention map is obtained as the derivative of the spatial class scores (the class scores of pixels in the two-dimensional image space) with respect to the network feature maps; (2) a saliency map is obtained by a successive erasing method and is used to supplement the object localization information missing from the attention maps; (3) the attention map is combined with the saliency map to generate pseudo pixel-level annotations, which are used to train the segmentation network. A series of comparative experiments demonstrate the effectiveness and better segmentation performance of the proposed method on the challenging PASCAL VOC 2012 image segmentation dataset.
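
      A small numpy sketch of step (3), combining a class-specific attention map with a saliency map into pseudo pixel-level labels; the thresholds and the ignore-label convention below are illustrative assumptions, not the paper's exact fusion rule:

```python
import numpy as np

IGNORE = 255   # pixels left unlabeled for the segmentation loss

def make_pseudo_labels(attention, saliency, fg_thr=0.5, bg_thr=0.1):
    """Fuse attention and saliency cues into pseudo pixel-level labels.

    attention : (C, H, W) per-class attention maps in [0, 1] (C foreground classes).
    saliency  : (H, W) saliency map in [0, 1].
    Returns an (H, W) map: 0 = background, 1..C = object classes, IGNORE = unknown.
    """
    cls = attention.argmax(axis=0) + 1            # most attended foreground class
    conf = attention.max(axis=0)
    labels = np.full(saliency.shape, IGNORE, dtype=np.uint8)
    fg = (conf >= fg_thr) & (saliency >= bg_thr)  # attended and salient -> object
    bg = (conf < fg_thr) & (saliency < bg_thr)    # neither -> background
    labels[fg] = cls[fg]
    labels[bg] = 0
    return labels

# The resulting label map then supervises the segmentation network training.
```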

    • Scheduling Algorithm for Mixed-criticality Jobs Based on Dynamical Demand Boundary

      2020, 31(11):3657-3670. DOI: 10.13328/j.cnki.jos.005839


      Abstract: An important trend in embedded systems is to integrate functions with different levels of importance onto a shared hardware platform, forming a mixed-criticality system. Most existing mixed-criticality scheduling theory does not allow the system criticality to switch back from high to low, in order to guarantee the jobs with higher criticality, which harms the overall performance of the system. To deal with this problem, this paper extends traditional demand bound analysis to mixed-criticality systems and presents the concept of a dynamic demand boundary for mixed-criticality jobs, which represents the dynamic run-time demand of jobs as a vector. Then, based on the slack time of mixed-criticality jobs and the system criticality, the paper presents an algorithm called CSDDB (criticality switch based on dynamical demand boundary). The algorithm chooses the criticality level with the minimum slack time as the execution criticality of the system, so as to make full use of system resources and guarantee the execution of jobs with lower criticality without affecting the schedulability of high-criticality jobs. Experiments with randomly generated workloads show that CSDDB improves the guarantee of system criticality and the completion of the job set by more than 10% compared with existing approaches.
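
      A toy sketch of the criticality-selection rule described above: compute, for each criticality level, the slack between the time available and the jobs' dynamic demand at that level, and run the system at the level with the minimum slack. The dynamic-demand-boundary vectors themselves come from the paper's analysis; the numbers here are purely illustrative:

```python
def choose_execution_criticality(demand_vectors, available_time):
    """Pick the system's execution criticality level.

    demand_vectors : dict level -> total remaining demand of all jobs at that level
                     (one entry of each job's dynamic demand boundary vector).
    available_time : processor time available up to the analysis horizon.
    Returns the level with the minimum non-negative slack, so resources are used
    as fully as possible while the high-criticality demand stays schedulable.
    """
    slacks = {lvl: available_time - demand for lvl, demand in demand_vectors.items()}
    feasible = {lvl: s for lvl, s in slacks.items() if s >= 0}
    if not feasible:                      # nothing fits: fall back to the highest level
        return max(demand_vectors)
    return min(feasible, key=feasible.get)

# Example with two criticality levels (LO=0, HI=1) and 100 time units available.
print(choose_execution_criticality({0: 90, 1: 60}, available_time=100))  # -> 0
```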
