• Volume 30, Issue 9, 2019 Table of Contents
    • Review Articles
    • Survey of Enterprise Blockchains

      2019, 30(9):2571-2592. DOI: 10.13328/j.cnki.jos.005775

      Abstract: In legacy enterprise applications for cross-institution transactions, each institution maintains its own ledger. Discrepancies between ledgers lead to disputes and increase the need for manual reconciliation, with long settlement times and the overhead costs of intermediaries. A blockchain implementing a distributed ledger, in which transactions must be validated by consensus and cannot be altered once written, guarantees the consistency of multi-institutional data and removes the need for manual reconciliation and intermediaries. A blockchain is a decentralized, tamper-proof, traceable distributed database managed by multiple participants without requiring mutual trust. An enterprise blockchain additionally satisfies enterprise application requirements: any node must be authorized and authenticated before joining the network. This paper presents an architecture model of enterprise blockchains based on three mainstream blockchain platforms: Hyperledger Fabric, Corda, and Quorum. The principles and technologies of enterprise blockchains are then discussed in terms of transaction flow, P2P networking, consensus mechanisms, blockchain data, smart contracts, and privacy. Finally, by analyzing the limitations of existing technologies, challenging research issues and technology trends for enterprise blockchains are summarized.

    • Special Issue Articles
    • Proof of Trust: Mechanism of Trust Degree Based on Dynamic Authorization

      2019, 30(9):2593-2607. DOI: 10.13328/j.cnki.jos.005772

      Abstract: A trust degree mechanism based on dynamic authorization, proof of trust (PoT), is proposed in this study. The mechanism addresses problems in existing block generation strategies such as the nothing-at-stake problem and bribe attacks. There are two types of nodes in the network: miners and stakeholders. Trust degree is assigned according to a node's behavior when participating in block creation. Once a node becomes a stakeholder of the network, it endorses a block by signing it with its private key. Finally, blocks compete on trust degree to be accepted as the legal extension of the blockchain. The cost of mounting bribe attacks and common stake accumulation attacks, as well as the system's response to these attacks, is also analyzed. Simulation results show that the PoT mechanism defends against nothing-at-stake, bribe, and stake accumulation attacks more effectively than proof of stake.
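The fork-choice idea behind PoT can be illustrated with a toy sketch. This is our own simplification, not the authors' protocol: each block records the trust degrees of the stakeholders that endorsed it, and competing forks are ranked by cumulative trust rather than cumulative work.

```python
# Toy trust-weighted fork choice (illustrative simplification, not the
# paper's PoT protocol): blocks carry stakeholder endorsements, and the
# fork with the highest cumulative trust wins.

def block_trust(block):
    """Trust contributed by one block = sum of its endorsers' trust degrees."""
    return sum(block["endorsements"].values())

def chain_trust(chain):
    """Cumulative trust of a candidate chain."""
    return sum(block_trust(b) for b in chain)

def select_chain(candidates):
    """Pick the fork with the highest cumulative trust."""
    return max(candidates, key=chain_trust)

fork_a = [{"endorsements": {"s1": 0.9, "s2": 0.8}},
          {"endorsements": {"s1": 0.9}}]
fork_b = [{"endorsements": {"s3": 0.4}},
          {"endorsements": {"s3": 0.4, "s4": 0.5}}]

best = select_chain([fork_a, fork_b])  # fork_a accumulates more trust
```

Under this scoring, an attacker who merely accumulates stake gains nothing without earning trust degree through well-behaved block creation.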

    • Formal Definition for Classical Smart Contracts and Reference Implementation

      2019, 30(9):2608-2619. DOI: 10.13328/j.cnki.jos.005773

      Abstract: Smart contracts are a key component of blockchain systems and have been widely applied in practice. However, there is no uniform definition of a smart contract, and implementations differ considerably across platforms. This situation affects the public perception of smart contracts and hinders the development of the blockchain industry. This study recalls the history of smart contracts, traces how the concept has evolved, summarizes the essence of smart contracts, and analyzes and compares existing implementations. A formal definition of classical smart contracts is proposed, which may lay a foundation for the standardization of smart contracts. A common implementation method independent of any particular blockchain platform is also given, along with a reference implementation based on Hyperledger Fabric. Finally, conclusions are presented and future work is outlined.

    • Archival Data Protection and Sharing Method Based on Blockchain

      2019, 30(9):2620-2635. DOI: 10.13328/j.cnki.jos.005770

      Abstract: In view of problems in the management of archival data, such as centralized storage, poor security, and weak tamper resistance, this study proposes an archival data protection and sharing method based on blockchain technology. Identification of digital archives and determination of their ownership are achieved through smart contracts and digital signature technologies; archival files are protected, verified, restored, and shared through smart contracts and the InterPlanetary File System (IPFS); and by combining permissioned and permissionless blockchains, economic cost is reduced while data security is guaranteed and data extensibility is improved. Featuring decentralization, security, credibility, and tamper resistance, this method is expected to transform how archives store data and to meet the increasing demand for the protection and sharing of archival data.
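The tamper-evidence idea at the core of such designs — record a content hash of the archival file (the role IPFS content addresses play), then recompute it to verify integrity — can be sketched in a few lines. The record format here is hypothetical, for illustration only:

```python
import hashlib

def archive_record(archive_id, content):
    """Record the SHA-256 digest of an archival file; a stand-in for the
    content address that IPFS would assign and the chain would store."""
    return {"id": archive_id, "digest": hashlib.sha256(content).hexdigest()}

def verify(record, content):
    """Re-hash the presented content and compare with the stored digest."""
    return hashlib.sha256(content).hexdigest() == record["digest"]

rec = archive_record("doc-001", b"original archival document")
ok = verify(rec, b"original archival document")        # intact file passes
tampered = verify(rec, b"modified archival document")  # any change is detected
```

Because the digest lives on an append-only ledger, even the custodian of the file cannot silently alter it after the fact.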

    • Blockchain-based Access Control Mechanism for Big Data

      2019, 30(9):2636-2654. DOI: 10.13328/j.cnki.jos.005771

      Abstract: Given the wide sources, high dynamics, and distributed management of big data resources, current mainstream centralized access control mechanisms suffer from low efficiency, insufficient flexibility, and poor scalability. This study therefore proposes a blockchain-based access control mechanism for big data built on the attribute-based access control (ABAC) model. First, the fundamental principles of blockchain technology are described and the ABAC model is formalized. Then, a blockchain-based access control architecture for big data is presented, and its basic framework and access control flow are analyzed. To keep access control information tamper-resistant, auditable, and verifiable, transaction-based methods for managing access control policies and entity attribute information are described in detail. In addition, a smart contract-based access control method is used to implement user-driven, transparent, dynamic, and automated access control for big data resources. Finally, simulation experiments validate the effectiveness of the mechanism, and the paper's contributions and future directions are summarized.
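The ABAC decision the paper formalizes — grant access only if the subject's and resource's attributes satisfy every condition in the policy for the requested action — can be made concrete with a minimal check. The policy shape below is our own sketch, not the paper's on-chain encoding:

```python
# Minimal attribute-based access control (ABAC) decision function.
# Policy layout is illustrative; real policies also carry environment
# attributes and richer predicates than equality.

def permits(policy, subject_attrs, resource_attrs, action):
    """Grant access iff the action is allowed and every attribute
    condition in the policy is satisfied."""
    if action not in policy["actions"]:
        return False
    return (all(subject_attrs.get(k) == v for k, v in policy["subject"].items())
            and all(resource_attrs.get(k) == v for k, v in policy["resource"].items()))

policy = {"actions": {"read"},
          "subject": {"role": "analyst", "clearance": "high"},
          "resource": {"category": "sensor-data"}}

granted = permits(policy,
                  {"role": "analyst", "clearance": "high"},
                  {"category": "sensor-data"}, "read")
denied = permits(policy,
                 {"role": "analyst", "clearance": "low"},
                 {"category": "sensor-data"}, "read")
```

In the paper's design, policies and attribute records like these are stored as blockchain transactions, so the inputs to this decision cannot be tampered with after publication.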

    • Efficient Query Model for Storage Capacity Scalable Blockchain System

      2019, 30(9):2655-2670. DOI: 10.13328/j.cnki.jos.005774

      Abstract: Blockchain technology is a research hotspot in computing. Decentralized, secure blockchain data effectively reduces the trust costs of the real economy. This study proposes ElasticQM, an efficient query model for storage-capacity-scalable blockchains. ElasticQM consists of four layers: a user layer, a query layer, a storage layer, and a data layer. The user layer places query results in a cache, which accelerates repeated queries for the same data. In the query layer, a global query optimization algorithm for the scalable blockchain model is proposed, introducing the roles of query super node, query verification node, and query leaf node to improve the efficiency of global queries. In the storage layer, the model improves the data storage process of ElasticChain to support large-scale blockchains, achieving storage capacity scalability and reducing storage space. In the data layer, a blockchain storage structure based on the B-M tree is proposed, together with algorithms for constructing and searching the tree; B-M-tree-based blockchains speed up local search within a block. Experimental results on real datasets show that ElasticQM achieves high query efficiency.
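The user-layer caching step — serve a repeated query locally instead of re-querying the chain — can be sketched as a small LRU layer. The backend lookup function here is a hypothetical stand-in for a full blockchain query, not ElasticQM's actual interface:

```python
from collections import OrderedDict

class QueryCache:
    """User-layer cache: repeated queries for the same key are answered
    locally instead of going back to the blockchain nodes."""
    def __init__(self, backend, capacity=128):
        self.backend = backend          # function: key -> result
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = 0

    def query(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key]
        result = self.backend(key)
        self.cache[key] = result
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return result

calls = []
def slow_chain_lookup(tx_hash):
    """Stand-in for an expensive multi-node blockchain query."""
    calls.append(tx_hash)
    return {"tx": tx_hash, "value": 42}

qc = QueryCache(slow_chain_lookup, capacity=2)
qc.query("h1")
qc.query("h1")  # second call is served from the cache
```

Only one backend call is made for the two queries; in the full model, misses would then fan out through the query super, verification, and leaf nodes.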

    • BlockchainDB: Queryable and Immutable Database

      2019, 30(9):2671-2685. DOI: 10.13328/j.cnki.jos.005776

      Abstract: With the rise of cryptocurrencies such as Bitcoin and Ether, the underlying blockchain technology has received increasing attention. Blockchains are known for decentralization and immutability. Ethereum uses blockchain technology to build a next-generation decentralized application platform. BigchainDB combines blockchain technology with traditional distributed databases, replacing the traditional PoW mechanism with federation-based voting to improve the system's scalability and throughput. However, existing blockchain systems mostly store transaction information in a fixed form. Although each transaction contains data fields, existing systems cannot directly query the specific details within those fields: one must first look up the transaction by its hash to obtain the complete transaction, and then extract the details from its data. This mechanism offers poor data operability and lacks the query functions of a traditional database. This study first proposes a blockchain database system framework that applies blockchain technology to distributed data management. Then, an immutable index based on hash functions is proposed; using this index, data in a block can be quickly retrieved, enabling query processing over the blockchain. Finally, experiments test the database's read/write performance. The results show that the immutable index offers good read/write performance while ensuring immutability.
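The gap described above — field-level lookups require a full hash-then-scan round trip — motivates indexing data fields directly. A minimal sketch of a hash-keyed field index follows; it is our simplification of the idea, not BlockchainDB's actual structure:

```python
import hashlib

# Sketch of a hash-based index over transaction data fields: each
# (field, value) pair is hashed to an index key that points at the
# block height and transaction hash, so field queries avoid scanning
# every transaction on the chain.

def index_key(field, value):
    return hashlib.sha256(f"{field}={value}".encode()).hexdigest()

def build_index(blocks):
    idx = {}
    for height, block in enumerate(blocks):
        for tx in block["txs"]:
            for field, value in tx["data"].items():
                idx.setdefault(index_key(field, value), []).append((height, tx["hash"]))
    return idx

blocks = [{"txs": [{"hash": "t1", "data": {"owner": "alice"}}]},
          {"txs": [{"hash": "t2", "data": {"owner": "bob"}},
                   {"hash": "t3", "data": {"owner": "alice"}}]}]

idx = build_index(blocks)
hits = idx[index_key("owner", "alice")]  # locations of alice's transactions
```

Because the index keys are hashes of the indexed content itself, any tampering with a field value breaks the correspondence between index and data, which is the intuition behind calling the index immutable.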

    • High-dimensional Multi-objective Optimization Strategy Based on Decision Space Oriented Search

      2019, 30(9):2686-2704. DOI: 10.13328/j.cnki.jos.005842

      Abstract: Traditional multi-objective evolutionary algorithms (MOEAs) perform well on low-dimensional continuous multi-objective optimization problems, but as the number of objectives increases, optimization becomes dramatically harder. The main reasons are insufficient search ability, the reduced selection pressure as dimensionality grows, and the difficulty of balancing the conflict between convergence and distribution. After analyzing the characteristics of continuous multi-objective optimization problems, this study proposes a directional search strategy based on the decision space (DS) for solving high-dimensional multi-objective optimization problems. The strategy can be combined with MOEAs based on the dominance relation. DS first samples solutions from the population, analyzes them, and from the problem characteristics obtains the controlling vectors of the convergence subspace and the distribution subspace. The algorithm is divided into a convergence search stage and a distribution search stage, corresponding to the two subspaces. In each stage, sampling analysis results are used to macroscopically control the region in which offspring are generated. Convergence and distribution are handled in separate stages, which avoids the difficulty of balancing them and allows search resources to be concentrated on one aspect at a time, strengthening the algorithm's search ability. In the experiments, NSGA-II and SPEA2 combined with the DS strategy are compared against the original NSGA-II and SPEA2, and DS-NSGA-II is further compared with other state-of-the-art high-dimensional algorithms such as MOEA/D-PBI, NSGA-III, HypE, MSOPS, and LMEA. The experimental results show that the DS strategy greatly improves the performance of NSGA-II and SPEA2 on high-dimensional multi-objective optimization problems, and that DS-NSGA-II is competitive with existing classical high-dimensional multi-objective algorithms.
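The dominance relation on which DS-compatible MOEAs such as NSGA-II select solutions can be stated concretely (minimization assumed; this is the standard definition, not anything specific to DS):

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(population):
    """Extract the non-dominated front of a set of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = nondominated(pop)  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

As the number of objectives grows, ever fewer pairs satisfy `dominates`, which is exactly the loss of selection pressure the abstract cites as a motivation for DS.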

    • Short Text Summary Generation with Global Self-matching Mechanism

      2019, 30(9):2705-2717. DOI: 10.13328/j.cnki.jos.005850

      Abstract: In recent years, sequence-to-sequence learning with the encoder-decoder architecture has become the mainstream approach to summary generation. Such models usually consider only a limited window of words around each word when computing its hidden state, and thus cannot capture global information or optimize with respect to it. To address this challenge, this study introduces a global self-matching mechanism to optimize the encoder globally, and proposes a global gating unit to extract the core content of the text. For each word, the global self-matching mechanism dynamically collects relevant information from the entire input text according to the match between the word's semantics and the overall semantics of the text, and encodes the word and its matching information into a final hidden representation that contains global information. Meanwhile, since integrating global information into every word may cause redundancy, the global gating unit filters the information flowing into the decoder according to the global information obtained from the self-matching layer, distilling the core content of the source text. Experimental results show that the proposed model achieves a significant improvement in ROUGE scores over state-of-the-art methods.
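A toy version of the match-then-gate idea can make the mechanism concrete. This is a drastic simplification of the paper's neural architecture: each word vector is scored against a global (mean) vector of the text, and a sigmoid gate scales how much of the word's information passes on.

```python
import math

# Toy global self-matching + gating (illustrative only): score each word
# against the text's mean vector, then gate the word by a sigmoid of the
# match score so globally relevant words pass more information through.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def global_gate(word_vecs):
    n, dim = len(word_vecs), len(word_vecs[0])
    global_vec = [sum(w[d] for w in word_vecs) / n for d in range(dim)]
    gated = []
    for w in word_vecs:
        match = dot(w, global_vec) / (norm(w) * norm(global_vec) + 1e-8)
        gate = 1.0 / (1.0 + math.exp(-match))  # sigmoid of the match score
        gated.append([gate * x for x in w])    # filter the information flow
    return gated

out = global_gate([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

The third word, which aligns best with the global vector, is attenuated least; in the real model both the matching and the gate are learned rather than fixed cosine/sigmoid functions.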

    • Multi-robot Federating Method for Disjoint Segments in Wireless Sensor Network Based on Search and Bandwidth Awareness

      2019, 30(9):2718-2732. DOI: 10.13328/j.cnki.jos.005515

      Abstract: To improve the adaptability and efficiency of methods for federating disjoint segments in wireless sensor networks (WSNs) when the distribution of the segments is unknown and relays are bandwidth-constrained, an optimization problem, multi-robot federating of disjoint segments in WSNs based on search and bandwidth awareness (MRF-SNSA), is formulated, and approximate algorithms for solving it are given. First, based on the model assumptions and symbol definitions, an iterative process that synchronizes step-flow rounding is introduced, drawing on the ideas of iterated local search and flow shop scheduling, and a formal definition of MRF-SNSA is established. Then, building on an overlap-aware connectivity search algorithm, a hierarchical relay deployment algorithm with search and bandwidth awareness is designed using theories of hierarchical clustering and network flow. Finally, experiments comparing against an existing method show that the proposed approach effectively improves the efficiency of federating disjoint segments in WSNs while satisfying the constraints of unknown segment distribution and bandwidth-constrained relays.

    • Detecting Covert Timing Channels Based on Difference Entropy

      2019, 30(9):2733-2759. DOI: 10.13328/j.cnki.jos.005518

      Abstract: A covert channel builds a hidden communication channel on top of a legitimate one (the "overt channel"). Compared with encryption, a covert channel is harder to detect because it conceals the very act of covert communication as well as the message it carries. The emergence of covert channels threatens information security and personal privacy on the public Internet; hackers and criminals, in particular, use covert channels to exfiltrate secret information past the inspection of security facilities. It is therefore crucial to design and deploy more efficient and accurate detection algorithms for covert channels. This study proposes a detection algorithm for covert timing channels based on difference entropy. First, the definition of difference entropy is introduced; then the principle of the algorithm is presented, together with its implementation and parameter optimization. Finally, the performance of the algorithm is evaluated through experiments, and the results show that it is effective in detecting the IPCTC, TRCTC, and JitterBug covert timing channels.
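The entropy computation at the heart of such detectors can be sketched minimally: bin the inter-packet delays and measure the Shannon entropy of the bin sequence. Channels like IPCTC, which encode bits in a small set of fixed delays, produce markedly lower entropy than legitimate traffic. The bin width, traffic shapes, and threshold here are illustrative, not the paper's exact parameters:

```python
import itertools
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def delay_entropy(timestamps, bin_width=0.01):
    """Entropy of binned inter-packet delays."""
    delays = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return shannon_entropy(round(d / bin_width) for d in delays)

# IPCTC-style covert traffic: only two delay values (one per bit) -> low entropy
bits = [0, 1, 1, 0, 1, 0, 0, 1] * 4
ts_covert = list(itertools.accumulate((0.05 if b else 0.10 for b in bits),
                                      initial=0.0))

# Legitimate-looking traffic: many distinct delay values -> higher entropy
ts_legit = list(itertools.accumulate((0.03 + 0.01 * (i % 7) for i in range(32)),
                                     initial=0.0))
```

A detector would compare such entropy scores against a threshold calibrated on known-clean traffic; the paper's difference entropy refines this basic measure.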

    • CP-ABE Scheme Supporting Fine-grained Attribute Direct Revocation

      2019, 30(9):2760-2771. DOI: 10.13328/j.cnki.jos.005420

      Abstract: In attribute-based cryptosystems, a user's identity is extended to a set of attributes. To handle the access control problems caused by changes in users' attributes, attribute-based encryption (ABE) schemes with attribute revocation have been proposed. However, most existing ABE schemes suffer from high revocation cost or coarse-grained revocation. Moreover, the attribute key escrow problem is serious: since the user's attribute private key is generated by the attribute authority alone, the authority can impersonate any user to decrypt ciphertexts. To remedy these problems, this study proposes a ciphertext-policy ABE scheme supporting fine-grained direct attribute revocation, together with its formal definition and security model. In the proposal, a user's attribute private key is generated jointly by the system authority and multiple attribute authorities, so that each attribute authority's privilege is effectively limited. Furthermore, the proposal constructs an efficient re-encryption method based on the access tree which, together with an attribute revocation list, realizes fine-grained direct attribute revocation at low cost. A formal security proof shows that the proposal achieves indistinguishability under adaptive chosen-ciphertext attack and protects the system against untrusted authorities. Compared with similar schemes, the proposal achieves higher computational efficiency and finer-grained direct attribute revocation.

    • Defending Against Code Reuse Attacks Using Live Code Randomization

      2019, 30(9):2772-2790. DOI: 10.13328/j.cnki.jos.005516

      Abstract: As code reuse attacks (CRAs) become increasingly complex, legacy code randomization methods can no longer provide adequate protection. An approach called LCR is presented to defend against CRAs via live code randomization. LCR monitors, in real time, suspicious operations that aim to locate or use gadgets. When such events occur, LCR randomizes the function blocks of the target process in memory, so that the gadget information known to attackers becomes invalid and attacks composed of those gadgets fail. A prototype of LCR is implemented to test the proposed method. Experimental results show that LCR effectively defends against CRAs based on direct or indirect memory disclosure, while introducing low runtime overhead on SPEC CPU2006: less than 5% on average.

    • Hierarchical Anti-Spoofing Alliance Construction Approach

      2019, 30(9):2791-2814. DOI: 10.13328/j.cnki.jos.005517

      Abstract: IP spoofing, one of the most threatening security flaws in the current Internet, causes a range of problems in network management and telecommunications billing. Researchers have therefore proposed defense mechanisms based on mutual egress filtering, which use the best current anti-spoofing practice, egress filtering, to discard spoofed packets efficiently, while increasing deployment incentives by constructing an anti-spoofing alliance. However, existing work has the following disadvantages: its flat architecture leads to high filter and communication overhead, and its inefficient data processing and non-member identification lead to high computation overhead and low precision of filter optimization. This study therefore proposes a hierarchical anti-spoofing alliance construction approach based on mutual egress filtering. Extensive mathematical analysis and simulations evaluate the proposed approach; the results show that it significantly outperforms prior approaches in filter overhead, communication overhead, computation overhead, and precision of filter optimization.

    • Group VPN System and Multicast Key Distribution Protocol Based on Group-oriented Cryptography

      2019, 30(9):2815-2829. DOI: 10.13328/j.cnki.jos.005588

      Abstract: The rapid growth of the Internet economy has increased the demand from enterprises for network connections among many branches at large, even global, scale. VPNs originally built on a centralized gateway model are gradually shifting to peer-to-peer architectures. Existing peer-to-peer VPN technology, built on two-party key exchange, is best suited to pairwise communication; however, because tunnel keys are mutually independent in multi-node communication, the cumulative encryption delays across different tunnels make synchronous message passing difficult. To address this problem, this study proposes a peer-to-peer VPN framework called GroupVPN, which improves the efficiency of multicast communication through a decentralized, highly scalable multicast key distribution protocol. The framework adds a group management layer above the secure tunnel layer to facilitate dynamic group management and efficient key distribution. By incorporating broadcast encryption (BE) under a public-key group-oriented cryptographic infrastructure, the protocol realizes efficient key distribution for arbitrary groups through two mechanisms: designation and revocation. Security analysis indicates that the protocol meets the security requirements of data privacy, data integrity, and identity authenticity under the strong Diffie-Hellman (SDH) assumption. Experimental analysis also shows that the protocol's communication and key-storage overheads are independent of group size, and that its communication delay is dominated by the session key distribution phase.

    • Review Articles
    • Data Governance Technology

      2019, 30(9):2830-2856. DOI: 10.13328/j.cnki.jos.005854

      Abstract: With the pervasiveness of information technology, the amount of data generated by human beings is growing at an exponential rate, and such massive data requires management with new methodologies. Data governance is the management of data as a strategic asset of an organization (enterprise or government): a set of management mechanisms covering everything from data collection to processing and application, aiming to improve data quality, achieve wide data sharing, and ultimately maximize data value. Although research and development on big data is now popular in many domains, big data governance is still in its infancy, yet organizational decision-making cannot be separated from sound data governance. This paper first introduces the concepts, development, and necessity of data governance and big data governance, then analyzes existing data governance technologies (data specification, data cleaning, data exchange, and data integration), and discusses maturity measurement and framework design for data governance. On this basis, the paper puts forward a "HAO governance" model for big data governance, which aims to facilitate HAO intelligence combining human intelligence (HI), artificial intelligence (AI), and organizational intelligence (OI), and instantiates the model with public security data governance as an example. Finally, the paper summarizes data governance together with its challenges and opportunities.

    • PUseqClust: A Clustering Analysis Method for RNA-Seq Data

      2019, 30(9):2857-2868. DOI: 10.13328/j.cnki.jos.005512

      Abstract: Clustering analysis is an important technique for gene expression data analysis: it groups genes with similar expression patterns to explore unknown gene functions. In recent years, RNA-seq technology has been widely adopted to measure gene expression, producing large volumes of read data that make such clustering possible. Read counts are commonly modeled by the negative binomial distribution to reduce the impact of non-uniform read distributions, yet most existing clustering methods process read counts directly: they do not fully account for the various sources of noise in the data or the uncertainty of gene expression measurements, and some also ignore the variability of cluster centers. This study proposes the PUseqClust (propagating uncertainty into RNA-Seq clustering) framework for clustering RNA-seq data. The framework first uses PGSeq to model the stochastic process of read generation; the Laplace method is then used to account for correlations between expression levels across conditions and replicates, yielding the uncertainty of expression estimates; finally, a Student's t mixture model performs gene expression clustering. Results show that the proposed method obtains more biologically relevant clustering results.

    • Restaurant Recommendation Model with Multiple Information Fusion

      2019, 30(9):2869-2885. DOI: 10.13328/j.cnki.jos.005540

      Abstract: Restaurant recommendation can leverage check-ins, time, location, restaurant attributes, and user demographics to mine users' dining preferences and recommend a list of restaurants to each user. To fuse these sources of information more effectively, this study proposes a restaurant recommendation model with multiple information fusion. First, the model constructs a three-dimensional tensor from check-ins and temporal context, and derives user similarity matrices and restaurant similarity matrices from the additional information. These relation matrices and the tensor are then decomposed simultaneously, with the Bayesian personalized ranking optimization criterion (BPR-Opt) and gradient descent used to solve for the model parameters. Finally, the model generates a restaurant candidate list for the target user at different times by computing the predicted tensor. A comprehensive experimental study on two real-world datasets validates the efficacy of the model, which outperforms current restaurant recommendation models and effectively alleviates the influence of data sparsity on recommendation performance, while also showing acceptable running time.

    • Hierarchical Point-set Description of Object Edge and Its Application in Shape Retrieval

      2019, 30(9):2886-2903. DOI: 10.13328/j.cnki.jos.005535

      Abstract: A novel shape description that applies to both contour shape and region shape recognition is proposed in this study. The method treats the edge of an object (including inner edges) as an unordered point set, and builds a hierarchical description model by iteratively partitioning the edge into progressively smaller parts along different directions. At each level of the hierarchy, the geometry of the object edge is characterized by two measures, partition ratio and dispersion degree; combining them yields a hierarchical description of the object's shape. The dissimilarity of two shapes is measured by the L1 distance between their hierarchical shape descriptors. The merits of the proposed method are as follows. (1) Both contour and region shapes can be effectively described, so the method is generally applicable. (2) Beyond the two proposed measures, the hierarchical framework can incorporate other measures to meet various accuracy requirements on shape recognition, so the method is extensible. (3) The hierarchical scheme characterizes shape from coarse to fine, so the descriptor is multi-scale. (4) Only the object's edge points, rather than all of its pixels, are used, so the method has relatively low computational complexity. The MPEG-7 CE-2 region shape database and the MPEG-7 CE-1 contour shape database are used to evaluate performance. The experimental results indicate that the proposed method outperforms state-of-the-art approaches when retrieval rate, retrieval efficiency, and general applicability are considered together.

    • Anti-rotation and Efficient Discriminative Feature Representation Method for Circular Images

      2019, 30(9):2904-2917. DOI: 10.13328/j.cnki.jos.005566

      Abstract: Exploiting the central symmetry of circular image objects, this study proposes an anti-rotation, efficient, and discriminative binary feature extraction method based on pairs of spatially symmetric regions. The method reconstructs the local coordinate system by radial transform during feature computation and, on that basis, extracts rotation-invariant local binary patterns from spatially symmetric regions. Meanwhile, annular spatial pooling is adopted to achieve rotation invariance in the feature pooling step, ensuring the anti-rotation property of the final descriptor. The method is tested on euro coin, QQ emoticon, and car logo datasets, reaching recognition accuracies of 100%, 100%, and 97.07% respectively, and outperforming traditional LBP and HOG features on the euro coin and QQ emoticon datasets. Moreover, the algorithm is efficient: the computation time for extracting the feature of a single point is only 0.045 ms.
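A standard ingredient of rotation-invariant binary features of this kind is mapping each LBP code to the minimum over its circular bit-rotations, so that rotating the image (which circularly shifts the neighbor bits) leaves the code unchanged. A minimal version of that mapping, not the paper's full pipeline, looks like this:

```python
def rotate_right(code, n_bits=8):
    """Circularly rotate an n-bit code right by one bit."""
    return ((code >> 1) | ((code & 1) << (n_bits - 1))) & ((1 << n_bits) - 1)

def rotation_invariant_lbp(code, n_bits=8):
    """Map an LBP code to the minimum over all of its circular rotations,
    making the descriptor invariant to image rotation."""
    best = code
    for _ in range(n_bits - 1):
        code = rotate_right(code, n_bits)
        best = min(best, code)
    return best

# 0b11100000, 0b00000111, and 0b00111000 are rotations of one another,
# so all three collapse to the same invariant code.
a = rotation_invariant_lbp(0b11100000)
b = rotation_invariant_lbp(0b00000111)
c = rotation_invariant_lbp(0b00111000)
```

The paper additionally realigns the local coordinate frame radially before computing the patterns, which handles rotations finer than the 8-neighbor quantization shown here.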

    • Review Articles
    • Keywords Statistics and Analysis of Applications and Grants in the Field of Computer Image and Video Processing of the National Natural Science Foundation of China

      2019, 30(9):2918-2924. DOI: 10.13328/j.cnki.jos.005849

      Abstract: Keywords reflect the main research content of a project application. In this study, the keywords of applications and grants of the National Natural Science Foundation of China in the field of computer image and video processing during 2014-2018 are first counted, and then analyzed from various perspectives, such as keyword quantity, keyword frequency, and their relationship with the funding rate. Finally, the shift of research hotspots in the field is discussed based on keyword frequencies, using quantitative methods.
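The kind of frequency statistics described can be reproduced in a few lines; the keyword lists below are hypothetical placeholders, not data from the study:

```python
from collections import Counter

# Hypothetical per-application keyword lists (illustrative only).
applications = [
    ["deep learning", "image segmentation", "medical imaging"],
    ["deep learning", "object detection"],
    ["image retrieval", "deep learning"],
    ["video coding", "object detection"],
]

# Keyword frequency across all applications.
freq = Counter(kw for keywords in applications for kw in keywords)

# Average keyword quantity per application.
per_app = sum(len(k) for k in applications) / len(applications)

top = freq.most_common(2)  # the most frequent keywords, i.e. research hotspots
```

Repeating the count separately for funded applications and dividing the two frequencies gives the keyword-versus-funding-rate view the paper analyzes.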

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  •           CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn