• Volume 32, Issue 11, 2021 Table of Contents
    • Efficient Algorithm for Solving Strict Pattern Matching Under Nonoverlapping Condition

      2021, 32(11):3331-3350. DOI: 10.13328/j.cnki.jos.006054


      Abstract: Sequence pattern mining under the nonoverlapping condition is a form of gap-constrained sequence pattern mining. Compared with similar mining methods, it finds valuable frequent patterns more easily. The core of the problem is to calculate the support (the number of occurrences) of a pattern in the sequence and then determine whether the pattern is frequent; computing the support is, in essence, pattern matching under the nonoverlapping condition. Existing studies employ iterative search to find a nonoverlapping occurrence and then prune useless nodes to calculate the support, with a time complexity of O(m×m×n×W), where m, n, and W are the pattern length, the sequence length, and the maximum gap, respectively. To speed up pattern matching under the nonoverlapping condition and thus reduce sequence pattern mining time, this study proposes an efficient algorithm that converts the pattern matching problem into a NetTree, starts from its minroot node, and adopts a backtracking strategy that iteratively searches the leftmost child to compute the nonoverlapping minimum occurrence. After pruning that occurrence from the NetTree, the problem can be solved without further searching for and pruning invalid nodes. The study proves the completeness of the algorithm and reduces the time complexity to O(m×n×W). On this basis, it further shows that the problem admits three other similar solving strategies: iteratively finding the leftmost parent path from the leftmost leaf, the rightmost child path from the rightmost root, and the rightmost parent path from the rightmost leaf. Extensive experimental results verify the efficiency of the proposed algorithm; in particular, a mining algorithm adopting this method reduces the mining time.
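
      The core computation described above is counting nonoverlapping occurrences of a pattern in a sequence under a maximum-gap constraint. The greedy sketch below illustrates that counting problem only; it is not the authors' NetTree-based algorithm, the gap semantics (at most max_gap skipped positions between consecutive matched characters) is an assumption, and a greedy leftmost strategy is not guaranteed to find the maximum number of occurrences, which is precisely the subtlety the paper addresses.

      def count_nonoverlapping(seq, pattern, max_gap):
          """Greedily count occurrences of `pattern` in `seq` in which each sequence
          position is used at most once (nonoverlapping condition) and at most
          `max_gap` positions are skipped between consecutive matched characters."""
          used = [False] * len(seq)
          count = 0
          for start, ch in enumerate(seq):
              if used[start] or ch != pattern[0]:
                  continue
              occ, ok = [start], True
              for p in pattern[1:]:
                  prev = occ[-1]
                  # leftmost unused position matching p within the gap window
                  window = range(prev + 1, min(prev + 2 + max_gap, len(seq)))
                  nxt = next((j for j in window if not used[j] and seq[j] == p), None)
                  if nxt is None:
                      ok = False
                      break
                  occ.append(nxt)
              if ok:
                  for j in occ:
                      used[j] = True
                  count += 1
          return count

      # e.g. pattern "ab" in "aabb" with max_gap = 1 gives 2 nonoverlapping occurrences
      print(count_nonoverlapping("aabb", "ab", 1))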

    • Code Line Recommendation Based on Deep Context-awareness of Onsite Programming

      2021, 32(11):3351-3371. DOI: 10.13328/j.cnki.jos.006059


      Abstract: In onsite programming during software development, a great deal of information is related to the current development task, such as the context of existing code lines and the developer's intention. If the next code line or lines can be recommended to developers according to the code lines already written, it will not only help developers complete the development task better but also improve the efficiency of software development. However, most existing approaches focus only on code repair or completion and seldom consider how to recommend code lines based on contextual information. A feasible solution is to use deep learning to extract the relevant context factors of code lines by mining the hidden context information in existing massive source code data. Therefore, this study proposes a novel deep-learning-based approach for onsite programming. The approach learns the contextual relationships among code lines from existing large-scale code data sets and then recommends the Top-N code lines to programmers. It adopts the RNN encoder-decoder framework, which encodes several lines of code into a vector with context-aware information and then obtains the Top-N new code lines based on that context vector. Finally, the approach is empirically evaluated on a large-scale code line data set collected from an open source platform. The results show that the proposed approach can recommend relevant code lines to developers according to the existing context, with an accuracy approaching 60%. In addition, the MRR value is about 0.3, indicating that the recommended items are ranked near the top of the N recommended results.
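
      The RNN encoder-decoder idea can be pictured with a small PyTorch sketch: a GRU encodes the tokens of the preceding code lines into a context vector, and a scoring layer ranks a fixed set of candidate next lines, of which the Top-N are recommended. The layer sizes, the candidate-set formulation (instead of token-by-token decoding), and all names below are illustrative assumptions rather than the paper's actual architecture.

      import torch
      import torch.nn as nn

      class ContextEncoder(nn.Module):
          """Encode a token sequence of preceding code lines into one context vector."""
          def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, emb_dim)
              self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

          def forward(self, token_ids):                 # (batch, seq_len)
              _, h = self.gru(self.embed(token_ids))    # h: (1, batch, hidden_dim)
              return h.squeeze(0)                       # (batch, hidden_dim)

      class LineRecommender(nn.Module):
          """Score a fixed set of candidate code lines against the context vector."""
          def __init__(self, vocab_size, num_candidates, hidden_dim=256):
              super().__init__()
              self.encoder = ContextEncoder(vocab_size, hidden_dim=hidden_dim)
              self.scorer = nn.Linear(hidden_dim, num_candidates)

          def forward(self, token_ids):
              return self.scorer(self.encoder(token_ids))   # (batch, num_candidates)

      model = LineRecommender(vocab_size=5000, num_candidates=1000)
      context = torch.randint(0, 5000, (2, 40))               # two contexts of 40 tokens
      topn = torch.topk(model(context), k=5, dim=-1).indices  # Top-5 candidate line ids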

    • Response Time Constrained Code Reviewer Recommendation

      2021, 32(11):3372-3387. DOI: 10.13328/j.cnki.jos.006079


      Abstract: Peer code review, the manual review of submitted code, is an effective way to reduce defects and improve quality, and has been widely adopted by open source communities and software development organizations such as GitHub. In the GitHub community, code review is an important part of the pull-based software development model. Open source projects often have hundreds or thousands of candidate reviewers, so recommending suitable reviewers is both valuable and challenging. Data analysis of real open source projects shows that slow review response is a common problem: it lengthens the review cycle and reduces the enthusiasm of participants. Existing work does not take response time into account. Therefore, this study formulates the code reviewer recommendation problem with a response time constraint and proposes a multi-objective optimization based recommendation method (MOC2R) that maximizes the experience of code reviewers, the response probability within the time window, and the recent activity of candidates. Experiments on data from six open source projects show that under different time window constraints (2 h, 4 h, 8 h), the Top-1 accuracy is 41.7%~61.5% and the Top-5 accuracy is 66.5%~77.7%, significantly better than two commonly used, industry-leading baseline methods; all three objectives contribute to the recommendation, with the response probability within the time window contributing the most. The proposed method can further enhance code review efficiency and improve the activity of the open source community.
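
      One simple way to picture combining the three objectives (reviewer experience, response probability within the time window, and recent activity) is a weighted-sum ranking over candidates, sketched below. MOC2R is a genuine multi-objective optimization method, so this scalarized ranking is only an illustrative simplification; the field names, normalization, and weights are assumptions.

      from dataclasses import dataclass

      @dataclass
      class Candidate:
          name: str
          experience: float        # e.g. normalized count of past reviews on related files
          response_prob: float     # estimated probability of responding within the window
          recent_activity: float   # normalized activity in the most recent period

      def rank_reviewers(candidates, weights=(0.3, 0.5, 0.2), top_k=5):
          """Rank candidates by a weighted sum of the three (already normalized) objectives.
          A scalarized stand-in for the multi-objective formulation in the paper."""
          w_exp, w_resp, w_act = weights
          scored = sorted(
              candidates,
              key=lambda c: w_exp * c.experience + w_resp * c.response_prob + w_act * c.recent_activity,
              reverse=True,
          )
          return [c.name for c in scored[:top_k]]

      pool = [Candidate("alice", 0.9, 0.4, 0.7), Candidate("bob", 0.5, 0.9, 0.6)]
      print(rank_reviewers(pool, top_k=1))   # ['bob'] under these illustrative weights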

    • Top-k Online Service Evaluating to Maximize Satisfaction of User Group

      2021, 32(11):3388-3403. DOI: 10.13328/j.cnki.jos.006089


      Abstract: Online service evaluation methods that consider inconsistent user evaluation criteria usually produce a complete ranking of services as the evaluation result, instead of selecting the Top-k set of online services that maximizes the satisfaction of the user group. As a result, the evaluation results cannot meet the rationality and fairness requirements of Top-k online service evaluation. This study proposes a Top-k online service evaluation method that maximizes the satisfaction of the user group. Firstly, a metric of user group satisfaction is defined to measure the rationality of the selected k online services. Secondly, considering the inconsistency of user evaluation criteria and the incompleteness of user preference information, the Borda rule is used to construct a user-service matrix from users' preference relations over online services. Then, inspired by Monroe's theory of proportional representation, the Top-k online service evaluation problem is modeled as an optimization problem of finding a set of online services that maximizes the satisfaction of the user group. Finally, a greedy algorithm is designed to solve the optimization problem, and the obtained set of online services serves as the result of Top-k service evaluation. The rationality and effectiveness of the method are verified by theoretical analysis and experimental studies. Theoretical analysis shows that the proposed method satisfies the proportional representation and fairness required for Top-k online service evaluation. Experiments also show that the method can obtain results close to the ideal upper bound of user group satisfaction in reasonable time, so that the user group can make the right service selection decisions. In addition, the method can still perform Top-k online service evaluation when users' preferences are incomplete.
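
      The Borda step and the greedy selection step can be sketched as follows: each user's (possibly partial) ranking yields Borda scores over services, and a greedy loop then adds the service with the largest marginal gain in a simple group-satisfaction measure. Modeling a user's satisfaction as the best Borda score among the selected services, and this particular treatment of partial rankings, are assumptions for illustration, not the paper's exact Monroe-style objective.

      def borda_scores(ranking, services):
          """Borda scores from one user's (possibly partial) preference ranking:
          the top of m ranked services gets m-1 points, the next m-2, ...;
          unranked services get 0."""
          m = len(ranking)
          scores = {s: 0.0 for s in services}
          for pos, s in enumerate(ranking):
              scores[s] = float(m - 1 - pos)
          return scores

      def greedy_topk(rankings, services, k):
          """Greedily pick k services, each step adding the service with the largest
          marginal gain in group satisfaction (sum over users of the best Borda score
          among the already selected services)."""
          user_scores = [borda_scores(r, services) for r in rankings]
          selected, best = [], [0.0] * len(rankings)
          for _ in range(k):
              def gain(s):
                  return sum(max(u[s], b) - b for u, b in zip(user_scores, best))
              s_star = max((s for s in services if s not in selected), key=gain)
              selected.append(s_star)
              best = [max(u[s_star], b) for u, b in zip(user_scores, best)]
          return selected

      rankings = [["s1", "s2", "s3"], ["s3", "s1"], ["s2", "s3", "s1"]]
      print(greedy_topk(rankings, ["s1", "s2", "s3"], k=2))   # e.g. ['s2', 's1']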

    • Human-cyber-physical Services Dispatch Approach for Data Characteristics

      2021, 32(11):3404-3422. DOI: 10.13328/j.cnki.jos.006090


      Abstract: With the continuous development of the industrial Internet, big data and artificial intelligence are driving comprehensive interconnection in human-cyber-physical systems, and the amount of task data generated by users of services is growing exponentially. Beyond recommending services that meet online users' personalized needs, for services that must be completed through human-cyber-physical interaction it has become a challenging problem to integrate the various offline and online resources so as to dispatch the right person to complete the task quickly and effectively. To ensure the accuracy of service dispatch, this study proposes a cross-domain collaborative service dispatch method that takes into account the data characteristics of these factors in the human-cyber-physical system. To obtain a more reasonable dispatch, the sentiment characteristics of user evaluations and the similarity of business data are analyzed, and real-world attributes that affect business processes are incorporated. Finally, taking doctor-patient assignment on an online diagnosis and treatment platform as an example, the results show that the proposed method achieves high accuracy and can improve the efficiency of task execution.

    • Framework for Architecting Smart Contracts Using Microservices

      2021, 32(11):3423-3439. DOI: 10.13328/j.cnki.jos.006277


      Abstract: Blockchain has the advantages of distribution, immutability, decentralization, and traceability, but falls short in implementation; smart contracts are a decent way to make up for this deficiency. However, smart contracts themselves are difficult to deploy and monitor. Inspired by the DevOps tools that support continuous delivery and continuous monitoring for microservices, a framework is proposed for architecting smart contracts as microservices. In addition, a prototype platform (Mictract) is implemented in which DevOps tools are aggregated to support deploying and monitoring smart contracts. A case study on the Marbles example of Hyperledger Fabric shows that the proposed framework and the prototype platform significantly improve the level of automation in deploying and monitoring smart contracts.

    • Feature Selection Algorithm for Noise Data

      2021, 32(11):3440-3451. DOI: 10.13328/j.cnki.jos.006041


      Abstract: Regularized feature selection algorithms are not effective at reducing the impact of noisy data, and they hardly consider the local structure of the sample space: after samples are mapped into the feature subspace, the relationships between samples are inconsistent with those in the original space, leading to unsatisfactory results for downstream data mining algorithms. This study proposes an anti-noise feature selection method that addresses these two shortcomings of traditional algorithms. The method first uses a self-paced learning training scheme, which greatly reduces the possibility of outliers entering training and also facilitates rapid convergence of the model. Then, a regression learner with a regularization term is used for embedded feature selection, balancing sparsity of the solution against over-fitting to make the model more robust. Finally, the technique of locality preserving projections is integrated by turning its projection matrix into the regression parameter matrix of the model, so that the original local structure among samples is preserved while features are selected. Experiments on a series of benchmark data sets show the effectiveness of the proposed algorithm in terms of aCC and aRMSE.
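
      The self-paced idea described above, namely letting the model fit "easy" samples first and gradually admitting harder ones so that outliers rarely enter training, can be sketched with a plain regularized regression learner. The Lasso learner, the loss-threshold schedule, and the stopping rule below are illustrative assumptions, and the sketch omits the locality-preserving-projection component of the paper's model.

      import numpy as np
      from sklearn.linear_model import Lasso

      def self_paced_lasso(X, y, lam=1.0, growth=1.5, rounds=5, alpha=0.01):
          """Self-paced training: at each round fit only on samples whose current
          squared error is below the pace threshold `lam`, then relax the threshold."""
          model = Lasso(alpha=alpha)
          model.fit(X, y)                      # warm start on all samples
          for _ in range(rounds):
              err = (model.predict(X) - y) ** 2
              easy = err < lam                 # binary self-paced weights
              if easy.sum() < 2:               # not enough easy samples yet
                  lam *= growth
                  continue
              model.fit(X[easy], y[easy])      # refit on the currently easy samples
              lam *= growth                    # admit harder samples next round
          return model

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 20))
      y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=200)
      y[:5] += 20                              # a few gross outliers
      print(self_paced_lasso(X, y).coef_[:5])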

    • Improved Meta-heuristic Optimization Algorithm and Its Application in Image Segmentation

      2021, 32(11):3452-3467. DOI: 10.13328/j.cnki.jos.006043


      Abstract: Metaheuristic algorithms have been widely used since they were proposed in the 1960s because they can effectively reduce the amount of computation and improve the efficiency of optimization. These algorithms imitate various operating mechanisms in nature, are self-regulating, and overcome the low computational efficiency and poor convergence of traditional optimization algorithms such as gradient descent, Newton's method, and conjugate gradient methods. They perform well in combinatorial optimization, production scheduling, and image processing. This study proposes an improved metaheuristic optimization algorithm, NBAS, obtained by mixing the binary discrete beetle antennae search algorithm (BBAS) with the original beetle antennae search algorithm (BAS). NBAS balances local and global search and effectively compensates for the tendency of the original algorithm to fall into local optima. To verify the effectiveness of NBAS, the study combines it with the two-dimensional Kaniadakis entropy criterion and proposes a fast and accurate NBAS-K entropy image segmentation algorithm. NBAS-K entropy addresses the problems that optimization algorithms used for image threshold segmentation easily fall into local optima and require a large number of optimization individuals and high design complexity, which makes them computationally expensive and time consuming. Experimental results of the NBAS-K entropy, BAS-K entropy, BBAS-K entropy, genetic algorithm K entropy (GA-K entropy), particle swarm optimization K entropy (PSO-K entropy), and grasshopper optimization K entropy (GOA-K entropy) algorithms on the Berkeley dataset, artificially noised images, and remote sensing images show that the proposed method not only has better anti-noise performance but also has higher precision and robustness, and can segment complex images more effectively.
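
      For readers unfamiliar with beetle antennae search, the basic BAS update that NBAS builds on can be sketched in a few lines: sample a random direction, evaluate the objective at the two antenna points, and step toward the better one. The step-size and antenna-length decay schedules below are common choices but are assumptions, and the NBAS modifications (mixing in the binary discrete variant) are not reproduced.

      import numpy as np

      def bas_minimize(f, x0, steps=200, step=1.0, antenna=2.0, decay=0.95, seed=0):
          """Basic beetle antennae search (BAS) for minimizing f, as a reference point
          for the NBAS variant described in the paper."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          for _ in range(steps):
              b = rng.normal(size=x.shape)
              b /= np.linalg.norm(b) + 1e-12          # random unit direction
              left, right = x - antenna * b / 2, x + antenna * b / 2
              x = x - step * b * np.sign(f(right) - f(left))  # move toward the better antenna
              step *= decay
              antenna *= decay
          return x

      sphere = lambda v: float(np.sum(v ** 2))
      print(bas_minimize(sphere, x0=[3.0, -2.0]))     # should approach the origin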

    • Automated Tensor Decomposition to Accelerate Convolutional Neural Networks

      2021, 32(11):3468-3481. DOI: 10.13328/j.cnki.jos.006057


      Abstract: Convolutional neural networks (CNNs) have recently demonstrated strong performance and are widely used in many fields. Because CNNs have a large number of parameters and high storage and computing requirements, they are difficult to deploy on resource-constrained devices, so compressing and accelerating CNN models has become an urgent problem. With the development of automated machine learning (AutoML), AutoML has profoundly influenced the development of neural networks. Inspired by this, this study proposes two automated CNN acceleration algorithms based on parameter estimation and genetic algorithms, which can find the optimal accelerated CNN model within a given accuracy loss range, effectively avoiding the error caused by manually selected ranks in tensor decomposition and improving the compression and acceleration of convolutional neural networks. In rigorous tests on the MNIST and CIFAR-10 data sets, the accuracy on MNIST drops slightly by 0.35% compared with the original network while the running time is reduced by 4.1 times, and the accuracy on CIFAR-10 drops slightly by 5.13% while the running time is reduced by 0.8 times.
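
      The rank-selection issue mentioned above, choosing a decomposition rank automatically instead of by hand, can be illustrated on a single weight matrix: pick the smallest rank whose low-rank reconstruction error stays within a tolerance. This truncated-SVD sketch is a simplification; the paper works with tensor decompositions of convolutional kernels and searches the ranks with parameter estimation and genetic algorithms.

      import numpy as np

      def choose_rank(W, tol=0.05):
          """Smallest rank r whose rank-r SVD reconstruction keeps the relative
          Frobenius error of W below `tol`."""
          U, s, Vt = np.linalg.svd(W, full_matrices=False)
          total = np.sum(s ** 2)
          # the error of a rank-r truncation is the energy in the discarded singular values
          for r in range(1, len(s) + 1):
              rel_err = np.sqrt(np.sum(s[r:] ** 2) / total)
              if rel_err <= tol:
                  return r
          return len(s)

      W = np.random.default_rng(0).normal(size=(256, 64)) @ np.diag(np.linspace(1, 0.01, 64))
      r = choose_rank(W, tol=0.05)
      print(r, "of", min(W.shape))   # the compressed layer would use two factors of rank r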

    • Unsupervised Fine-grained Video Categorization via Adaptation Learning Across Domains and Modalities

      2021, 32(11):3482-3495. DOI: 10.13328/j.cnki.jos.006058


      Abstract: Fine-grained video categorization is a highly challenging task that aims to discriminate similar subcategories belonging to the same basic-level category. Given the significant advances in fine-grained image categorization and the expensive cost of labeling video data, it is natural to adapt the knowledge learned from images to videos in an unsupervised manner. However, directly applying models learned from images to recognize fine-grained instances in videos leaves a clear gap, due to the domain and modality distinctions between images and videos. Therefore, this study proposes the unsupervised discriminative adaptation network (UDAN), which transfers the ability of discriminative localization from images to videos. A progressive pseudo-labeling strategy is adopted to iteratively guide UDAN to approximate the distribution of the target video data. To verify the effectiveness of UDAN, adaptation tasks between images and videos are performed, adapting the knowledge learned from the CUB-200-2011/Cars-196 datasets (images) to the YouTube Birds/YouTube Cars datasets (videos). Experimental results illustrate the advantage of the proposed UDAN approach for unsupervised fine-grained video categorization.
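
      The progressive pseudo-labeling strategy can be sketched independently of the network details: in each round, keep only the target-domain predictions whose confidence exceeds a threshold, use them as pseudo labels for adaptation, and relax the threshold so that more target data participates. The thresholds, the number of rounds, and the classifier interface are assumptions.

      import numpy as np

      def progressive_pseudo_labels(predict_proba, X_target, start=0.9, end=0.6, rounds=4):
          """Yield (indices, pseudo_labels) per round, admitting lower-confidence
          target samples as the threshold is progressively relaxed."""
          for tau in np.linspace(start, end, rounds):
              proba = predict_proba(X_target)          # (n_samples, n_classes)
              conf = proba.max(axis=1)
              keep = np.where(conf >= tau)[0]
              yield keep, proba[keep].argmax(axis=1)   # caller retrains/adapts on these

      # Illustrative use with any classifier exposing predict_proba:
      #   for idx, labels in progressive_pseudo_labels(clf.predict_proba, X_video):
      #       clf.fit(np.vstack([X_image, X_video[idx]]), np.hstack([y_image, labels]))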

    • Planning Network Model Based on Generalized Asynchronous Value Iteration

      2021, 32(11):3496-3511. DOI: 10.13328/j.cnki.jos.006077


      Abstract: In recent years, how to generate policies with generalization ability has become one of the hot issues in deep reinforcement learning, and many related research achievements have appeared. One representative work is the generalized value iteration network (GVIN), a differentiable planning network that uses a special graph convolution operator to approximate a state-transition matrix and performs planning through value iteration (VI) while learning structural information of irregular graphs, resulting in policies that generalize. In GVIN, each round of VI updates the values of all states over the entire state space synchronously. Since no consideration is given to allocating planning time according to the importance of states, synchronous updates may degrade the planning performance of the network when the state space is large. This work applies the idea of asynchronous updating to GVIN: by defining a priority for each state and performing asynchronous VI, a planning network called the generalized asynchronous value iteration network (GAVIN) is proposed. In unknown tasks with irregular graph structure, GAVIN plans more efficiently and effectively than GVIN. Furthermore, this work improves the reinforcement learning algorithm and the graph convolution operator in GVIN, and their effectiveness is verified by path planning experiments on irregular graphs and real maps.
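
      The priority-driven asynchronous value iteration at the heart of GAVIN can be illustrated on a small tabular MDP: keep a priority queue keyed by how much each state's value would change, update the highest-priority state first, and re-prioritize its predecessors (essentially prioritized sweeping). The MDP interface and the priority definition below are assumptions; GAVIN embeds this idea inside a graph-convolutional planning network, which the sketch does not attempt to reproduce.

      import heapq
      import numpy as np

      def async_value_iteration(P, R, gamma=0.95, eps=1e-6):
          """Prioritized asynchronous VI on a tabular MDP.
          P[a] is an (S, S) transition matrix, R an (S, A) reward matrix."""
          A, S = len(P), R.shape[0]
          V = np.zeros(S)

          def backup(s):
              return max(R[s, a] + gamma * P[a][s] @ V for a in range(A))

          # start with every state's Bellman error as its priority (max-heap via negation)
          heap = [(-abs(backup(s) - V[s]), s) for s in range(S)]
          heapq.heapify(heap)
          while heap:
              neg_prio, s = heapq.heappop(heap)
              if -neg_prio < eps:
                  break
              V[s] = backup(s)
              # predecessors of s may now have larger Bellman errors: re-prioritize them
              for s2 in range(S):
                  if any(P[a][s2, s] > 0 for a in range(A)):
                      err = abs(backup(s2) - V[s2])
                      if err >= eps:
                          heapq.heappush(heap, (-err, s2))
          return V

      # two-state, two-action toy MDP: action 0 stays in place, action 1 swaps states
      P = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])]
      R = np.array([[0.0, 1.0], [1.0, 0.0]])
      print(async_value_iteration(P, R))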

    • Black-box Adversarial Attack Method Based on Evolution Strategy and Attention Mechanism

      2021, 32(11):3512-3529. DOI: 10.13328/j.cnki.jos.006084


      Abstract: Since deep neural networks (DNNs) have provided state-of-the-art results for many computer vision tasks, they are used as basic backbones in many domains. Nevertheless, recent research has shown that DNNs are vulnerable to adversarial attacks, which threatens the security of DNN-based systems. Compared with white-box adversarial attacks, black-box attacks are closer to realistic scenarios, with constraints such as no knowledge of the model and a limited query budget. However, existing black-box methods not only require a large number of model queries but also produce perturbations that are perceptible to the human visual system. To address these issues, this study proposes a novel method based on an evolution strategy, which improves attack performance by considering the inherent distribution of updated gradient directions; this helps the method sample effective solutions with higher probability and learn better search paths. To make the generated adversarial examples less perceptible and to reduce redundant perturbations after a successful attack, the method introduces an attention mechanism that uses class activation mapping to group the perturbations, and then compresses the noise group by group while ensuring that the generated images can still fool the target model. Extensive experiments on seven DNNs with different structures demonstrate the superiority of the proposed method over state-of-the-art black-box adversarial attack approaches (i.e., AutoZOOM, QL-attack, FD-attack, and D-based attack).
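
      The evolution-strategy component can be pictured with a natural-evolution-strategies style gradient estimate: sample Gaussian perturbations of the current adversarial image, weight them by the attack loss queried from the black-box model, and step along the estimated gradient. The loss interface, step sizes, and clipping are assumptions, and the paper's adaptive sampling distribution and class-activation-map-based noise compression are omitted.

      import numpy as np

      def es_attack_step(x_adv, loss_fn, sigma=0.01, lr=0.01, pop=20, seed=None):
          """One NES-style update: estimate the gradient of the (black-box) attack loss
          at x_adv from `pop` antithetic Gaussian samples and take a gradient step."""
          rng = np.random.default_rng(seed)
          grad = np.zeros_like(x_adv)
          for _ in range(pop // 2):
              u = rng.normal(size=x_adv.shape)
              # antithetic pair: two model queries per sampled direction
              grad += (loss_fn(x_adv + sigma * u) - loss_fn(x_adv - sigma * u)) * u
          grad /= (pop * sigma)
          return np.clip(x_adv - lr * np.sign(grad), 0.0, 1.0)  # stay a valid image

      # Illustrative loss: for an untargeted attack, loss_fn could be the model's
      # probability of the true class, so lowering it pushes toward misclassification.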

    • Optimizing Simple Tabular Reduction Algorithm for Factor-decomposition Encoding Instances

      2021, 32(11):3530-3540. DOI: 10.13328/j.cnki.jos.006094


      Abstract: Table constraints are widely studied in constraint programming (CP). At present, the most efficient algorithms for solving table constraint problems are compact-table (CT) and simple tabular reduction bit (STRbit), both of which maintain generalized arc consistency (GAC) during search. Full pairwise consistency (fPWC) is stronger than GAC; the most efficient algorithm for maintaining fPWC is PW-CT, which is difficult to implement in a general solver. Factor-decomposition encoding (FDE) is an encoding method that achieves fPWC and is usually used together with STR. Current STR algorithms that use bitsets may run out of memory when solving FDE instances. This study proposes STRFDE, a new bitset-based algorithm for solving FDE instances. It combines the advantages of CT and STRbit to use as little memory as possible while maintaining solving efficiency. Experimental results show that the proposed algorithm is competitive on a variety of instances with non-trivial intersections.
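
      As background for the STR family discussed above, the basic simple tabular reduction step can be sketched directly: drop table tuples that are no longer valid under the current domains, then shrink each variable's domain to the values still supported by some remaining tuple. This is plain STR on Python sets, not the CT/STRbit bitset machinery or the FDE-specific STRFDE algorithm.

      def str_propagate(table, domains):
          """One pass of simple tabular reduction for a positive table constraint.
          `table` is a list of tuples, `domains` a list of sets (one per variable).
          Returns the reduced table and domains, or (None, None) on a domain wipe-out."""
          # 1. remove tuples that use a value no longer in the corresponding domain
          alive = [t for t in table if all(v in dom for v, dom in zip(t, domains))]
          if not alive:
              return None, None
          # 2. keep only domain values that still appear in some alive tuple (GAC)
          new_domains = [set(t[i] for t in alive) & domains[i] for i in range(len(domains))]
          if any(not d for d in new_domains):
              return None, None
          return alive, new_domains

      table = [(1, 2, 3), (1, 3, 2), (2, 1, 3)]
      domains = [{1, 2}, {2, 3}, {3}]
      print(str_propagate(table, domains))   # two tuples are removed; each domain shrinks to one value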

    • Transaction Data Collection for Itemset Mining Under Local Differential Privacy

      2021, 32(11):3541-3562. DOI: 10.13328/j.cnki.jos.006044


      Abstract: Transaction data is common in various application scenarios, such as shopping records and page browsing histories, and service providers collect and analyze transaction data to provide better services. However, collecting transaction data may disclose private information. To solve this problem, this study proposes a transaction data collection mechanism based on condensed local differential privacy (CLDP). Firstly, a new score function over the candidate set is defined. Secondly, the output domain of the candidate set is partitioned into several subspaces according to this function. Thirdly, each client randomly selects one subspace, randomly generates transaction data within that subspace, and sends it to the untrusted data collector. Finally, considering the difficulty of setting the privacy parameter, a heuristic privacy parameter setting strategy is designed based on the maximum posterior confidence (MPC) threat model. Theoretical analysis shows that the method protects both the length and the content of transaction data and satisfies α-CLDP. Experiments demonstrate that the transaction data collected by this method has higher utility than state-of-the-art approaches and that the privacy parameter setting has clear semantics.
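
      The flavor of the mechanism can be conveyed by the standard exponential-mechanism-style perturbation used with condensed LDP: report a candidate with probability that decays exponentially in its distance from the true value. The distance function and the value of alpha below are placeholders; the paper's score function over subspaces of the candidate set and its MPC-based parameter setting are not reproduced.

      import math
      import random

      def cldp_perturb(true_value, candidates, distance, alpha=1.0, rng=random):
          """Exponential-mechanism-style perturbation: report candidate v with
          probability proportional to exp(-alpha * d(true_value, v) / 2)."""
          weights = [math.exp(-alpha * distance(true_value, v) / 2) for v in candidates]
          r = rng.random() * sum(weights)
          acc = 0.0
          for v, w in zip(candidates, weights):
              acc += w
              if r <= acc:
                  return v
          return candidates[-1]

      # e.g. perturbing a transaction length with |a - b| as the distance metric
      print(cldp_perturb(4, candidates=list(range(1, 11)), distance=lambda a, b: abs(a - b), alpha=2.0))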

    • Deduplication Scheme Based on Threshold Dynamic Adjustment

      2021, 32(11):3563-3575. DOI: 10.13328/j.cnki.jos.006073


      Abstract: Cloud storage has become a major application model. As the number of users and the volume of data increase, cloud storage providers use deduplication technology to save storage space and resources. Existing solutions generally process all data with a uniform popularity threshold, ignoring the fact that different data should have different privacy levels. A deduplication scheme based on dynamic threshold adjustment is proposed to ensure the security of uploaded data and related operations. The concept of an ideal threshold is introduced to eliminate the drawbacks of the uniform threshold in traditional schemes. Item response theory is adopted to determine the sensitivity and privacy scores of different data, which ensures the applicability of the privacy scores and addresses the problem that some users care little about privacy issues. A privacy score query and response mechanism based on data encryption is proposed, and on this basis a dynamic adjustment method for the popularity threshold is designed for data uploading. Experimental results and comparative analysis show that the proposed scheme has sound scalability and solid practicability.

    • Distributed Index Construction for Big Data Streams

      2021, 32(11):3576-3595. DOI: 10.13328/j.cnki.jos.006097


      Abstract: Efficient storage and indexing of big data streams are challenging issues in the database field. By segmenting the temporal data stream into continuous time windows, a distributed master-slave index structure based on a double-layer B+ tree, called WB-Index, is proposed. The lower-layer B+ tree indexes the stream tuples within each time window, while the upper-layer B+ tree indexes the successive time windows. The lower-layer index is constructed by combining batch loading with parallel sorting; the core idea is to slice each time window and isolate the parallelizable operations from the rest of the window, so that sorting and stream receiving proceed in parallel across slices, and the construction of the B+ tree skeleton (a B+ tree without values) for the window and the merge-sorting operation are parallelized as well. These techniques effectively speed up B+ tree construction. Since the timestamps of time windows increase monotonically, a split-free method is adopted for constructing the upper-layer B+ tree index, which avoids node splitting and memory movement overhead and improves space utilization and update efficiency. In WB-Index, stream tuples and indexes are stored separately, and indexes and hotspot data are cached as much as possible to improve query efficiency. Finally, theoretical analysis and experiments both demonstrate that WB-Index supports efficient real-time stream writing and stream data querying.
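
      The two-layer idea in WB-Index, an upper index over time windows and a sorted lower index inside each closed window, can be sketched with ordinary Python containers: a dict keyed by window start stands in for the upper B+ tree, and a sorted list per window stands in for the lower B+ tree. The window size and the query interface are assumptions, and the real system's batch loading, parallel sorting, and split-free upper-tree construction are not reproduced.

      import bisect
      from collections import defaultdict

      class WindowIndex:
          """Toy two-level index over a timestamped stream: tuples are buffered per
          time window and each closed window keeps a list sorted by key."""
          def __init__(self, window_size):
              self.window_size = window_size
              self.buffers = defaultdict(list)   # window start -> unsorted incoming tuples
              self.windows = {}                  # window start -> list sorted by key

          def insert(self, timestamp, key, value):
              w = timestamp - timestamp % self.window_size
              self.buffers[w].append((key, value))

          def close_window(self, w):
              # in WB-Index this is where batch loading / parallel sorting would happen
              self.windows[w] = sorted(self.buffers.pop(w, []))

          def lookup(self, w, key):
              rows = self.windows.get(w, [])
              i = bisect.bisect_left(rows, (key,))
              return [v for k, v in rows[i:] if k == key]

      idx = WindowIndex(window_size=10)
      idx.insert(3, "a", 1); idx.insert(7, "b", 2); idx.insert(9, "a", 3)
      idx.close_window(0)
      print(idx.lookup(0, "a"))   # [1, 3]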

    • MLWE-based Homomorphic Inner Product Scheme

      2021, 32(11):3596-3605. DOI: 10.13328/j.cnki.jos.006032


      Abstract: Homomorphic inner products have a wide range of applications, such as secure multiparty geometric computation, private data mining, outsourced computing, and sortable ciphertext retrieval. However, existing schemes for computing homomorphic inner products are mostly based on RLWE-based fully homomorphic encryption (FHE) and have low efficiency. Based on MLWE, this study proposes a homomorphic inner product scheme using the low-expansion-rate encryption algorithm proposed by Ke et al. Firstly, the tensor product operation in the ciphertext space is given, corresponding to the inner product operation on integer vectors in the plaintext space. Then, the correctness and security of the scheme are analyzed. Finally, two sets of optimized encryption parameters are given for different application scenarios of the homomorphic inner product. The scheme is implemented in C++ with the large-integer arithmetic library NTL. Compared with other homomorphic encryption schemes, it computes homomorphic inner products of integer vectors efficiently.

    • Blockchain-based Multi-recipient Multi-message Signcryption Scheme

      2021, 32(11):3606-3627. DOI: 10.13328/j.cnki.jos.006034


      Abstract: Data transmitted over networks is vulnerable to attacks such as eavesdropping and tampering, so data confidentiality and integrity must be guaranteed, which can be achieved with signcryption schemes. Based on elliptic curves, a multi-recipient multi-message signcryption scheme is proposed that is well suited to scenarios such as broadcast systems. Multiple key distribution centers are used to manage the system master key, and the secrets of each center can be updated periodically to resist APT attacks; in addition, users registered in different periods can communicate with each other, which improves availability. A secret update strategy based on a public blockchain is proposed, in which the update operation is triggered by the block height and the block timestamp. The tamper-resistance of the blockchain guarantees the security of the strategy, and the scheme does not need to send transactions and therefore incurs no transaction fees. Based on the computational Diffie-Hellman problem and the discrete logarithm problem, the confidentiality and unforgeability of the proposed scheme are analyzed in the random oracle model. The scheme also has the following security attributes: key escrow security, forward and backward security, and non-repudiation. Performance analysis shows that the proposed scheme has shorter ciphertexts and higher efficiency. In the simulation, the influence of the number of key distribution centers and of the threshold on system performance is analyzed; without considering network delay and other disturbing factors, the performance loss of the proposed scheme is less than 5% compared with schemes using a single key distribution center. The timing errors incurred by the blockchain-based update strategy decrease as the period increases; when the period is set to more than 550 s, the timing error is less than 1%. These timing errors make it more difficult for attackers to predict the update time and launch attacks.

    • Efficient Secure Vector Computation and Its Extension

      2021, 32(11):3628-3645. DOI: 10.13328/j.cnki.jos.006093


      Abstract: Secure multiparty computation is an important research topic in cryptography and a focus of the international cryptographic community. Many practical problems can be described using vectors, so studying secure multiparty vector computation is of both theoretical and practical significance. Existing secure vector computation protocols handle integer vectors, and there is little work on rational-number vectors. To fill this gap, this study investigates secure multiparty computation over rational vectors, including computing the dot product of two vectors, determining whether two vectors are equal, and determining whether one vector dominates another. Efficient protocols are proposed for these problems, the application of secure vector computation is extended, and the new protocols are proved secure. Efficiency analysis shows that the proposed protocols outperform existing protocols. Finally, the new protocols are applied to solve some new vector computation problems and some computational geometry problems.

    • Red Lesion Segmentation of Fundus Image with Multi-task Learning

      2021, 32(11):3646-3658. DOI: 10.13328/j.cnki.jos.006038


      Abstract: Diabetic retinopathy (DR) is the leading cause of vision loss in adults, and early fundus screening can significantly reduce this loss. Color fundus images are often used in large-scale fundus screening because they are convenient to acquire and harmless to patients. Among the red lesions in fundus images, microaneurysms are the main marker of mild non-proliferative DR, while hemorrhages are associated with moderate and severe non-proliferative DR; red lesions are therefore important indicators for DR screening. This study proposes a multi-task network, named Red-Seg, for red lesion segmentation. The network contains two individual branches, each used for one kind of lesion. A two-stage training algorithm is also presented in which different loss functions are used in different stages: in the first stage, a modified Top-k balanced cross-entropy loss pushes the network to focus on hard-to-classify samples; in the second stage, false positives and false negatives are integrated into the loss function to further reduce misclassification. Extensive experiments on the IDRiD dataset compare the lesion segmentation results with other methods. The results show that the proposed two-stage training algorithm yields much higher precision and recall, which means the method reduces misclassification to a certain extent; for hemorrhage segmentation in particular, both recall and precision increase by at least 2.8%. Meanwhile, compared with other image-level lesion segmentation models such as HED, FCRN, DeepLabv3+, and L-Seg, Red-Seg achieves much higher AUC_PR on microaneurysm segmentation.
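
      The first-stage loss described above, a Top-k cross-entropy that pushes the network toward hard-to-classify pixels, is closely related to the standard bootstrapped (online hard example mining) cross-entropy, sketched below for a binary segmentation map. The ratio k and the per-class balancing used in Red-Seg are not specified here, so treat both as assumptions.

      import torch
      import torch.nn.functional as F

      def topk_bce_loss(logits, targets, k_ratio=0.3):
          """Binary cross-entropy averaged over only the hardest `k_ratio` of pixels,
          i.e. those with the largest per-pixel loss (a bootstrapped-CE sketch)."""
          per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
          per_pixel = per_pixel.flatten(start_dim=1)              # (batch, H*W)
          k = max(1, int(k_ratio * per_pixel.shape[1]))
          hardest, _ = torch.topk(per_pixel, k, dim=1)            # largest losses per image
          return hardest.mean()

      logits = torch.randn(2, 1, 64, 64)          # raw network outputs for 2 images
      targets = torch.randint(0, 2, (2, 1, 64, 64)).float()
      print(topk_bce_loss(logits, targets))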

    • Single Image Super-Resolution Reconstruction Based on VGG Energy Loss

      2021, 32(11):3659-3668. DOI: 10.13328/j.cnki.jos.006053


      Abstract: Single image super-resolution (SR) is an important task in image synthesis. In neural-network-based SR, the loss function commonly combines a content-based reconstruction loss with a regularization loss based on a generative adversarial network (GAN). However, due to the instability of GAN training, the discriminative signal that the GAN loss provides for a generated high-resolution image is not stable in the SRGAN model. To alleviate this problem, building on the commonly used VGG reconstruction loss, this study designs a stable energy-based regularization loss called the VGG energy loss. The proposed loss reuses the VGG encoder from the reconstruction loss as an encoder and designs a corresponding decoder to build a VGG-U-Net autoencoder, VGG-UAE; using VGG-UAE as the energy function provides gradients for the generator, so that the generated high-resolution samples track the energy flow of real data. Experiments verify that a generative model using the proposed VGG energy loss can generate more effective high-resolution images.
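
      As background, the VGG reconstruction (perceptual) loss that the VGG energy loss builds on compares feature maps of the super-resolved and ground-truth images under a frozen feature extractor. The sketch below takes the frozen encoder as an argument rather than hard-coding a specific pretrained VGG layer, so the encoder choice is an assumption, and the paper's actual contribution, the VGG-UAE energy function, is not reproduced.

      import torch
      import torch.nn.functional as F

      def vgg_reconstruction_loss(encoder, sr, hr):
          """Perceptual loss: mean squared distance between feature maps of the
          super-resolved image `sr` and the ground-truth image `hr` under a frozen
          feature extractor `encoder` (e.g. a truncated, pretrained VGG)."""
          with torch.no_grad():
              target_feat = encoder(hr)          # no gradients through the target branch
          return F.mse_loss(encoder(sr), target_feat)

      # Usage sketch: total_loss = pixel_loss + w1 * vgg_reconstruction_loss(vgg_features, sr, hr)
      #               (+ the paper's VGG-UAE energy term in place of a GAN loss)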
