• Volume 31, Issue 12, 2020 Table of Contents
    • Approach of Sketch-guided GUI Testing for Mobile App

      2020, 31(12):3671-3684. DOI: 10.13328/j.cnki.jos.005873

      Abstract (1129) HTML (1146) PDF 1.47 M (2603) Comment (0) Favorites

      Abstract:Software testing plays an important role in improving the security and reliability of mobile applications. However, current mainstream testing techniques for mobile applications have notable shortcomings: manual scripting and record-and-replay techniques require substantial labor, while fully automatic testing solutions remain limited when applied to mobile applications. This article presents a sketch-guided GUI test generation approach for testing mobile applications. It provides a simple but expressive sketch language that helps testers specify their testing intent easily: testers draw a few simple strokes on screenshots of the application under test, the proposed approach automatically recognizes these strokes and converts them into a test model, and test cases are then generated from that model. The approach is evaluated on test cases drawn from the recent mobile-application-testing literature. The results show that it is significantly effective in GUI testing of mobile applications at little labor cost.

    • Reliable Algorithm for Computing Cyclic Iterative Program

      2020, 31(12):3685-3699. DOI: 10.13328/j.cnki.jos.005883

      Abstract (1432) HTML (1136) PDF 1.41 M (3154) Comment (0) Favorites

      Abstract:As a basic software component, the correct execution of cyclic iterative programs is of great significance. However, in some cases (e.g., when the NID is greater than 0), rounding error (or representation error) in the computation can make the results of the cyclic iteration unstable. Based on the computing technique of “automatic dynamic adjustment of intermediate calculation accuracy”, this paper presents a reliable algorithm for computing cyclic iterations, with which the value of a cyclic iteration can be obtained to arbitrary precision. The algorithm has been implemented in ISReal in C++.

    • POI Recommendation Based on Multidimensional Context-aware Graph Embedding Model

      2020, 31(12):3700-3715. DOI: 10.13328/j.cnki.jos.005855

      Abstract (1457) HTML (1188) PDF 1.71 M (2901) Comment (0) Favorites

      Abstract:In recent years, point-of-interest (POI) recommendation has gradually become one of the research hotspots in the field of mobile recommender systems. Methods that jointly model multiple factors, such as time, space, sequence, socialization, and semantic information, have gradually been introduced into unified models to compute user preferences under multidimensional scenarios. As an effective multi-factor joint modeling method, the embedding learning model performs well in mobile recommender systems. However, many embedding learning models simply embed explicit factors, such as timestamps, items, regions, and sequences, into the same space. Lacking deep mining of user and item semantic features, they struggle to capture user preferences accurately when check-in data is extremely sparse. In view of this, a multidimensional context-aware graph embedding model, called MCAGE, is proposed in this study. In MCAGE, a topic model is used to extract the latent semantic features of users and items; then, a series of graph nodes and association rules are redefined, and a more effective user preference formula is designed to describe user preferences more accurately. Finally, experimental results on a real-world dataset show that the proposed model delivers better recommendation performance.

    • Preference Vector Guided Co-evolutionary Algorithm for Many-objective Optimization

      2020, 31(12):3716-3732. DOI: 10.13328/j.cnki.jos.005869

      Abstract (2360) HTML (1264) PDF 1.99 M (3941) Comment (0) Favorites

      Abstract:The preference-inspired co-evolutionary algorithm (PICEA-g) uses goal vectors as preferences and takes the number of goal vectors that an individual dominates as its fitness value, which effectively decreases the proportion of non-dominated solutions in high-dimensional spaces. However, the obtained set approximates the whole Pareto front rather than the Pareto-optimal region that decision makers are actually interested in, which degrades performance and wastes computational resources on high-dimensional optimization problems. Therefore, this study proposes a preference vector guided co-evolutionary algorithm for many-objective optimization. First, the ASF extension function maps the ideal point of the evolving population onto the objective space, and the resulting point serves as a preference vector to guide the evolution direction of the population. Then, two temporary points obtained by a preference-region selection strategy are used to build the decision maker's region of interest (ROI): they determine the upper and lower bounds within which random preference sets are generated, and the co-evolution mechanism guides the population to converge towards the ROI. The resulting algorithm, ASF-PICEA-g, is compared with g-NSGA-II and r-NSGA-II on the WFG and DTLZ benchmark functions with 3 to 20 objectives. The experimental results demonstrate that ASF-PICEA-g performs well on the WFG series, where the obtained solution set is better than those of the comparison algorithms, and is slightly better than them on the DTLZ series, especially with 10 or more objectives. In addition, ASF-PICEA-g shows better stability, and the obtained solution set has better convergence and distribution.
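The ASF (achievement scalarizing function) at the heart of the preference mapping can be sketched in a few lines. The population, reference point, and weight vector below are hypothetical toy values, not the paper's setup; the sketch only shows how ASF ranks candidate solutions against a reference point:

```python
def asf(f, z, w):
    """Achievement scalarizing function: scaled distance of an
    objective vector f from reference point z along weights w."""
    return max((fi - zi) / wi for fi, zi, wi in zip(f, z, w))

# Hypothetical 3-objective candidates (to be minimized); z is the
# ideal point and w an illustrative preference (weight) vector.
population = [(0.9, 0.2, 0.4), (0.3, 0.8, 0.5), (0.5, 0.5, 0.5)]
z = (0.0, 0.0, 0.0)
w = (1.0, 1.0, 1.0)

# The candidate minimizing the ASF value is the best compromise
# along the preference direction.
best = min(population, key=lambda f: asf(f, z, w))
```

With unit weights the ASF reduces to the Chebyshev distance from the ideal point, so the balanced solution (0.5, 0.5, 0.5) wins over solutions with one large objective.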

    • Research on Feature Selection Algorithm Based on Natural Evolution Strategy

      2020, 31(12):3733-3752. DOI: 10.13328/j.cnki.jos.005874

      Abstract (1304) HTML (1432) PDF 2.01 M (2969) Comment (0) Favorites

      Abstract:Feature selection is an NP-hard problem that aims to improve model accuracy and reduce model training time by eliminating irrelevant or redundant features; it is thus an important data preprocessing technique in machine learning, data mining, and pattern recognition. This study proposes a new feature selection algorithm, MCC-NES, based on natural evolution strategies. First, the algorithm adopts a natural evolution strategy with diagonal covariance matrix modeling, which adaptively adjusts its parameters through gradient information. Second, to enable the algorithm to handle feature selection effectively, a feature coding mechanism is introduced in the initialization phase, and a new fitness function combining classification accuracy and dimensionality reduction is given. In addition, the idea of cooperative co-evolution among sub-populations is introduced to handle high-dimensional data: the original problem is decomposed into relatively small sub-problems to mitigate the combinatorial effect of the original problem scale, each sub-problem is solved independently, and the sub-problems are then combined to optimize the solution of the original problem. Furthermore, multiple competing evolutionary populations are applied to enhance the algorithm's exploration ability, and a population restart strategy is designed to prevent the population from falling into local optima. Finally, the proposed algorithm is compared with several traditional feature selection algorithms on UCI public datasets. The experimental results show that it completes the feature selection task effectively and performs excellently compared with classical feature selection algorithms, especially on high-dimensional data.

    • Survey of Traffic Travel-time Prediction Methods

      2020, 31(12):3753-3771. DOI: 10.13328/j.cnki.jos.005875

      Abstract (1126) HTML (1857) PDF 1.84 M (4041) Comment (0) Favorites

      Abstract:Travel-time prediction helps implement advanced traveler information systems. In recent years, a variety of travel-time prediction methods have been developed. In this study, travel-time prediction methods are classified into two categories: model-driven and data-driven methods. Two common model-driven approaches, queuing theory and the cell transmission model, are elaborated. The data-driven methods are classified into parametric methods, including linear regression, autoregressive integrated moving average, and Kalman filtering, and non-parametric methods, including neural networks, support vector regression, nearest neighbors, and ensemble learning. Existing travel-time prediction methods are analyzed and summarized in terms of source data, prediction range, accuracy, advantages, disadvantages, and application scenarios. Several solutions are proposed for shortcomings of existing methods, a novel data preprocessing framework and a travel-time prediction model are presented, and future research challenges are highlighted.

    • Recognition Method Based on Deep Learning for Chinese Textual Entailment Chunks and Labels

      2020, 31(12):3772-3786. DOI: 10.13328/j.cnki.jos.005885

      Abstract (1283) HTML (1501) PDF 1.51 M (2745) Comment (0) Favorites

      Abstract:Recognizing textual entailment (RTE) is the task of recognizing whether two sentences have an entailment relationship. In recent years, RTE in English has made great progress. Current research mainly focuses on judging the entailment type and pays little attention to locating the language chunks that cause the entailment relationship, which leads to low interpretability of RTE models. This study selects 12 000 Chinese entailment sentence pairs from the Chinese Natural Language Inference (CNLI) data and labels the chunks that cause their entailment relationships; 7 entailment types are then summarized in view of Chinese linguistic features. On this basis, two tasks are proposed: one is to recognize the seven-way entailment type of each entailment sentence pair, and the other is to recognize the boundaries of the entailment chunks within it. The proposed deep-learning-based method reaches accuracies of 69.19% and 62.09% on the two tasks. The experimental results show that the proposed approaches can effectively identify different types of entailment in Chinese and find the boundaries of the entailment chunks, providing a reliable benchmark for further research.

    • Temporal Epistemic Logic for Perfect Recall

      2020, 31(12):3787-3796. DOI: 10.13328/j.cnki.jos.005888

      Abstract (1965) HTML (1147) PDF 1.05 M (3352) Comment (0) Favorites

      Abstract:Traditional temporal epistemic logic cannot adequately express perfect recall: it cannot express the memories of individuals' knowledge. In the new system S5tCt, time and knowledge are integrated into a single operator, such that individual knowledge, general knowledge, and common knowledge can be indexed by a certain time. With this simple setting, each agent (individual or group) can recall all of its historical knowledge and thus has memory. The main result of the study is completeness: using a canonical model, it is proved that S5tCt is complete with respect to the class of all equivalence and monotone decreasing frames.

    • Chinese-Vietnamese Convolutional Neural Machine Translation with Incorporating Syntactic Parsing Tree

      2020, 31(12):3797-3807. DOI: 10.13328/j.cnki.jos.005889

      Abstract (1184) HTML (1357) PDF 1.24 M (3153) Comment (0) Favorites

      Abstract:Neural machine translation is the most widely used machine translation method at present and performs well on language pairs with rich corpus resources. However, it does not work well on language pairs that lack bilingual data, such as Chinese-Vietnamese. Taking the differences in grammatical structure between languages into consideration, this study proposes a neural machine translation method that incorporates syntactic parse trees. In this method, a depth-first search is used to obtain a vectorized representation of the syntactic parse tree of the source language, and the translation model is trained by taking the obtained vectors together with the source language embeddings as inputs. The method is implemented on the Chinese-Vietnamese language pair and achieves an improvement of 0.6 BLEU points over the baseline system. The experiment shows that incorporating syntactic parse trees can effectively improve the performance of machine translation models under resource scarcity.

    • Adaptive Active Learning for Semi-supervised Learning

      2020, 31(12):3808-3822. DOI: 10.13328/j.cnki.jos.005890

      Abstract (1364) HTML (1516) PDF 1.49 M (3127) Comment (0) Favorites

      Abstract:Active learning algorithms attempt to overcome the labeling bottleneck by issuing queries over a large collection of unlabeled examples. Existing batch-mode active learning algorithms suffer from three limitations: (1) models with assumptions on the data struggle to find images that are both informative and representative; (2) methods based on similarity functions or on optimizing certain diversity measurements may lead to suboptimal performance and select sets with redundant examples; (3) noisy labels remain an obstacle for active learning algorithms. This study proposes a novel batch-mode active learning method based on deep learning. A deep neural network generates the representations (embeddings) of labeled and unlabeled examples, and a label cycle mode is adopted that connects the embeddings of labeled examples to those of unlabeled examples of the same class and back, which considers both the informativeness and representativeness of examples and is robust to noisy labels. The proposed active learning method is applied to semi-supervised classification and clustering. A submodular function is designed to reduce the redundancy of the selected examples. Moreover, the query criteria of the weighted losses are optimized during active learning, which automatically trades off informative and representative examples. Specifically, the batch-mode active scheme is incorporated into the classification approaches, improving generalization ability; for semi-supervised clustering, the proposed active scheme for constraints is used to facilitate fast convergence and performs better than unsupervised clustering. To validate the effectiveness of the proposed algorithms, extensive experiments are conducted on diverse benchmark datasets for different tasks, and the experimental results demonstrate consistent and substantial improvements over state-of-the-art approaches.
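The redundancy-reducing submodular selection mentioned above can be illustrated with a generic greedy coverage sketch (the paper's actual submodular function and data are not public here; the "coverage sets" below are hypothetical stand-ins for the examples each candidate represents):

```python
def greedy_submodular_select(candidates, k):
    """Greedy maximization of a coverage (submodular) objective:
    at each step pick the candidate with the largest marginal gain,
    which discourages selecting redundant (overlapping) examples."""
    selected, covered = [], set()
    for _ in range(k):
        if not candidates:
            break
        best = max(candidates, key=lambda i: len(candidates[i] - covered))
        if not candidates[best] - covered:
            break  # no remaining candidate adds new coverage
        selected.append(best)
        covered |= candidates[best]
        candidates = {i: s for i, s in candidates.items() if i != best}
    return selected

# Hypothetical neighborhoods: each unlabeled example "covers" the ids
# of the examples it represents.
cands = {"a": {1, 2, 3}, "b": {3, 4}, "c": {1, 2}, "d": {5}}
picked = greedy_submodular_select(dict(cands), 3)
```

Because "c" is fully covered once "a" is chosen, the greedy rule skips it in favor of candidates with fresh coverage, which is exactly the anti-redundancy effect a submodular objective buys.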

    • Fast Temporal Cycle Enumeration Algorithm on Temporal Graphs

      2020, 31(12):3823-3835. DOI: 10.13328/j.cnki.jos.005968

      Abstract (1366) HTML (1688) PDF 1.32 M (3412) Comment (0) Favorites

      Abstract:A temporal graph is a graph in which each edge is associated with a timestamp. In temporal graphs, a temporal cycle denotes a loop in which the timestamps of the edges follow an increasing order. Temporal cycle enumeration has a number of real-life applications; for example, it can be applied to detect fraud in temporal financial networks, and the number of temporal cycles can be used to characterize the topological properties of temporal graphs. Based on the 2SCENT algorithm proposed by Rohit Kumar et al. in 2018, a new temporal cycle enumeration algorithm is proposed that uses additional cycle information to prune the search space. Specifically, the proposed algorithm works in two stages. First, it traverses the temporal graph to identify all root nodes that can possibly form temporal cycles, together with the corresponding time and length information of those cycles. Second, it performs a dynamic depth-first search using this information to find all valid temporal cycles. Extensive experiments are conducted on four real-life datasets with 2SCENT as the baseline. The results show that the proposed algorithm reduces the running time of 2SCENT by 50 percent.
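The problem being solved can be made concrete with a naive DFS that enumerates cycles under the increasing-timestamp constraint. This is an unpruned baseline sketch, not the two-stage pruned algorithm of the paper:

```python
from collections import defaultdict

def temporal_cycles(edges, max_len=4):
    """Enumerate cycles whose edge timestamps strictly increase.
    edges: list of (u, v, t) triples. Naive DFS for illustration."""
    adj = defaultdict(list)
    for u, v, t in edges:
        adj[u].append((v, t))
    cycles = []

    def dfs(root, node, t_last, path):
        visited = {e[0] for e in path} | {node}
        for v, t in adj[node]:
            if t <= t_last:
                continue  # timestamps must increase along the cycle
            if v == root:
                cycles.append(path + [(node, v, t)])
            elif v not in visited and len(path) < max_len - 1:
                dfs(root, v, t, path + [(node, v, t)])

    # Each temporal cycle is discovered exactly once: only the DFS
    # started from its minimum-timestamp edge can close it.
    for u, v, t in edges:
        dfs(u, v, t, [(u, v, t)])
    return cycles
```

For example, on the edge set {(a,b,1), (b,c,2), (c,a,3), (b,a,5)} this finds the two temporal cycles a→b→c→a and a→b→a; the path b→c→a→b is not a temporal cycle because the closing edge a→b has timestamp 1.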

    • 3D-online Stable Matching Problem for New Spatial Crowdsourcing Platforms

      2020, 31(12):3836-3851. DOI: 10.13328/j.cnki.jos.005969

      Abstract (1185) HTML (1366) PDF 1.85 M (2756) Comment (0) Favorites

      Abstract:In recent years, spatial crowdsourcing platforms have attracted more and more attention. One of their core issues is to assign proper workers to users' tasks under temporal and spatial constraints. Most existing work aims to maximize the number of finished tasks or the total utility score, ignoring the preferences of users and workers, and usually considers only two roles, workers and users, where workers travel to users' locations to finish tasks. However, new spatial crowdsourcing platforms involve three types of roles: workers, users, and workplaces, with the platform assigning workplaces at which workers serve users' tasks. A stable matching problem over these three dimensions has been proposed for the static scenario, but most spatial crowdsourcing platforms operate online, with workers and user-issued tasks appearing in real time. Therefore, this study formalizes the three-dimensional online stable matching problem in new spatial crowdsourcing platforms, and proposes a baseline algorithm and an improved algorithm that draw on techniques from artificial intelligence to solve it. Finally, extensive experiments are conducted on real and synthetic datasets to verify the efficiency and effectiveness of the proposed algorithms.
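The two-sided building block underneath any stable matching formulation is deferred acceptance (Gale-Shapley). The sketch below is the classical two-role version with made-up preference lists, not the paper's three-dimensional online algorithm:

```python
def gale_shapley(worker_prefs, user_prefs):
    """Classical deferred-acceptance stable matching between workers
    and users, each with a complete preference list over the other side."""
    free = list(worker_prefs)                  # workers yet to be matched
    next_choice = {w: 0 for w in worker_prefs} # next user each worker proposes to
    engaged = {}                               # user -> currently held worker
    rank = {u: {w: i for i, w in enumerate(p)} for u, p in user_prefs.items()}
    while free:
        w = free.pop()
        u = worker_prefs[w][next_choice[w]]
        next_choice[w] += 1
        if u not in engaged:
            engaged[u] = w
        elif rank[u][w] < rank[u][engaged[u]]:
            free.append(engaged[u])  # u prefers w: previous worker is freed
            engaged[u] = w
        else:
            free.append(w)           # u rejects w; w will propose again
    return {w: u for u, w in engaged.items()}

# Hypothetical toy instance: two workers, two users.
match = gale_shapley(
    {"w1": ["u1", "u2"], "w2": ["u1", "u2"]},
    {"u1": ["w1", "w2"], "u2": ["w1", "w2"]},
)
```

The resulting matching is stable: no worker-user pair would both prefer each other over their assigned partners. The paper's setting extends this with a third dimension (workplaces) and online arrivals, which classical deferred acceptance does not handle.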

    • Deployment and Scheduling for Fusion-based Detection in RF-powered Sensor Networks

      2020, 31(12):3852-3866. DOI: 10.13328/j.cnki.jos.005877

      Abstract (1060) HTML (1119) PDF 1.54 M (2428) Comment (0) Favorites

      Abstract:When an RF-powered sensor network is applied to target detection, rational planning of sensor placement and of the charging/sensing schedule is an effective way to improve the system's detection quality. Based on a fusion-based detection model, the joint optimization problem of sensor placement and scheduling is first formulated to maximize the system detection quality, and the problem is proved to be NP-complete. Then, after analyzing the impact of the fusion radius on the detection rate, a joint optimization greedy algorithm (JOGA) is proposed to solve the problem. Finally, the performance of JOGA is compared with exhaustive search and with a two-stage greedy algorithm (TSGA) that optimizes sensor placement and scheduling separately, through extensive numerical simulations as well as simulations based on real data traces collected from a vehicle detection experiment. The results show that JOGA outperforms TSGA in all simulation scenarios and is near-optimal in small-scale networks.

    • Implementation of Fair Contract Signing Protocol Based on Blockchain Technology

      2020, 31(12):3867-3879. DOI: 10.13328/j.cnki.jos.005880

      Abstract (4027) HTML (1765) PDF 1.32 M (5242) Comment (0) Favorites

      Abstract:Current blockchain technology only realizes the trusted transfer of “interests” in the network; the corresponding transfer of “responsibility” has not been implemented, and the key scientific questions are what the carrier of “responsibility” is and how its receipt is confirmed. Because only “interest” is passed on the blockchain network, the trust relationship established on the blockchain is one-way, and the originator's trust in the receiver cannot be established. This paper presents a deterministic fair contract signing protocol based on blockchain technology without a trusted third party, which changes the one-way trust relationship of transactional blockchain technology and establishes multi-way trust relationships among the participating nodes through an additional protocol. The transaction content in the blockchain is replaced by the contract to be signed, and “transfer” transactions are then conducted among multiple parties so that they sign the contract in a random order. The contract takes effect only when all parties have completed the sequential signatures among the linked tickets. Owing to the openness, tamper resistance, and non-repudiation of blockchain transaction data, cheating by any party to the contract is avoided, the fairness of the contract exchange process is guaranteed, and balance among the parties holds after the exchange completes. The protocol also provides real-time, dynamic management of multi-party contracts, including the addition, renewal, and deletion of contract content. Finally, the paper discusses fairness, privacy, and the choice of blockchain consensus.

    • Efficient Single-packet Traceback Approach Based on Alliance Theory

      2020, 31(12):3880-3908. DOI: 10.13328/j.cnki.jos.005882

      Abstract (1568) HTML (984) PDF 1.43 M (3467) Comment (0) Favorites

      Abstract:Single-packet traceback, as a key technology for addressing the network security management issues caused by the "statelessness" of the IP protocol, has drawn significant attention in recent years. However, prior work has not been widely deployed due to the following disadvantages: 1) inability to deploy incrementally; 2) lack of deployment incentives, i.e., non-deployers can free-ride; 3) high maintenance costs. This study proposes an efficient single-packet traceback approach based on alliance theory, termed TIST. It first establishes a traceability alliance on large-scale networks so as to exclude free-riding ASes and improve deployment incentives. Second, it designs a link-fingerprint establishment strategy for the traceability alliance that combines IP stream labeling and peer-to-peer filtering techniques, which weakens the traceability coupling between autonomous systems and enables incremental deployment. Finally, it defines a novel counting Bloom filter over network prefixes; by optimizing its parameters, traceback-enabled routers can quickly identify traceable packets and establish link fingerprints selectively. Extensive mathematical analysis and simulations are performed to evaluate the proposed approach. The results show that it significantly outperforms prior approaches in terms of deployability.
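The counting Bloom filter the abstract mentions replaces the bit array of a plain Bloom filter with small counters, so entries (here, network prefixes) can be removed as well as inserted. The sketch below uses generic parameters, not the prefix-optimized ones derived in the paper:

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter sketch: per-slot counters instead of bits,
    so items can be deleted. m slots, k hash functions (illustrative)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _indexes(self, item):
        # Derive k indexes by salting a cryptographic hash (simple, not fast).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for i in self._indexes(item):
            self.counts[i] += 1

    def remove(self, item):
        for i in self._indexes(item):
            self.counts[i] -= 1

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.counts[i] > 0 for i in self._indexes(item))

cbf = CountingBloomFilter()
cbf.add("10.0.0.0/8")
```

Membership queries cost k counter reads, which is what lets a router decide quickly whether a packet's prefix is traceable before doing any per-packet fingerprint work.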

    • Scheme of Revisable Blockchain

      2020, 31(12):3909-3922. DOI: 10.13328/j.cnki.jos.005894

      Abstract (3656) HTML (860) PDF 584.99 K (5262) Comment (0) Favorites

      Abstract:With the rapid development of the blockchain, the data on the chain no longer include only financial data, but also data on technology, culture, politics, and so on. However, once data is packaged on existing blockchain systems it cannot be revised, so invalid data cannot be deleted and wrong data cannot be modified. A blockchain that is revisable under certain conditions therefore has broad application prospects. Under the PoSpace (proof of space) consensus mechanism, a revisable blockchain scheme is proposed based on a trapdoor one-way function and a new blockchain structure. In this scheme, a revision operation can be executed only when the number of participating nodes exceeds a threshold; otherwise, it cannot be executed. Except for the revised data, the remaining data on the blocks remains unchanged, and all nodes in the network can still verify the validity of the data in the original ways. Experiments show that block generation and data revision are both efficient as long as the threshold is selected appropriately, that the link relationship between blocks is not changed by a data revision, and that the scheme is practically operable.

    • Attribute-based Encryption Scheme with Fast Encryption

      2020, 31(12):3923-3936. DOI: 10.13328/j.cnki.jos.005856

      Abstract (1238) HTML (927) PDF 1.48 M (2864) Comment (0) Favorites

      Abstract:Attribute-based encryption algorithms involve a large number of time-consuming exponentiations and bilinear pairing operations; therefore, some schemes propose to outsource encryption to a cloud server. However, these schemes do not provide a method for parallelizing the outsourced encryption on cloud servers; moreover, in these schemes the user manages too many private keys and the authorization center generates user private keys at excessive cost. To solve these problems, a fast encryption and sharing scheme based on the Spark big data platform is proposed. In this scheme, an encryption parallelization algorithm is designed according to the characteristics of the sharing access tree, with which the distribution of the secret value over the access tree and the encryption at the leaf nodes are parallelized, and the parallel tasks are handed over to a Spark cluster. As a result, the user client needs only one exponentiation per leaf node. In addition, the computation over private-key attributes is also outsourced to the Spark cluster: the authorization center generates a user private key with only four exponentiations, and users only need to store a single small key sub-item.

    • Routing Algorithm for Video Opportunistic Transmission Based on Multi-player Cooperative Game

      2020, 31(12):3937-3949. DOI: 10.13328/j.cnki.jos.005857

      Abstract (1122) HTML (952) PDF 1.41 M (2423) Comment (0) Favorites

      Abstract:The increasing popularity of video delivery among mobile users makes the problem of explosive traffic growth ever more serious for traditional wireless networks, and video transmission over mobile opportunistic networks (MONs) via device-to-device (D2D) communication is regarded as an ideal way to address this issue. However, data transmission in MONs relies mainly on two mechanisms, data replication and data forwarding. To achieve a high delivery ratio and low delivery delay, data replication is usually exploited excessively, and the resulting redundant replicas not only consume large amounts of nodal resources but also greatly increase the load on the network; for video transmission, this issue becomes even more severe because of the volume and continuity of video data. This study therefore proposes a novel routing scheme for video data transmission in MONs based on a multi-player cooperative game, which maximizes the quality of the reconstructed video while minimizing the overhead on nodal and network resources. Specifically, a marginal gain model is first constructed for video delivery quality, and the transmission of video data among multiple encountering nodes is then modeled as a multi-player cooperative game. Guided by Nash equilibrium theory, the video data carried by these encountering nodes is adaptively and optimally re-assigned among them. Extensive simulations based on real-life mobility traces and synthetic traces validate the effectiveness of the proposed routing algorithm.

    • Protocols for Secure Test on Relationship on Number Axis

      2020, 31(12):3950-3967. DOI: 10.13328/j.cnki.jos.005858

      Abstract (950) HTML (965) PDF 1.68 M (2275) Comment (0) Favorites

      Abstract:In recent years, secure multiparty computation (SMC) has been one of the research focuses in the field of information security and a key technology for protecting the privacy of distributed users in joint computation. Researchers have proposed many schemes for SMC problems, but many other secure multiparty computation problems remain to be investigated. This study involves private relationship tests on the number axis, covering three subproblems: (1) a secure test of the relationship between a confidential number and a private interval; (2) a multi-dimensional secure test of the relationship between multiple numbers and multiple intervals; (3) a secure test of the relationship between two confidential intervals. Private relationship tests on the number axis have extensive applications in the field of privacy protection and can be employed as building blocks for other SMC protocols. Based on a variant of Paillier's homomorphic encryption scheme (in which the party who encrypts the message chooses the base), three protocols are designed for the three subproblems above, and their security is analyzed using the simulation (ideal/real) paradigm in the standard model. The idea of private ratio calculation in these protocols can be used directly to solve the millionaires' problem over the rational numbers. More broadly, the three protocols can be employed as building blocks to solve the following SMC problems: a private test of the relationship between a point and an annulus, a private test of the relationship between a point and a convex polygon, and private proximity testing.
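The protocols rest on the additive homomorphism of Paillier encryption: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a party can compute on hidden values. The sketch below is standard textbook Paillier with toy, insecure key sizes, not the paper's base-choosing variant:

```python
import math
import random

def paillier_keygen(bits=256):
    """Toy Paillier key generation (insecure sizes; illustration only)."""
    def prime(b):
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11)):
                return p  # Fermat test: fine for a demo, not for production
    p, q = prime(bits), prime(bits)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return (n,), (n, lam, mu)  # public key, private key

def encrypt(pub, m):
    (n,) = pub
    r = random.randrange(1, n)  # fresh randomness per ciphertext
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    n, lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# Additive homomorphism: Dec(c1 * c2) = 12 + 30, computed on ciphertexts.
homomorphic_sum = decrypt(priv, c1 * c2 % (pub[0] ** 2))
```

An interval test such as subproblem (1) can then, roughly, have one party homomorphically form encryptions of differences like x − a and b − x and let the other party learn only their signs, without either side revealing its input.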

    • Image Restoration Based on Cascading Dense Network in Contourlet Transform Domain

      2020, 31(12):3968-3980. DOI: 10.13328/j.cnki.jos.005866

      Abstract (2460) HTML (1811) PDF 2.16 M (3464) Comment (0) Favorites

      Abstract:In recent years, owing to their powerful learning ability, convolutional neural networks (CNNs) have achieved more satisfactory results than conventional learning methods in image restoration tasks. However, CNN-based methods generally tend to produce over-smoothed restored images because they lose important textural details. To solve this problem, this study proposes an image restoration method based on a cascading dense CNN (CDCNN) in the contourlet transform domain, which can be used for three classical image restoration tasks: single-image denoising, super-resolution, and JPEG decompression. First, a compact cascading dense network structure is constructed, which not only fully exploits the hierarchical features of images at different levels but also alleviates the long-term dependency problem as the network depth grows. Next, the contourlet transform is introduced into the CDCNN to sparsely represent important image features: the contourlet subbands of the low-quality image and of the corresponding restored image are used as the input and output of the network respectively, which recovers realistic structure and texture details more effectively. Comprehensive experiments on standard benchmarks show the consistent superiority of the proposed method over state-of-the-art methods on all three tasks: it not only achieves higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), but also produces reconstructed images with more realistic textural details.

    • Dynamically Fine-grained Scheduling Method in Cloud Environment

      2020, 31(12):3981-3999. DOI: 10.13328/j.cnki.jos.005892

      Abstract (1353) HTML (1620) PDF 1.98 M (3043) Comment (0) Favorites

      Abstract:The coarse-grained scheduling used in cloud computing platforms allocates fixed quantities of resources to tasks. This allocation can easily lead to problems such as resource fragmentation, over-commitment, and inefficient resource utilization. This study proposes a dynamically fine-grained scheduling method to resolve these problems. The method estimates a task's resource requirements from similar tasks, divides the task into execution stages according to those requirements, and matches task resource requirements with available server resources stage by stage, refining two aspects of allocation granularity: allocation duration and allocation quantity. Furthermore, the method may compress resource allocations to further improve resource utilization and performance, and it uses several mechanisms, including runtime resource monitoring, allocation policy adjustment, and scheduling constraint checks, to safeguard the resource utilization and performance of the cloud computing platform. Based on this method, a scheduler has been implemented in the open-source cloud computing platform Yarn. The test results show that dynamically fine-grained scheduling can resolve the resource allocation problems above, significantly improving resource utilization and performance with acceptable fairness and scheduling response times.


Contact
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: ¥70
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing 100190
Phone: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn
