• Volume 29, Issue 11, 2018: Table of Contents
    • Special Issue Articles
    • Parallel Retrieval Approach of Cloud Workflow Model Repositories Based on Data Set Partitioning

      2018, 29(11):3241-3259. DOI: 10.13328/j.cnki.jos.005480

      Abstract: In an integrated service platform composed of multiple industry cloud service platforms, the underlying cloud workflow model repository keeps growing as the number of cloud service platforms and their tenants increases. Once the repository becomes very large, existing retrieval methods for large-scale process model repositories can no longer meet the need for efficient retrieval, so a more efficient parallel retrieval method is required. To address this issue, this paper adopts two data partitioning modes, equipartition and clustering-based partitioning, to divide large-scale cloud workflow model repositories into small pieces. Combined with the improved process retrieval algorithm proposed in the authors' previous work, a series of data-partitioning-based parallel process retrieval approaches are put forward to accelerate large-scale process retrieval. These approaches comprise four kinds of retrieval algorithms: static and dynamic parallel retrieval over uniformly partitioned and automatically clustered model sets. Finally, experiments on a large-scale simulated process model library and an actual cloud workflow model repository are conducted to evaluate the efficiency of the four parallel retrieval algorithms.
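
      The partition-then-search idea can be illustrated with a minimal sketch (Python; `match_score` is a hypothetical stand-in for the authors' improved process retrieval algorithm, and the partition count and threshold are arbitrary assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def partition_evenly(models, k):
    """Equipartition mode: split the repository into k near-equal pieces."""
    return [models[i::k] for i in range(k)]

def retrieve_partition(partition, query, match_score, threshold=0.8):
    """Run the process-matching step (hypothetical match_score) on one piece."""
    return [m for m in partition if match_score(query, m) >= threshold]

def parallel_retrieve(models, query, match_score, k=8):
    """Static parallel retrieval: one worker per partition, merged results."""
    with ThreadPoolExecutor(max_workers=k) as pool:
        futures = [pool.submit(retrieve_partition, p, query, match_score)
                   for p in partition_evenly(models, k)]
        return [m for f in futures for m in f.result()]
```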

    • Adaptation Algorithm of Semantic Workflows Based on Behavioral Characteristics

      2018, 29(11):3260-3277. DOI: 10.13328/j.cnki.jos.005476

      Abstract: Workflow adaptation is an important task in workflow reuse. During semantic workflow adaptation based on workflow streams, i.e., the reusable segments of semantic workflows, adaptation fails when the workflow stream repository contains no stream structurally similar to a stream of the retrieved semantic workflow. To address this problem, this paper proposes an improved method: an adaptation algorithm for semantic workflows based on the behavioral characteristics of workflow streams. The set of task adjacency relations is used to express a workflow stream's behavioral characteristics. First, for each stream of the retrieved semantic workflow (called a query stream), the data index of the anchor set and stream matching rules are used to filter the workflow stream repository and obtain matching stream candidates. Then, these candidates are verified against the change request and the behavioral similarity metric to obtain the query streams that need to be substituted and the corresponding matching streams that best fit the change request and are behaviorally similar to them. Next, each matching stream is used to substitute its query stream in the retrieved workflow, gradually repairing the workflow's defects, and finally the adapted semantic workflow is obtained. The experimental results show that, compared with the existing adaptation algorithm based on workflow streams, the proposed algorithm achieves an adapted semantic workflow set of higher overall quality and has better adaptability. It can provide higher-quality semantic workflows as references for business process managers when adapting workflows to new business requirements, and helps improve the efficiency and quality of workflow reuse in business process management (BPM).
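
      As an illustration, the adjacency-relation representation and one plausible reading of the behavioral similarity metric (the abstract does not give the exact formula; Jaccard overlap is an illustrative stand-in) look like this:

```python
def adjacency_relations(trace):
    """Task adjacency relations of one trace: (a, b) iff b directly follows a."""
    return {(trace[i], trace[i + 1]) for i in range(len(trace) - 1)}

def behavioral_similarity(stream_a, stream_b):
    """Overlap of the two streams' adjacency-relation sets (Jaccard overlap,
    an assumed stand-in for the paper's behavioral similarity metric)."""
    ra = set().union(*(adjacency_relations(t) for t in stream_a))
    rb = set().union(*(adjacency_relations(t) for t in stream_b))
    return len(ra & rb) / len(ra | rb) if ra | rb else 1.0

print(behavioral_similarity([list("abcd")], [list("abdc")]))  # 0.2
```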

    • Approach to Mining Length-Two Loops from Logs without the “aba” Pattern

      2018, 29(11):3278-3294. DOI: 10.13328/j.cnki.jos.005478

      Abstract: Current research on mining length-two loops depends on the "aba" pattern. However, this pattern does not necessarily appear in logs that satisfy local completeness. This research aims at mining length-two loops without the pattern, resulting in a new algorithm (the αL+-algorithm) based on the α-algorithm. First, an order vector matrix is established from the tasks in the logs to abstract the variant structures of length-two loops. Then, loops are distinguished from concurrent structures using each event's frequency and position in traces. Finally, proximity and circuit abstraction are used to eliminate the interference caused by concurrent branches. The experimental results show that the αL+-algorithm can handle length-two loops with or without the "aba" pattern. In addition, the αL+-algorithm is implemented in the ProM tool.
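
      For context, a minimal sketch of the directly-follows counting from which such an order matrix can be built (the αL+ rules for separating loops from concurrency are only summarized in the abstract):

```python
from collections import Counter

def order_matrix(log):
    """Directly-follows counts per task pair: the raw material for the
    order vector matrix used to abstract length-two loop variants."""
    follows = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1
    return follows

# Both a loop a<->b and concurrent a||b leave (a,b) and (b,a) successions in
# a log, so the algorithm further uses event frequency and trace position.
print(order_matrix([list("acbd"), list("abcd")]))
```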

    • Optimized Topic Model for Clinical Pathway Mining

      2018, 29(11):3295-3305. DOI: 10.13328/j.cnki.jos.005481

      Abstract: In the healthcare domain, the care process is critical to care quality. The clinical pathway (CP), which integrates a large body of medical knowledge, is a tool for standardizing the care process. However, most existing CPs are designed by experts with limited experience and data, and consequently they tend to be static and non-adaptive in practice. In the authors' previous work, topic-based CP mining was shown to be an effective approach to discovering the process model from clinical data: the various clinical activities are summarized into several topics by latent Dirichlet allocation (LDA), each clinical day in a patient trace is converted to a topic distribution, and a CP model is derived by applying process mining to the topic-based sequences. However, LDA ignores the similarity between clinical days, so in some cases two similar days may be assigned quite different topic distributions. This paper proposes an optimized topic model for clinical topic discovery that incorporates a similarity constraint based on domain knowledge. Experiments on real data demonstrate that the new approach discovers high-quality topics that are useful for topic-based CP mining.
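
      The baseline step, plain LDA over clinical days before the paper's similarity constraint is added, can be sketched with scikit-learn (the activity names are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is one clinical day: the bag of activities performed.
days = ["ct_scan blood_test nursing", "surgery anesthesia nursing",
        "blood_test nursing discharge_review"]

counts = CountVectorizer().fit_transform(days)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # one topic distribution per clinical day
print(theta)  # rows sum to ~1; these vectors feed the process mining step
```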

    • Multi-Objective Workflow Scheduling Based on Delay Transmission in Mobile Cloud Computing

      2018, 29(11):3306-3325. DOI: 10.13328/j.cnki.jos.005479

      Abstract: The integration of cloud computing and the mobile Internet promotes the development of mobile cloud computing. Migrating workflow tasks to the cloud can not only improve the computing capacity of mobile devices but also reduce their battery consumption. However, the large amount of data transmission introduced by unreasonable task scheduling strategies can damage the QoS (quality of service) of a workflow and increase the energy consumption of the mobile device. In this paper, a multi-objective workflow scheduling algorithm based on a delay transmission mechanism (MOWS-DTM) is proposed to optimize the execution time of the workflow and the energy consumption of the mobile device in mobile cloud computing environments. MOWS-DTM is derived from the genetic algorithm, is integrated with the workflow scheduling process, and takes both task scheduling location and execution sequence into consideration in its coding strategy. As a mobile user moves, the wireless signal strength of the device varies with location: the stronger the signal, the less time it takes to transmit data of a fixed size, and the less energy the device consumes. Moreover, a workflow contains many non-critical tasks whose execution time can be increased without affecting the workflow's makespan. Therefore, the delay transmission mechanism (DTM), incorporated into the scheduling process, can optimize the energy consumption of the mobile device and the makespan of the workflow simultaneously. Simulation results demonstrate significant multi-objective performance improvements of MOWS-DTM over the MOHEFT and RANDOM algorithms.
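
      A toy model of why delayed transmission helps (all constants are invented; the paper's actual signal and energy models are more detailed):

```python
def transmit_cost(data_mb, signal):
    """Illustrative model: stronger signal => higher rate, lower energy.
    The rate (Mb/s) and radio power (J/s) constants are invented."""
    rate = 2.0 * signal   # Mb/s, scales with signal strength in [0, 1]
    power = 1.5           # J/s while the radio is active
    t = data_mb / rate
    return t, power * t   # (transmission time, energy consumed)

# A non-critical task has slack: delaying its upload from a weak-signal
# slot (0.4) to a strong-signal slot (0.9) cuts both time and energy.
print(transmit_cost(50, 0.4))  # (62.5 s, 93.75 J)
print(transmit_cost(50, 0.9))  # (~27.8 s, ~41.7 J)
```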

    • Multi-Scientific Workflow Scheduling Algorithm Based on Multi-Objective in Cloud Environment

      2018, 29(11):3326-3339. DOI: 10.13328/j.cnki.jos.005477

      Abstract: To address the problem that existing multi-scientific-workflow scheduling algorithms in cloud environments do not take secure scheduling into consideration, this paper proposes a multi-scientific-workflow security-deadline-constrained cost optimization algorithm (MSW-SDCOA). First, based on data-flow dependency, MSW-SDCOA compresses the scientific workflows, reducing the number of task nodes and thus the scheduling overhead. Second, by optimizing the HEFT algorithm, a scheduling sequence is formed to realize overall multi-objective optimization. Finally, by optimizing the pheromone and heuristic-information update strategies in ant colony optimization (ACO), the cost optimization effect is further improved. Simulation results show that the cost optimization of MSW-SDCOA is about 14% better than that of the MW-DBS algorithm.
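
      For reference, the classical ACO rules that the abstract says MSW-SDCOA refines are the pheromone update and the transition probability; the paper's specific modifications are not reproduced here:

```latex
% Evaporation-plus-deposit pheromone update and ant-k transition probability
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
p_{ij}^{k} \;=\; \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}
{\sum_{l \in \mathrm{allowed}_k} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}}
```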

    • Approach to Optimal Staff Assignment in Workflows Based on Collaboration Patterns

      2018, 29(11):3340-3354. DOI: 10.13328/j.cnki.jos.005483

      Abstract: The execution of a business process usually requires the collaboration of many staff members. Once the capability of each staff member is fixed, collaboration becomes the dominant influence on the performance of process instances: in general, the better the collaboration among the actors performing collaborative tasks, the higher the performance a workflow instance achieves. In this paper, an approach to optimizing staff assignment is proposed based on collaboration patterns with high performance. First, it computes the compatibility of actors performing activities together from the historical logs. It then mines collaboration patterns with high compatibility from the process logs and matches them with the process for staff assignment by means of encoding. Experimental results demonstrate that this approach can efficiently assign staff in a workflow so as to maximize collaboration.
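
      An illustrative proxy for the compatibility computation (the paper's exact definition is not given in the abstract; here a pair's score is the normalized inverse of its average task duration in the logs):

```python
from collections import defaultdict

def compatibility(history):
    """Pairwise actor compatibility from historical instances.
    history: (actor_set, duration) pairs; a shorter joint duration is read
    as better collaboration (an assumed proxy, not the paper's formula)."""
    durations = defaultdict(list)
    for actors, duration in history:
        actors = sorted(actors)
        for i in range(len(actors)):
            for j in range(i + 1, len(actors)):
                durations[(actors[i], actors[j])].append(duration)
    means = {p: sum(d) / len(d) for p, d in durations.items()}
    best = min(means.values())
    return {p: best / m for p, m in means.items()}

history = [({"ann", "bob"}, 4.0), ({"ann", "bob"}, 6.0), ({"ann", "eve"}, 10.0)]
print(compatibility(history))  # {('ann', 'bob'): 1.0, ('ann', 'eve'): 0.5}
```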

    • Manufacturing Service Composition Self-Adaptive Approach Based on Dynamic Matching Network

      2018, 29(11):3355-3373. DOI: 10.13328/j.cnki.jos.005475

      Abstract: In the cloud manufacturing (CMfg) model, both manufacturing tasks and manufacturing services operate in a dynamic environment, so the dynamic adaptability of manufacturing service composition needs to be addressed urgently. To this end, a theoretical model of the manufacturing task and manufacturing service dynamic matching network (DMN) is constructed from the matching relationships between manufacturing tasks and manufacturing services. Based on this model, a three-phase manufacturing service composition self-adaptive approach (TPMSCSAA) is proposed. First, using the load and the dynamic QoS evaluated by a load queue model as the optimization goals, optimal manufacturing service composition is transformed into a shortest-path search in the manufacturing service network, realizing dynamic scheduling of service compositions. Second, changes in manufacturing tasks and services are captured to refresh the manufacturing task network and manufacturing service network in real time. Third, the dynamic scheduling algorithm is triggered to rebuild the dynamic matching edges. Finally, a simulation of an elevator design service composition is carried out to validate the proposed approach.
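
      The shortest-path reduction in the first phase can be sketched with a plain Dijkstra search (edge weights stand for the combined load/QoS cost; the graph and service names are invented):

```python
import heapq

def shortest_composition(graph, source, target):
    """Dijkstra over a service network whose edge weights combine load and
    dynamic QoS (here precomputed floats). Returns (path, total cost)."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]

g = {"task": [("s1", 0.3), ("s2", 0.5)], "s1": [("s3", 0.4)],
     "s2": [("s3", 0.1)], "s3": []}
print(shortest_composition(g, "task", "s3"))  # (['task', 's2', 's3'], 0.6)
```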

    • Topic-Adaptive Web API Recommendation Method via Integrating Multidimensional Information

      2018, 29(11):3374-3387. DOI: 10.13328/j.cnki.jos.005482

      Abstract: How to automatically generate or recommend a set of Web APIs for Mashup creation according to a user's natural-language requirement description is a focus of attention among business process managers and service composition designers. A topic-adaptive Web API recommendation method, HDP-FM (hierarchical Dirichlet processes-factorization machine), is proposed in this paper to recommend a set of Web APIs for Mashup creation. The approach first treats the Web API description documents as a corpus and trains a topic distribution vector for each Web API with the HDP model. It then uses the generated model to predict a topic distribution vector for a Mashup, and these vectors are used to calculate similarities. Finally, a factorization machine scores and ranks Web APIs, taking as inputs the similarity between Mashups, the similarity between Web APIs, the popularity of Web APIs, and the co-occurrence of Web APIs; a Mashup can then be created from the recommended Web APIs. To verify the performance of HDP-FM, a series of experiments are conducted on a real dataset crawled from the ProgrammableWeb platform. The results show that HDP-FM performs well compared with other methods in terms of precision, recall, F-measure, and NDCG@N.
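
      The scoring stage presumably follows the standard second-order factorization machine form, with the similarity, popularity, and co-occurrence features above assembled into the input vector x (the v_i are learned latent vectors):

```latex
% Second-order factorization machine score for a candidate Web API
\hat{y}(\mathbf{x}) \;=\; w_0 + \sum_{i=1}^{n} w_i x_i
  + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j
```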

    • Quality Prediction for Services Based on SOM Neural Network

      2018, 29(11):3388-3399. DOI: 10.13328/j.cnki.jos.005474

      Abstract: Quality prediction for services is a hot research topic in service recommendation and composition. Designing an accurate quality prediction approach that meets users' personalized needs is challenging due to the sparsity of historical QoS data. To solve this problem, this paper proposes an SOM-neural-network-based service quality prediction approach (SOMQP). First, based on historical QoS data, the approach clusters users and services respectively by applying the SOM neural network algorithm; a new top-k selection mechanism is then adopted to obtain similar users and similar services, comprehensively considering user reputation and service relevance. Finally, a hybrid user-based and item-based strategy is used to predict the missing QoS values. Comprehensive experiments on the real Web service dataset WS-Dream indicate that, compared with the classical CF and K-means methods, the presented approach achieves higher QoS prediction accuracy.
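
      A common formulation of the final hybrid prediction step (the abstract does not spell out its weighting, so the classical user/item-based combination is shown; λ balances the two sides, and N(u), N(s) are the top-k similar users and services):

```latex
% lambda balances the user-based and item-based predictions
\hat{q}_{u,s} = \lambda \Bigl( \bar{q}_u
    + \tfrac{\sum_{v \in N(u)} \mathrm{sim}(u,v)\,(q_{v,s} - \bar{q}_v)}
            {\sum_{v \in N(u)} |\mathrm{sim}(u,v)|} \Bigr)
  + (1-\lambda) \Bigl( \bar{q}_s
    + \tfrac{\sum_{t \in N(s)} \mathrm{sim}(s,t)\,(q_{u,t} - \bar{q}_t)}
            {\sum_{t \in N(s)} |\mathrm{sim}(s,t)|} \Bigr)
```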

    • Review Articles
    • Design, Modeling and Verification of Security Protocols Based on Event-B Method

      2018, 29(11):3400-3411. DOI: 10.13328/j.cnki.jos.005622

      Abstract: With progress in software refinement verification methods and theorem provers such as Isabelle/HOL and VCC, researchers have begun to study the design and modeling of security protocols and to verify the correctness of their source code based on refinement techniques and theorem provers. In this paper, the Event-B method and the verification tools Isabelle/HOL and VCC are introduced, and typical work on the design and modeling of security protocols and the verification of their source code is surveyed, including the refinement-based design of security protocols, the refinement-based modeling of TPM-based protocol applications, and the refinement-based verification of source code.

    • Trustworthiness Evidence Supporting Evaluation of Software Process Trustworthiness

      2018, 29(11):3412-3434. DOI: 10.13328/j.cnki.jos.005291

      Abstract: In recent years, software trustworthiness has been a focus of interest for researchers. A widely shared view is that software trustworthiness is the degree to which software behavior accords with people's expectations. Since quality is formed in the process, the evidence that builds confidence in software quality is likewise found in the software process: the process subjects, the process behaviors, and the various methods for guaranteeing the quality of process products provide the basic evidence for establishing software trustworthiness. Evidence-based decision-making and management is at the core of modern quality theories, and both evidence-based and data-driven software engineering approaches have tried to address the problem from the perspective of objective data. Supported by the National Natural Science Foundation of China, this study presents a software process trustworthiness model that builds confidence from the viewpoint of software processes. As the fundamental part of the model, evidence is used to transfer the trust chain bottom-up and to support the evaluation of software process trustworthiness. Focusing on the evidence system in the model, the study follows four principles, namely integrity, necessity, compatibility, and sustainability. According to the basic requirements of process management, it investigates CMMI and other software process reference models to refine and cross-examine the evidence, creates new evidence adapted to open-source software development, and extends some evidence to enhance process trustworthiness. The result is an evidence system with high credibility, objectivity, and comparability, which establishes trustworthiness along three dimensions, process subjects, process behaviors, and process products, and covers the whole software development lifecycle. Combined with the other parts of the trustworthiness model, it supports evidence-based, bottom-up evaluation of software process trustworthiness and can be widely applied in the software industry.

    • Reusable and Customizable SaaS Software Development Approach Based on Variation Model

      2018, 29(11):3435-3454. DOI: 10.13328/j.cnki.jos.005294

      Abstract: In the context of cloud computing, software is delivered as a service to customers through the Internet, a delivery mode called SaaS (software as a service). Unlike traditionally delivered software, SaaS software usually runs on the server side and serves multiple tenants at the same time. As a result, SaaS software should be designed to meet the individual needs of different tenants, be flexible enough to cater for rapidly changing tenant requirements, and ensure that responding to one tenant's change does not affect other tenants. Using an adaptive service composition method based on variability management, a reusable and customizable SaaS software development method for cloud computing is proposed, and a supporting platform is developed to facilitate its adoption. The platform includes a service composition engine supporting the SaaS mode and a remote customization tool. The proposed method first creates an abstract service composition model to meet the common requirements of different tenants; the supporting platform then interprets the model and derives multiple process instances at runtime, which are executed concurrently and in isolation. A case study on a domain-specific SaaS application validates the feasibility of the method and evaluates the performance of the platform. Experimental results show that the proposed method and platform present a viable alternative for the multi-tenant, multi-instance delivery mode of SaaS software.

    • Data-Driven Bilayer Software Process Mining

      2018, 29(11):3455-3483. DOI: 10.13328/j.cnki.jos.005304

      Abstract: To address the difficulty of applying traditional process mining to software process data, which often lack activity and case attributes, this paper focuses on software process data and proposes a bilayer software process mining approach. In the activity layer, a weighted structured link vector model is proposed to vectorize the process log, and the result of fuzzy clustering, which serves as the activity information, is selected according to the average activity entropy. In the process layer, based on heuristic relation metrics, the paper studies the case of incomplete cycles, presents a log-completeness condition for identifying loops from single firing sequences, and then proposes a method to measure loop affiliation. Real-world data sets are used to show the effectiveness and correctness of the proposed bilayer software process mining approach.

    • Uncertainty Measures of Rough Set Based on Strictly Concave Functions

      2018, 29(11):3484-3499. DOI: 10.13328/j.cnki.jos.005292

      Abstract: Based on semantic analysis, a general axiomatic definition of uncertainty measures for rough sets is proposed. By extending the Shannon entropy function to strictly concave functions, a class of uncertainty measures based on strictly concave functions is put forward: each is a weighted average of a strictly concave function whose variable is a conditional probability. A series of measuring methods follow from this construction. The fuzzy-entropy-based measuring methods are discussed from the viewpoint of strictly concave functions and are proved to be special cases of the method proposed in this paper. The differences and relationships among the roughness measure, the modified roughness measure, and the uncertainty measures based on strictly concave functions are discussed. Finally, examples are designed to compare the methods discussed in this paper, showing that the proposed uncertainty measures based on strictly concave functions are consistent with the semantics of rough set uncertainty.
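
      One way to read the construction (notation assumed for illustration: U the universe, {X_1, …, X_m} the equivalence classes of R, and {D_j} the target classes; the exact form in the paper may differ):

```latex
% Weighted average of a strictly concave f over conditional probabilities;
% f(x) = -x log x recovers the Shannon-entropy-based measure
G_f(R) \;=\; \sum_{i=1}^{m} \frac{|X_i|}{|U|} \sum_{j} f\bigl( P(D_j \mid X_i) \bigr),
\qquad f \text{ strictly concave},\; f(0)=f(1)=0
```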

    • Personalized Map-Matching Algorithm Based on Driving Preference

      2018, 29(11):3500-3516. DOI: 10.13328/j.cnki.jos.005297

      Abstract: With the increasing proliferation of positioning technologies, huge volumes of trajectory data are being generated and used in many modern applications such as path planning and location-based services. Accurate road network matching can improve the service quality of these applications, but low-sampling-rate trajectories pose a major challenge for map-matching. This paper studies the problem of matching an individual low-sampling trajectory to a dynamic multi-criteria road network based on the user's driving preferences. First, a driving preference model for the dynamic road traffic network is proposed. Based on this model, a two-stage map-matching algorithm is developed: local matching searches multiple likely local Skyline paths, and a global dynamic programming algorithm then selects the k most probable global paths as the matching result. Experiments show that the proposed method is both effective and efficient.

    • #SAT Solving Algorithms Based on Extension Rule Using Heuristic Strategies

      2018, 29(11):3517-3527. DOI: 10.13328/j.cnki.jos.005298

      Abstract: #SAT is widely used in the field of artificial intelligence, and many real-world problems can be reduced to #SAT to obtain the number of models of a propositional theory. Through an in-depth study of #SAT solvers based on the extension rule, this paper finds that the order of reduction clauses has a great impact on the size of the maxterm space. Two heuristic strategies, MW and LC&MW, are therefore proposed to enhance the efficiency of #SAT solving: MW chooses the clause with the maximum weight as the reduction clause in each call, while LC&MW chooses the longest clause and breaks ties by maximum weight. The algorithm CER_MW is designed using MW, and CER_LC&MW using LC&MW. Experimental results show that CER_MW and CER_LC&MW significantly improve on previous #SAT algorithms in both solving efficiency and solving capacity: they run 1.4 to 100 times faster than other state-of-the-art extension-rule-based solvers and solve more instances within a given time limit.
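
      Each selection strategy reduces to a one-line choice; a sketch (the clause weighting is whatever the solver maintains, supplied here as a lookup):

```python
def choose_reduction_clause(clauses, weight, strategy="MW"):
    """Pick the next reduction clause for extension-rule #SAT counting.
    MW: maximum weight.  LC&MW: longest clause, max weight as tie-breaker."""
    if strategy == "MW":
        return max(clauses, key=weight)
    if strategy == "LC&MW":
        return max(clauses, key=lambda c: (len(c), weight(c)))
    raise ValueError(strategy)

clauses = [("a", "b"), ("a", "-c", "d"), ("-b", "c", "e")]
w = {clauses[0]: 5, clauses[1]: 2, clauses[2]: 4}  # invented weights
print(choose_reduction_clause(clauses, w.get, "MW"))     # ('a', 'b')
print(choose_reduction_clause(clauses, w.get, "LC&MW"))  # ('-b', 'c', 'e')
```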

    • Multi-Authority Attribute-Based Alterable Threshold Ring Signature without Central Authority

      2018, 29(11):3528-3543. DOI: 10.13328/j.cnki.jos.005293

      Abstract: Attribute-based ring signatures have gradually become a hot topic in the field, owing to their strong expressive power, flexibility, and ease of hiding the signer's identity. Analyzing existing attribute-based ring signature schemes reveals that most earlier schemes cannot resist collusion attacks while maintaining unconditional strong anonymity, and suffer from issues such as attribute key escrow, fixed thresholds, and inefficient verification. To address these defects, this paper first introduces formal definitions and a security model for multi-authority attribute-based threshold ring signatures, and then presents a multi-authority attribute-based alterable threshold ring signature scheme. The scheme uses a distributed key generation protocol to constrain the rights of attribute authorities and overcome the attribute key escrow problem. By embedding a random identity factor in each user's attribute key and introducing a random fuzzy parameter in each signature, the scheme provides both unconditional strong anonymity and collusion resistance. In addition, a batch verification algorithm is proposed to reduce the computational complexity of verification from nO(·) to O(·)+n. Under the random oracle model and the computational Diffie-Hellman assumption, the scheme is proven existentially unforgeable and resistant to collusion attacks launched by malicious users with complementary attributes under chosen message and predicate attacks.

    • Automatic Search Algorithm for Impossible Differential Trails and Zero-Correlation Linear Trails in SIMON

      2018, 29(11):3544-3553. DOI: 10.13328/j.cnki.jos.005296

      Abstract: Impossible differential cryptanalysis and zero-correlation linear cryptanalysis are two of the most useful cryptanalysis methods for symmetric ciphers. Taking the non-linear components into consideration, this article proposes a SAT-based method for searching for impossible differentials and zero-correlation linear approximations of SIMON. Applying the method finds more impossible differentials and zero-correlation linear approximations for 11-round SIMON. Furthermore, the tool can prove whether impossible differentials (or zero-correlation linear approximations) exist in a given number of rounds of SIMON, particularly for certain subsets of input and output difference (mask) patterns. Using this tool, the security of SIMON and the choice of its parameter sets against impossible differential cryptanalysis are also explored.

    • Dynamically Scalable and Efficient Approach for Single-Packet Traceback

      2018, 29(11):3554-3574. DOI: 10.13328/j.cnki.jos.005330

      Abstract: IP spoofing, a trick that conceals the attacker's location, bypasses attack prevention, gathers confidential information, and enhances destructive power, is prevalent in current network attacks and causes severe damage to the Internet. For this reason, IP traceback technology, which can trace an individual attack packet to its origin and thereby disclose the attacker's identity, has been extensively researched and developed. Although existing approaches achieve tracking to some extent, they suffer from leakage of topology privacy, lack of scalability, and high processing overhead. To tackle these issues, this paper proposes a dynamically scalable and efficient approach for single-packet IP traceback, termed SEE. SEE first designs a hierarchical traceback system architecture to weaken the traceability relationships among autonomous systems. It then employs OSPF-based intra-AS traceback network construction, edge-coloring-based traceback address assignment, link-binding-based path fingerprint establishment and extraction, anti-spoofing alliance establishment based on peer-to-peer relationships, and a stable transition from intra-AS to inter-AS traceback to improve scalability and cut down processing overhead. Extensive mathematical analysis and simulations show that the proposed approach significantly outperforms prior approaches in scalability and efficiency.

    • Registration Based on Dual-Feature Gaussian Mixture Model and Dual-Constraint Spatial Transformation

      2018, 29(11):3575-3593. DOI: 10.13328/j.cnki.jos.005300

      Abstract: Non-rigid point set registration is very important in many fields of study. Current well-known algorithms generally estimate correspondences and update the transformation based on a single feature and a single constraint, which limits their performance and range of application. This paper presents a non-rigid point set registration method based on a dual-feature Gaussian mixture model and a dual-constraint transformation. First, a dual-feature descriptor is defined, built from a global feature and a local feature. Then, the Gaussian mixture model is extended with the dual-feature descriptor to obtain a dual-feature Gaussian mixture model. Finally, a local structure constraint descriptor is defined and used together with a global structure constraint descriptor to preserve the local and global structures of the point set. The method iteratively alternates between correspondence estimation with the dual-feature Gaussian mixture model and dual-constraint transformation updates based on Gaussian radial basis functions, matching non-rigid point sets accurately. Performance is evaluated on synthetic point set registration, CMU sequence image registration, remote sensing image registration, IMM face data registration, and real image feature point registration. Compared with eight other state-of-the-art methods, the new method shows the best alignment in most scenarios.
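
      The Gaussian radial basis transformation used in the update step has the standard form found in non-rigid registration (β the kernel width, w_n the coefficients re-solved at each iteration; shown for context, not reproduced from the paper):

```latex
% Displacement field as a sum of Gaussian radial basis functions
\mathcal{T}(\mathbf{x}) \;=\; \mathbf{x}
  + \sum_{n=1}^{N} \mathbf{w}_n \exp\!\Bigl( -\frac{\|\mathbf{x} - \mathbf{x}_n\|^{2}}{2\beta^{2}} \Bigr)
```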

Contact
  • Journal of Software
  • Sponsor: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825 / CN 11-2560/TP
  • Domestic price: CNY 70