• Volume 23, Issue 9, 2012 Table of Contents
    • Theory of Borel Probability Truth Degrees of Propositions in Łukasiewicz Propositional Logics and a Limit Theorem

      2012, 23(9):2235-2247. DOI: 10.3724/SP.J.1001.2012.04179

      Abstract:By means of Borel probability measures on the valuation set endowed with the usual product topology, the notion of probability truth degrees of propositions in n-valued and [0,1]-valued Łukasiewicz propositional logics is introduced. Its basic properties are investigated, and in particular the integral representation theorem and the limit theorem for probability truth degree functions in the n-valued case are obtained. These results show that the notion of truth degree used in quantitative logic is just a particular case of Borel probability truth degrees, and that a more general quantitative model for uncertainty reasoning, based on Borel probability truth degrees, can then be established.
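
      Under the uniform product measure, the n-valued probability truth degree reduces to an average of a formula's truth value over all valuations. A minimal sketch (function names are ours, not the paper's; `luk_implies` encodes the standard Łukasiewicz implication):

```python
from fractions import Fraction
from itertools import product

def luk_implies(a, b):
    # Łukasiewicz implication: v(p -> q) = min(1, 1 - v(p) + v(q))
    return min(Fraction(1), 1 - a + b)

def truth_degree(formula, n_vars, n):
    """Probability truth degree of `formula` in n-valued Łukasiewicz logic
    under the uniform (product) Borel measure: the average truth value
    over all valuations on the value set {0, 1/(n-1), ..., 1}."""
    vals = [Fraction(i, n - 1) for i in range(n)]
    total = sum(formula(*v) for v in product(vals, repeat=n_vars))
    return total / Fraction(len(vals) ** n_vars)

# Truth degree of p -> q in 3-valued logic is 7/9; as n grows it
# approaches the [0,1]-valued (integral) value 5/6, illustrating
# the limit theorem.
deg3 = truth_degree(luk_implies, 2, 3)
```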

    • Research on Automated Sorting Algorithms Generation Based on PAR

      2012, 23(9):2248-2260. DOI: 10.3724/SP.J.1001.2012.04164

      Abstract:Sorting is a special kind of problem in computer science: the flexibility of its algorithm design tactics leads to a great diversity of sorting algorithms. Based on the formal method PAR (partition-and-recur), automated generation of sorting algorithms is studied. The algebraic properties of the sorting problem are described, generic type components and algorithm components are formally developed, and a domain-specific language and a formal algorithm generation model are designed. By replacing the generic identifiers with a few concrete operations, a series of known and unknown sorting algorithms, such as quick sort, heap sort, shell sort, and increment select sort, is automatically generated with the support of the enhanced program generation system. Through the super framework and underlying components, the reliability and productivity of domain-specific algorithm development are dramatically improved.
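
      The generation idea can be illustrated by a generic partition-and-recur template in which instantiating a single generic identifier yields different sorting algorithms (a toy sketch in the spirit of PAR, not the system itself):

```python
def par_sort(xs, partition):
    """Generic partition-and-recur sorting template: `partition` splits a
    non-trivial list into (left, mid, right) with every element of left
    <= each element of mid <= every element of right; recursion does the rest."""
    if len(xs) <= 1:
        return list(xs)
    left, mid, right = partition(xs)
    return par_sort(left, partition) + mid + par_sort(right, partition)

# Replacing the generic identifier `partition` with two concrete
# operations yields two classic algorithms.
def quick_partition(xs):   # -> quick sort
    p, rest = xs[0], xs[1:]
    return [x for x in rest if x < p], [p], [x for x in rest if x >= p]

def select_partition(xs):  # -> selection sort
    m = min(xs)
    rest = list(xs)
    rest.remove(m)
    return [], [m], rest
```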

    • 1/(1-1/k)-Optimal Algorithm for Regular Expression Grouping

      2012, 23(9):2261-2272. DOI: 10.3724/SP.J.1001.2012.04098

      Abstract:Dividing regular expression sets into multiple groups is an important step in solving the DFA state explosion problem. Previous grouping algorithms are heuristic or brute-force and yield poor grouping results. This paper analyzes the causes of state explosion and summarizes the conflict relationships among regular expressions of certain types. When conflicts are non-negative and independent, the optimal k-grouping problem for regular expression sets can be reduced to the maximum k-cut problem, which is NP-hard. Based on the idea of local search, a new grouping algorithm named GRELS is introduced to solve the problem efficiently; it is a 1/(1-1/k)-approximation for the maximum k-cut problem. Compared with previous grouping algorithms, GRELS yields the minimum number of states for the same number of groups and requires the least time to update grouping results when pattern sets change.
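
      A hedged sketch of the local-search idea (names and details are ours, not GRELS itself): treat regular expressions as vertices with pairwise conflict weights and repeatedly move each vertex to the group minimizing its internal conflict; any local optimum keeps at least (1 - 1/k) of the total weight on the cut.

```python
def local_search_grouping(conflicts, n, k, rounds=100):
    """Group n regular expressions into k sets, minimizing intra-group
    conflict (equivalently, maximizing the k-cut).  `conflicts[(i, j)]`
    is the non-negative conflict weight between expressions i and j."""
    group = [i % k for i in range(n)]

    def internal(v, g):
        # conflict weight vertex v would carry inside group g
        return sum(w for (a, b), w in conflicts.items()
                   if (a == v and group[b] == g) or (b == v and group[a] == g))

    for _ in range(rounds):
        moved = False
        for v in range(n):
            best = min(range(k), key=lambda g: internal(v, g))
            if internal(v, best) < internal(v, group[v]):
                group[v] = best
                moved = True
        if not moved:        # local optimum reached
            break
    return group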

    • Simulation and Research on Pre-mRNA Splicing in Analog-Cell

      2012, 23(9):2273-2284. DOI: 10.3724/SP.J.1001.2012.04121

      Abstract:Pre-mRNA splicing is a key process in the gene expression of eukaryotic cells, which includes two major procedures: the excision of introns and the joining of exons. This paper proposes an algorithm to simulate the pre-mRNA splicing process in the independently developed Analog-Cell. The results obtained with the algorithm are consistent with biological principles. Through simulation of the process, the study finds that the algorithm can make the introns form the lariat structure efficiently and accurately while joining the exons selectively. In this way, the Analog-Cell can complete the formation of the mature mRNA.

    • Two Strategies for Solving the Equal Sphere Packing Problem

      2012, 23(9):2285-2296. DOI: 10.3724/SP.J.1001.2012.04158

      Abstract:Based on the quasi-physical model, two heuristic strategies are proposed for the equal sphere packing problem. The fake sphere strategy guarantees that the results are rigorous. The serial symmetrical relocation strategy is designed to search for a denser feasible configuration starting from a local optimal configuration. On a personal computer with a Pentium E6500 2.93GHz CPU, the study densely packed up to 200 equal spheres in a spherical container and up to 150 equal spheres in a cubic container. The obtained results not only improve on the best known international records in quality but also greatly outnumber them. In particular, the study packed 68 equal spheres of radius 1 into a large sphere whose radius is smaller than 5, thus disproving a conjecture that a large sphere of radius 5 can contain at most 67 equal spheres of radius 1.
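
      The quasi-physical view can be illustrated by a feasibility "energy" that is zero exactly when the spheres neither overlap one another nor protrude from the container (an illustrative potential, not the paper's exact model):

```python
import math

def packing_violation(centers, r, R):
    """Sum of squared overlap/protrusion depths for equal spheres of
    radius r inside a spherical container of radius R centered at the
    origin.  Zero iff the configuration is a feasible packing; a
    quasi-physical method drives this energy toward zero."""
    e = 0.0
    for i, c in enumerate(centers):
        # protrusion of sphere i outside the container
        e += max(0.0, math.dist(c, (0.0, 0.0, 0.0)) - (R - r)) ** 2
        for d in centers[i + 1:]:
            # pairwise overlap depth
            e += max(0.0, 2 * r - math.dist(c, d)) ** 2
    return e
```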

    • Multiple Kernel Local Learning-Based Domain Adaptation

      2012, 23(9):2297-2310. DOI: 10.3724/SP.J.1001.2012.04240

      Abstract:Domain adaptation (or cross-domain) learning (DAL) aims to learn a robust target classifier for a target domain that has no or few labeled samples by leveraging labeled samples from the source domain (or auxiliary domain). The key challenge in DAL is how to minimize the distribution distance between different domains. To address the considerable difference between the feature distributions of different domains, this paper proposes a three-stage multiple kernel local learning-based domain adaptation (MKLDA) scheme: 1) MKLDA simultaneously learns a reproducing multiple kernel Hilbert space and an initial support vector machine (SVM) by minimizing both the structural risk functional and the maximum mean discrepancy (MMD) between different domains, thus implementing the initial separation of patterns from the target domain; 2) employing the idea of local learning, MKLDA predicts the label of each data point in the target domain based on its neighbors and their labels in the kernel Hilbert space learned in 1); and 3) MKLDA learns a robust kernel classifier to classify unseen data in the target domain using the training data well predicted in 2). Experimental results on real-world problems show that the proposed approach outperforms or is comparable to related approaches.
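
      The MMD term minimized in stage 1 has a simple empirical form; a standalone sketch with a Gaussian kernel on one-dimensional samples (the scheme itself learns the kernel, which is omitted here):

```python
import math

def mmd_sq(xs, ys, gamma=1.0):
    """Biased empirical estimate of the squared maximum mean discrepancy
    (MMD) between two samples under a Gaussian kernel: the distance
    between the samples' mean embeddings in the kernel's Hilbert space."""
    k = lambda a, b: math.exp(-gamma * (a - b) ** 2)
    m, n = len(xs), len(ys)
    return (sum(k(a, b) for a in xs for b in xs) / m ** 2
            + sum(k(a, b) for a in ys for b in ys) / n ** 2
            - 2.0 * sum(k(a, b) for a in xs for b in ys) / (m * n))
```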

    • Qualitative Spatial Model Used for Processing Spatial Information in Commonsense

      2012, 23(9):2311-2322. DOI: 10.3724/SP.J.1001.2012.04136

      Abstract:In spatial information processing, spatial information is usually combined with various spatial relationships, which are often dynamic. To represent and reason about these complex spatial relationships effectively, a novel model, the topology-direction-size calculus (TDSC), is proposed, which integrates qualitative spatial relations of multiple aspects. Then a framework for dealing with dynamic qualitative spatial relations is proposed. First, a base relation set integrating multiple aspects is constructed. Next, an algorithm for constructing the composition table is proposed for reasoning, which allows the reasoning results of the original models to still be used in the new model. To handle dynamic spatial relations, the neighborhood partition graph (NPG) is proposed, and an algorithm is given to generate the NPG. Using this algorithm, one can obtain the new model's NPG easily. Finally, the framework for handling dynamic spatial relations, based on the new model TDSC and its NPG, is proposed. An example shows that the framework is correct and effective.

    • On the Translation from Quantified Modal Logic into the Counterpart Theory

      2012, 23(9):2323-2335. DOI: 10.3724/SP.J.1001.2012.04163

      Abstract:The counterpart theory is a theory of first-order logic. Lewis interprets modal claims by using a translation from quantified modal logic into the counterpart theory. However, Lewis's translation does not preserve the unsatisfiability of formulas. In this paper, an extended semantics for quantified modal logic is introduced, and the corresponding connection between models of the quantified modal logic and models of the counterpart theory is given. Based on the semantics, a faithful and full translation from quantified modal logic to the counterpart theory, which preserves the satisfiability and the unsatisfiability of formulas, is also established. Furthermore, since the counterpart theory is sound and complete, and the soundness and the completeness are preserved by the faithful and full translation, the quantified modal logic is also sound and complete.

    • Support Vector Regression Optimization Problem without Bias

      2012, 23(9):2336-2346. DOI: 10.3724/SP.J.1001.2012.04150

      Abstract:To study the role of the bias term in support vector regression (SVR), primal and dual optimization formulations of the support vector regression optimization problem without bias (NBSVR) are first proposed, the necessary condition for the global optima of the NBSVR formulation is presented, and it is proved that the dual problem of SVR yields only sub-optimal solutions of the NBSVR dual problem. An active set algorithm for the dual optimization formulation without bias is then proposed, and the linear convergence of the proposed algorithm is proved. Experimental results on 21 benchmark datasets show that, in the solution space of the dual problem, SVR can only obtain sub-optimal solutions of NBSVR, and the root mean square error (RMSE) of NBSVR tends to be lower than that of SVR. The training time of NBSVR is not only shorter than that of SVR but also less sensitive to the kernel parameter.

    • Research on the Translation Model Combination Based on Hypergraph

      2012, 23(9):2347-2357. DOI: 10.3724/SP.J.1001.2012.04165

      Abstract:System combination is usually performed as post-processing, but this paper introduces a translation model combination that combines two mainstream translation models (Hiero and MaxEnt-based BTG) during decoding. To address the spurious ambiguity and consensus problems, the paper introduces a new decoding method. In experiments, the translation model combination is significantly better than either member model, and the new decoding method brings further improvement.

    • Automatic Optimization Model for Virtual Human’s Working Behaviors

      2012, 23(9):2358-2373. DOI: 10.3724/SP.J.1001.2012.04174

      Abstract:Simulation of humanoid adaptive behaviors is a prerequisite of ergonomic evaluation. To overcome the shortcomings of existing technologies, an automatic optimization model of a virtual human's working behaviors is presented based on multi-agent cooperative games. In this model, the humanoid adaptive behaviors in a working environment are defined as a multi-objective optimization problem, and the definitions of mannequin workspace and behavior elements are proposed for the discretization of human behaviors. A simulation algorithm based on fuzzy multi-objective game theory is presented to solve this model. With fuzzy multi-objective Nash negotiation theory, a gradient-ascent method is adopted to search for Pareto-optimal solutions of the human's working posture. Case studies show that a comfortable working posture can be derived by the new strategy even with insufficient data, and that the method adapts well to engineering domains.

    • ε-Differential Evolution Algorithm for Constrained Optimization Problems

      2012, 23(9):2374-2387. DOI: 10.3724/SP.J.1001.2012.04149

      Abstract:Differential evolution algorithms usually solve constrained optimization problems (COPs) by the feasibility-first rule, but this method cannot exploit the information carried by infeasible solutions in the population. The ε-DE algorithm is designed to use the information of infeasible solutions. By designing new comparison rules, infeasible solutions with better objective function values are made full use of in the evolution process. The concept of population constraint relaxation degree is introduced into the comparison rules. During the initial phase of evolution, infeasible solutions with better objective function values near the boundary of the feasible region are incorporated into the population. As the evolutionary generation increases, the decrease of the population constraint relaxation degree reduces the number of infeasible solutions in the population. Once the population constraint relaxation degree reaches 0, the population is composed entirely of feasible solutions. In addition, an improved DE algorithm is chosen as the search algorithm, so faster convergence is obtained. Simulation results on 13 benchmark functions show that ε-DE is among the most competitive DE algorithms for solving COPs.
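
      The new comparison rule can be sketched as an ε-level comparison over (objective, violation) pairs (a common construction for ε-constrained methods; names are ours, not necessarily the paper's exact rule):

```python
def eps_better(a, b, eps):
    """ε-level comparison of individuals given as (objective, violation)
    pairs.  While eps > 0, slightly infeasible solutions with good
    objectives can win; as eps shrinks to 0 over the generations the
    rule degenerates to the feasibility-first rule."""
    (fa, va), (fb, vb) = a, b
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb   # both "eps-feasible": compare objectives
    return va < vb       # otherwise: smaller constraint violation wins
```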

    • Technique of Evaluating AS Importance Based on Preferred Route

      2012, 23(9):2388-2400. DOI: 10.3724/SP.J.1001.2012.04233

      Abstract:Evaluating the importance of a node is valuable for improving network survivability. Due to the complexity of inter-domain policies, existing evaluation methods based on static topology cannot reflect the real importance of autonomous systems (ASes) in routing. This paper is the first study to evaluate AS importance through the best paths between ASes from the viewpoint of dynamic routing: the more best paths pass through an AS, the more important it is. An evaluation method based on preferred routes is proposed, whose complexity is O(l×nm) and whose evaluated importance is more accurate. The time complexity is the same as the best complexity of the evaluation methods based on static topology. To verify its validity, the preferred route method is compared with two representative methods based on static topology, the degree method and the stress centrality method, on real routing data. The results show that the preferred route method can efficiently discover important nodes with few connections. Moreover, the evaluated importance is closer to the real AS importance than with the other two methods.
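
      The counting principle — the more best paths traverse an AS, the more important it is — can be sketched directly (a toy illustration of the principle, not the paper's full O(l×nm) method):

```python
from collections import Counter

def as_importance(best_paths):
    """Score each AS by the number of best paths on which it appears
    as an intermediate (transit) hop; endpoints are excluded since
    they originate or terminate traffic rather than carry it."""
    score = Counter()
    for path in best_paths:
        score.update(path[1:-1])
    return score
```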

    • Broadcast Medium Access for Opportunistic Routing with Network Coding in Wireless Mesh Networks

      2012, 23(9):2401-2415. DOI: 10.3724/SP.J.1001.2012.04138

      Abstract:Opportunistic routing with network coding (NCOR) has emerged as a promising approach to improve both throughput and reliability in lossy wireless multi-hop networks. This new routing paradigm, which is based on the multi-user diversity advantage of wireless broadcast links and the erasure coding property of random linear network coding, brings opportunities and challenges for broadcast MAC designs. This paper studies the opportunistic broadcast channel access problem. Appealing to optimal stopping theory, the study proposes an access strategy that achieves the maximal average effective rate, a trade-off between access delay and instantaneous delivery ability. By extending IEEE 802.11 DCF, the study further presents a MAC protocol, O-BCast, to execute the proposed strategy. Simulation results show that O-BCast achieves a notable improvement of NCOR's end-to-end throughput and is adaptive to various network loads.

    • Proxy Signature in the Standard Model: Constructing Security Model and Proving Security

      2012, 23(9):2416-2429. DOI: 10.3724/SP.J.1001.2012.04246

      Abstract:Current proxy signature schemes have not been proved secure in a complete provable security model for proxy signature. In this paper, we present a complete provable security model for proxy signature based on Boldyreva's provable security model, and a new identity-based proxy signature scheme, based on Paterson's scheme, is proposed in the standard model. In the complete provable security model for proxy signature, the new scheme is proved to have existential identity-based proxy signature unforgeability under an adaptive chosen message attack, with a security reduction to the computational Diffie-Hellman problem (CDHP). Compared with other proxy signature schemes based on public key cryptosystems in the standard model, the new scheme introduces the concept of identity and is more secure.

    • Several Properties of Binary Diffusion Layers for Block Cipher

      2012, 23(9):2430-2437. DOI: 10.3724/SP.J.1001.2012.04137

      Abstract:0-1 matrices are often used in the design of diffusion structures in block ciphers. This paper first proves that the branch number of a matrix over GF(2^n) does not change when it is redefined over the extension field GF(2^(mn)). With this result, the study reinforces the proof given by Choy et al. on the upper bound of the branch number of binary matrices over GF(2^n). This paper constructs a class of invertible binary matrices of order 8 with the largest branch number, proposes a class of matrices whose differential branch number equals their linear branch number, and searches out many matrices and involution matrices of order 16 with the optimal branch number under this structure.

    • Analysis and Avoidance of Energy Hole Problem in Heterogeneous Wireless Sensor Networks

      2012, 23(9):2438-2448. DOI: 10.3724/SP.J.1001.2012.04167

      Abstract:Since nodes near the sink carry the data load of nodes farther away in wireless sensor networks, their energy consumption is higher, and an energy hole easily forms around the sink. For cluster-based heterogeneous wireless sensor networks, in which nodes with higher initial energy serve as cluster heads and nodes with lower initial energy serve as common nodes, this paper proposes an energy hole avoidance strategy based on unequal cluster radii. The core idea of the strategy is that the cluster radius near the sink is smaller and the cluster radius far from the sink is bigger, so that more cluster heads with higher initial energy are deployed near the sink, thereby weakening the impact of the energy hole and balancing the energy consumption. This paper aims at mitigating the energy hole by deploying fewer nodes under the network lifetime constraint, and gives a detailed calculation and optimization method for the unequal cluster radius values. Theoretical analysis and experimental results show that the strategy greatly improves network lifetime and performance, and can serve as good guidance for the deployment of heterogeneous wireless sensor networks.
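
      The core idea — smaller clusters near the sink, larger ones farther away — can be sketched with a simple radius rule (the linear form below is purely illustrative; the paper derives the radius values from the lifetime constraint):

```python
def cluster_radius(d, d_max, r_min, r_max):
    """Cluster radius that grows with the cluster head's distance d from
    the sink, so clusters near the sink stay small and more high-energy
    heads share the heavy relay load there."""
    return r_min + (r_max - r_min) * d / d_max
```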

    • Identity Traceable Attribute-Based Signature Scheme

      2012, 23(9):2449-2464. DOI: 10.3724/SP.J.1001.2012.04172

      Abstract:In an attribute-based signature (ABS) scheme, the signer's identity remains anonymous. To prevent the signer from abusing this property, Escala, Herranz, and Morillo proposed an identity traceable attribute-based signature scheme (EHM-ABS). Their scheme uses an automorphic signature and applies non-interactive witness indistinguishable (NIWI) proofs many times. Inspired by Boyen and Waters' identity-based compact group signature scheme, this study presents an identity traceable ABS scheme in the standard model. When the attribute private key is issued, the signer's identity is embedded in it, and by using NIWI proofs of the encryption of each bit of the identity, the scheme achieves traceability. Compared with EHM-ABS, this scheme reduces the number of NIWI proofs when the size of the claimed attribute set is larger than a quarter of the bit length of the identity, and it does not need an automorphic signature. The security of the proposed scheme is based on the subgroup decision and computational Diffie-Hellman assumptions.

    • Access Authentication for HMIPv6 with Node Certificate and Identity-Based Hybrid Scheme

      2012, 23(9):2465-2480. DOI: 10.3724/SP.J.1001.2012.04188

      Abstract:Access authentication is the basic security requirement of hierarchical mobile IPv6 (HMIPv6) network. A mutual access authentication scheme is proposed in this paper based on hierarchical authentication framework as well as node certificate and identity-based hybrid approach. The scheme adopts identity-based cryptography to simplify the cumbersome key management of PKI. Node certificate is introduced to authenticate entity, which eliminates message interactions between home network and access network. A mutual authentication protocol is achieved using proposed hierarchical signature mechanism. The protocol can also be extended to support access authentication in multi-level HMIPv6 network. Performance and security analysis demonstrates that the proposed scheme outperforms other identity-based proposals in terms of efficiency and security.

    • Point-in-Polygon Test Method Based on Center Points of Grid

      2012, 23(9):2481-2488. DOI: 10.3724/SP.J.1001.2012.04087

      Abstract:The paper proposes an efficient point-in-polygon test method based on uniform grids. In preprocessing, it constructs uniform grids and processes the center points of the cells in sequence to determine whether they are inside the polygon, where center points with known inclusion status are used to quickly determine their adjacent center points. When performing an inclusion query, the method first finds the cell that contains the query point, then forms a line segment from the query point to the cell's center point and counts the edges crossed by the line segment to make the determination. As both the preprocessing and the inclusion tests make use of known information to determine the center points or query points, the computation can run locally for acceleration, and the new method runs much faster than existing methods. In most cases, it reduces the preprocessing time complexity of grid-based methods from O(N^(3/2)) to O(N), where N is the number of polygon edges. Meanwhile, the new method can efficiently handle non-manifold polygons with self-intersecting or overlapping edges. Experimental results show that, compared with other methods based on uniform grids, the new algorithm speeds up the preprocessing by several times and the query computation by one to several dozen times, and that it runs faster than the method based on convex decomposition, which has the lowest complexity, O(logN), for point-in-polygon tests.
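
      The query step can be sketched with a standard segment-crossing parity test against the cell's center point, whose inclusion status is already known (general position assumed; helper names are ours):

```python
def _ccw(a, b, c):
    # twice the signed area of triangle (a, b, c)
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _crossings(p, q, edges):
    # polygon edges properly crossed by the open segment p-q
    return sum(1 for a, b in edges
               if _ccw(p, q, a) * _ccw(p, q, b) < 0
               and _ccw(a, b, p) * _ccw(a, b, q) < 0)

def query_inside(p, center, center_inside, edges):
    """Point-in-polygon query answered from the center point of the grid
    cell containing p: the known status of the center flips once for
    every polygon edge crossed on the short segment from p to it."""
    return center_inside ^ (_crossings(p, center, edges) % 2 == 1)
```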

    • Vessel Segmentation Under Non-Uniform Illumination: A Level Set Approach

      2012, 23(9):2489-2499. DOI: 10.3724/SP.J.1001.2012.04095

      Abstract:In this paper, a new level set segmentation model is proposed that couples geometric information, edge information, and region information. The model is aimed at vessel segmentation in non-uniform images with weak object boundaries. First, a multiscale filter with a Hessian matrix, which has an anisotropic character, is used to identify the direction of vessels. Second, the edge information is embedded into an energy functional by a fast edge integral method with a Laplacian zero-crossing algorithm. A new level set segmentation model based on information of geometric structure, edge, and region is thus constructed. This new model can segment vessels exactly in images with uneven grayscale. Compared with the GAC and CV segmentation models and other improved models based on the CV model, the method in this paper has better accuracy and robustness.

    • Image Annotation by the Multiple Kernel Learning with Group Sparsity Effect

      2012, 23(9):2500-2509. DOI: 10.3724/SP.J.1001.2012.04154

      Abstract:Since different kinds of heterogeneous features (such as color, shape, and texture) in images have different intrinsic discriminative power for image understanding, this paper proposes multiple kernel learning with group sparsity (MKLGS) to select groups of discriminative features for image annotation, effectively utilizing those heterogeneous visual features. Given each image label, the MKLGS method embeds the nonlinear image data with discriminative features into a Hilbert space, and then utilizes the kernel function in the Hilbert space and group LASSO to select groups of discriminative features. Finally, a classification model is trained for image annotation. Experiments show that the proposed MKLGS achieves better performance than other image annotation algorithms.

    • Fragile Watermarking Algorithm for JPEG Images Based on Neighborhood Comparison and its Performance Analysis

      2012, 23(9):2510-2521. DOI: 10.3724/SP.J.1001.2012.04169

      Abstract:To improve tamper detection performance and reconcile the conflict between security and invisibility, this paper proposes a fragile watermarking algorithm for JPEG images, in which the authenticity of image blocks is determined by neighborhood comparison. The scheme divides the original image into 8×8 image blocks. For each block, four watermark bits are generated from the DCT coefficients to be protected. Next, the watermark bits are randomly embedded in the least significant bit (LSB) of DCT coefficients with smaller quantization steps in four other blocks. The authenticity of each block is determined by comparing the number of inconsistent image blocks in the eight-neighborhood of the block with that of its four corresponding mapping blocks. The paper then derives the probabilities of false acceptance and false rejection under general tampering and collage attack, and validates the theoretical analysis by statistical experiments. Theoretical analysis and statistical experiments show that comparing the number of inconsistent image blocks in the eight-neighborhood of each block with its four corresponding mapping blocks improves tamper detection performance, and that embedding watermarks in the LSB of DCT coefficients with smaller quantization steps efficiently resolves the conflict between the number of protected DCT coefficients and invisibility.
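
      The embedding step can be sketched as LSB substitution on selected quantized coefficients (a flat-list toy interface; the paper's block mapping, DCT, and quantization details are omitted):

```python
def embed_lsb(coeffs, bits, positions):
    """Write watermark bits into the least significant bit of the
    quantized DCT coefficients at the given positions (the paper embeds
    four bits per 8x8 block into coefficients of four other blocks)."""
    out = list(coeffs)
    for bit, pos in zip(bits, positions):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract_lsb(coeffs, positions):
    # read the embedded bits back for the neighborhood-consistency check
    return [coeffs[pos] & 1 for pos in positions]
```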

    • Optimizing Input Efficiency for Graphical User Interface Using Adaptive Cursors

      2012, 23(9):2522-2532. DOI: 10.3724/SP.J.1001.2012.04239

      Abstract:Improving the input efficiency of graphical user interfaces (GUIs) is an important issue in human-computer interaction research. Existing research covers two aspects, pointing techniques and adaptive user interfaces: the former manipulates either the visual representation or the controlling method of a cursor, while the latter adjusts the layout of widgets. However, both approaches suffer drawbacks. This paper analyzes the operations on GUIs and presents a quantitative model to evaluate their input efficiency. Based on this metric, the study then proposes a novel approach, the adaptive cursor, which adaptively applies pointing techniques to a predicted set of widgets to enable fast access. This approach avoids the extra cognitive cost caused by adaptive user interfaces, which frequently adjust the widget layout, and it extends previous pointing techniques that applied only to sparse layouts. To evaluate its usability, the study implements the adaptive cursor technique on Visual Studio, whose interface contains a considerable number of controls. Experimental results show that the adaptive cursor can save up to 27.7% of pointing time.

Contact
  • Journal of Software
  • Sponsored by: Institute of Software, CAS, China
  • Postal code: 100190
  • Phone: 010-62562563
  • Email: jos@iscas.ac.cn
  • Website: https://www.jos.org.cn
  • ISSN 1000-9825
  • CN 11-2560/TP
  • Domestic price: 70 RMB
Copyright: Institute of Software, Chinese Academy of Sciences
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing 100190