WANG Qing, WU Shu-Jian, LI Ming-Shu
2008, 19(7):1565-1580.
Abstract:Software defect prediction has been an active area of software engineering since it emerged in the 1970s. It plays a very important role in the analysis of software quality and the balancing of software cost. This paper investigates and discusses the motivation, evolution, solutions and challenges of software defect prediction technologies, and it also categorizes, analyzes and compares representative prediction technologies. Some case studies of software defect distribution models are given to aid understanding.
ZOU Qiong, WU Ming, HU Wei-Wu, ZHANG Long-Bing
2008, 19(7):1581-1589.
Abstract:Access to heap data is a major source of overhead for Java applications. VM (virtual machine) researchers use prefetching or garbage collection to improve performance, guided by collected information about heap accesses. The common collection methods are sampling and program instrumentation; however, they cannot achieve fine granularity and low overhead simultaneously. To satisfy both requirements, this paper proposes an instrument-analysis framework for adaptive prefetch optimization in the JVM, which instruments code to collect profiling information and, guided by the feedback, dispatches code and performs prefetching. The experimental results show that it achieves up to 18.1% speedup on the industry-standard benchmarks SPEC JVM98 and DaCapo on a Pentium 4, while the overhead is less than 4.0%.
MA Jian-Gang, HUANG Tao, XU Gang, WANG Jin-Ling, YE Dan
2008, 19(7):1590-1602.
Abstract:The publish/subscribe (pub/sub) system has wide potential application due to its asynchronous, many-to-many and loosely-coupled communication properties. However, existing pub/sub systems cannot satisfy the delay requirements of applications in dynamic distributed computing environments. To solve this problem, this paper extends the syntax of the pub/sub system, sets up a delay model, proposes an earning-based time-constraint assurance technique for distributed pub/sub systems, and puts forward a scheduling algorithm named MTEP (maximum total earning priority). The algorithm satisfies the specified delay requirements of both publishers and subscribers, makes efficient use of available bandwidth based on the price and penalty information agreed upon with subscribers, and adapts to dynamic network environments. Experimental results show that the strategy enables subscribers to receive many more valid events and improves system earnings significantly compared with the classical FCFS, fixed-priority and least-remaining-time-priority algorithms.
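The abstract does not give MTEP's exact formulation; the Python sketch below is only a plausible illustration of an earning-based dispatch step (the class, the earning rule and all names are assumptions, not the paper's design): deliveries carry the agreed price, penalty and deadline, and the broker serves the delivery that currently yields the highest earning.

```python
class Delivery:
    """A pending event delivery with an agreed price, penalty and deadline."""
    def __init__(self, event, subscriber, price, penalty, deadline):
        self.event, self.subscriber = event, subscriber
        self.price, self.penalty, self.deadline = price, penalty, deadline

    def earning(self, now):
        # Earning if dispatched now: the agreed price when the deadline can
        # still be met, otherwise the (negative) penalty for a late event.
        return self.price if now <= self.deadline else -self.penalty

def mtep_pick(pending, now):
    """Pick the pending delivery whose dispatch yields the maximum earning."""
    return max(pending, key=lambda d: d.earning(now))
```

In a real broker this selection would run whenever output bandwidth becomes available, so the earnings accumulated over time are maximized.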
LIU Xian-Hua, YANG Yang, ZHANG Ji-Yu, CHENG Xu
2008, 19(7):1603-1612.
Abstract:Basic-block reordering is a compiler optimization technique that reduces branch penalties and I-cache miss costs by rearranging basic blocks in memory. A new basic-block reordering algorithm based on structural analysis is presented. The algorithm takes the architectural branch cost model and the basic-block layout cost model into consideration, uses the execution frequencies of control-flow edges obtained from profile information, builds a local structural optimization policy and applies it to reorder the program's basic blocks. The algorithm is implemented on the UniCore architecture; experimental results show that it achieves better program performance with a complexity of only O(n log n).
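The structural-analysis algorithm itself is not detailed in the abstract. As background only, the sketch below shows a simpler, Pettis-Hansen-style greedy chaining (not the paper's method) of how profiled edge frequencies are typically used to make hot branches fall through.

```python
def greedy_block_layout(blocks, edge_freq):
    """
    Greedily chain basic blocks along the hottest control-flow edges,
    so the most frequent branches become fall-throughs.
    blocks: iterable of block ids; edge_freq: {(src, dst): frequency}.
    """
    succ, pred = {}, {}
    # Consider edges from hottest to coldest; accept an edge only if it
    # extends a chain (each block gets at most one layout successor and
    # one layout predecessor) without closing a cycle.
    for (src, dst), _ in sorted(edge_freq.items(), key=lambda e: -e[1]):
        if src in succ or dst in pred or src == dst:
            continue
        node = dst
        while node in succ:          # would src -> dst close a cycle?
            node = succ[node]
        if node == src:
            continue
        succ[src], pred[dst] = dst, src
    # Emit chains, starting from blocks that are not a layout successor.
    layout = []
    for b in blocks:
        if b in pred:
            continue
        while b is not None:
            layout.append(b)
            b = succ.get(b)
    return layout
```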
YING Wei-Qin, LI Yuan-Xiang, SHEU Phillip C-Y
2008, 19(7):1613-1622.
Abstract:The thermodynamical genetic algorithm (TDGA) simulates the competition between energy and entropy in annealing in order to reconcile the conflict between selective pressure and population diversity in genetic algorithms. However, high computational cost restricts the application of TDGA. To improve the computational efficiency, a rating-based entropy (RE) measurement method is proposed, which measures fitness dispersal at low computational cost. A component thermodynamical replacement (CTR) rule is then introduced to reduce the complexity of replacement, and it is proved that the CTR rule approximately follows the steepest-descent direction of the population free energy. Experimental results on 0-1 knapsack problems show that the RE method and the CTR rule not only maintain the excellent performance and stability of TDGA, but also remarkably improve its computational efficiency.
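The abstract does not define rating-based entropy precisely; the following sketch is a guess at the general idea (the bucketing scheme and parameter names are assumptions): diversity is measured as the Shannon entropy of fitness rating levels, which is cheap to compute and can feed a free-energy criterion of the form F = E - T*S.

```python
import math

def rating_based_entropy(fitnesses, num_levels=10):
    """
    Bucket individuals into fitness rating levels and return the Shannon
    entropy of the level distribution: a spread-out population yields high
    entropy, a converged one low entropy.
    """
    lo, hi = min(fitnesses), max(fitnesses)
    if hi == lo:
        return 0.0
    counts = [0] * num_levels
    for f in fitnesses:
        level = min(int((f - lo) / (hi - lo) * num_levels), num_levels - 1)
        counts[level] += 1
    n = len(fitnesses)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

# A free-energy style criterion could then trade mean energy against diversity:
# F = sum(fitnesses) / len(fitnesses) - temperature * rating_based_entropy(fitnesses)
```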
2008, 19(7):1623-1634.
Abstract:Action recognition is a popular and important research topic in computer vision. However, it is challenging in the face of viewpoint variation. So far, most research in action recognition remains rooted in view-dependent representations. Some view-invariant approaches have been proposed, but most of them suffer from weaknesses such as a lack of rich information for recognition, or a dependency on robust meaningful feature detection or point correspondence. To perform viewpoint- and subject-independent action recognition, this paper proposes a viewpoint-insensitive representation called "Envelop Shape", which is easy to acquire from silhouettes using two orthogonal cameras. It makes full use of the two cameras' silhouettes to remove the influence of the human body's vertical rotation, which is often the primary source of viewpoint variation. With the help of the "Envelop Shape" representation and a Hidden Markov Model, encouraging results on action recognition independent of subject and viewpoint are obtained. The results indicate that the "Envelop Shape" representation contains enough discriminating features for action recognition. An extension of "Envelop Shape" is also proposed that relaxes the restrictions on camera configuration, which effectively increases its practical value.
WU Jia-Ji, JIAO Li-Cheng, SHI Guang-Ming, WANG Lei
2008, 19(7):1635-1643.
Abstract:This paper proposes a shape-adaptive wavelet coding algorithm for a known object in the diagnostic region of three-dimensional medical images. The new algorithm applies the shape-adaptive transform only to the pixels inside the object for decorrelation. After transformation, the number of coefficients equals the number of pixels inside the object area. To achieve a fast and lossless transform, a novel shape-adaptive wavelet transform based on the lifting scheme for arbitrarily shaped objects is proposed. By analyzing the locations of invalid transformed coefficients, the paper also proposes a modified OB-3DSPECK (object-based set partitioned embedded block coder) method that cancels the symbol output for invalid blocks and for coefficients outside the object; specifically, only two types of symbols are output to the arithmetic coder. For the object region of three-dimensional medical images, the proposed algorithm supports lossy-to-lossless embedded encoding and decoding. Experimental results show that the proposed algorithm outperforms OB-3DSPECK by 0.5dB in average SNR. Furthermore, because the output of one symbol type is reduced, arithmetic coding becomes optional.
WU Jun, WANG Chong-Jun, LUO Bin, CHEN Shi-Fu
2008, 19(7):1644-1653.
Abstract:In an agent architecture, an active goal is a functionally self-contained entity with independent control flow. The syntax of active goals and the operational semantics of active-goal execution are presented. Furthermore, a BDI agent architecture driven by active goals is formally specified. Unlike some earlier BDI agent architectures, goals are explicitly represented in the architecture as active entities. Parallel goals, which are considered an important aspect of the rational behavior of an agent, are supported at the architecture level in a very natural fashion. Moreover, the explicit definition of goals makes it convenient to reconsider commitments for agents situated in dynamic environments.
2008, 19(7):1654-1665.
Abstract:In order to collect sufficiently many samples while keeping them distinguishable in online sketchy symbol recognition, this paper proposes a detector-generation based clonal selection algorithm and an evaluation method. The algorithm generates detectors with an r-contiguous-bits unchanged rule (r-CBUR) and p-receptor editing to search a wide feature space and avoid local convergence. Handwritten Chinese characters are selected as experimental samples, and the influence of the training parameters is analyzed. The experimental results show improvements in both the training process and the classification results of sketchy symbol recognition.
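For readers unfamiliar with the terminology, the sketch below shows the classic r-contiguous-bits matching rule from artificial immune systems, which the r-CBUR rule builds on; it is standard background, not the paper's r-CBUR variant, whose "unchanged" behavior during cloning is not reproduced here.

```python
def r_contiguous_match(detector, sample, r):
    """
    Classic r-contiguous-bits matching: a detector matches a sample if the
    two strings agree on at least r contiguous positions. (The paper's
    r-CBUR variant additionally keeps such a region unchanged during
    cloning; that detail is not shown here.)
    """
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

# Example: '1011001' and '1111001' share the 4 contiguous symbols '1100'.
assert r_contiguous_match('1011001', '1111001', 4)
```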
WEN Gui-Hua, JIANG Li-Jun, WEN Jun
2008, 19(7):1666-1673.
Abstract:Locally linear embedding is a very competitive nonlinear dimensionality reduction method, with good representational capacity for a broad range of manifolds and high computational efficiency. However, such methods assume that the whole data manifold is evenly distributed and therefore determine the neighborhood of every point with the same neighborhood size. Accordingly, they fail to handle well most real problems, whose data are unevenly distributed. This paper presents a new approach that adopts the general conceptual framework of Hessian locally linear embedding, so as to guarantee its correctness in the setting of local isometry to an open connected subset, but dynamically determines the local neighborhood size of each point. The approach estimates the approximate geodesic distance between any two points by the shortest path in the local neighborhood graph, and then determines the neighborhood size of each point from the relationship between its local estimated geodesic distance matrix and its local Euclidean distance matrix. The approach has clear geometric intuition as well as better performance and stability on sparsely sampled or noise-contaminated data sets, which are often unevenly distributed. Experiments on benchmark data sets validate the proposed approach.
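The exact rule relating the geodesic and Euclidean distance matrices is not given in the abstract; the following Python sketch (the shrinking rule, the tolerance `tol` and all parameters are assumptions) only illustrates the general idea of choosing a per-point neighborhood size by comparing graph-estimated geodesic distances with Euclidean distances.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def adaptive_neighborhood_sizes(X, k_max=12, k_min=4, tol=1.2):
    """
    Pick a per-point neighborhood size: where the graph-estimated geodesic
    distance to a candidate neighbor greatly exceeds its Euclidean distance,
    the manifold is curved or sparsely sampled there, so k is reduced.
    """
    D = cdist(X, X)                           # pairwise Euclidean distances
    n = len(X)
    idx = np.argsort(D, axis=1)[:, 1:k_max + 1]   # k_max nearest neighbors
    rows = np.repeat(np.arange(n), k_max)
    W = csr_matrix((D[rows, idx.ravel()], (rows, idx.ravel())), shape=(n, n))
    G = shortest_path(W, directed=False)      # approximate geodesic distances
    k_of = np.full(n, k_min)
    for i in range(n):
        k = k_max
        # shrink k while the farthest candidate looks geodesically distorted
        while k > k_min and G[i, idx[i, k - 1]] > tol * D[i, idx[i, k - 1]]:
            k -= 1
        k_of[i] = k
    return k_of
```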
2008, 19(7):1674-1682.
Abstract:This paper proposes a stereo vision cooperative algorithm for high-quality dense disparity mapping. The algorithm iteratively performs local adaptive aggregation and inhibitive magnification based on morphological similarity with adaptive weights, and effectively generates high-quality dense disparity maps. The paper also extends the cooperative algorithm to a trinocular stereo vision system. By rebuilding the camera coordinate system, the trinocular images are rectified, and the support area and trinocular inhibition area are established in disparity space based on the continuity and uniqueness constraints. Experimental results show that the trinocular stereo vision cooperative algorithm can generate accurate dense disparity maps, and occlusions in multiple baseline directions can also be detected. The algorithm is especially suitable for stereo vision systems built from multiple inexpensive cameras to realize high-quality dense disparity mapping without additional hardware or software.
LEI Xiao-Feng, XIE Kun-Qing, LIN Fan, XIA Zheng-Yi
2008, 19(7):1683-1692.
Abstract:K-Means is the most popular clustering algorithm, but it converges to one of numerous local minima, which makes it very sensitive to the initial representatives. Much research has been done to overcome this sensitivity of the K-Means algorithm. In contrast, this paper proposes a novel clustering algorithm called K-MeanSCAN that exploits the local optimality and sensitivity of K-Means. The core idea is to build connectivity between sub-clusters based on multiple K-Means clustering results, which differ from one another precisely because of the local optimality and sensitivity of K-Means. A weighted connected graph of the sub-clusters is then constructed from this connectivity, and the sub-clusters are merged by graph search. Theoretical analysis and experiments show that K-MeanSCAN outperforms existing algorithms in clustering quality and efficiency.
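The connectivity measure and merge rule are not specified in the abstract; the sketch below is a hypothetical illustration of the overall flow (Jaccard overlap and union-find merging are assumptions standing in for the paper's weighted graph search): several differently-seeded K-Means runs produce sub-clusters, strongly overlapping sub-clusters are connected, and connected groups become the final clusters.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def k_meanscan(X, k_sub=10, runs=5, overlap_thresh=0.3):
    """Merge sub-clusters from several K-Means runs via their connectivity."""
    # 1. Collect sub-clusters from several differently-seeded K-Means runs.
    subclusters = []
    for seed in range(runs):
        labels = KMeans(n_clusters=k_sub, n_init=1, random_state=seed).fit_predict(X)
        subclusters += [set(np.where(labels == c)[0]) for c in range(k_sub)]
    # 2. Connect sub-clusters whose point overlap (Jaccard) is strong enough.
    parent = list(range(len(subclusters)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i
    for i, j in combinations(range(len(subclusters)), 2):
        a, b = subclusters[i], subclusters[j]
        if len(a & b) / len(a | b) >= overlap_thresh:
            parent[find(i)] = find(j)
    # 3. Merge connected sub-clusters and label every point.
    final = np.empty(len(X), dtype=int)
    roots = {}
    for i, sc in enumerate(subclusters):
        cluster_id = roots.setdefault(find(i), len(roots))
        for p in sc:
            final[p] = cluster_id
    return final
```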
2008, 19(7):1693-1706.
Abstract:EPON (Ethernet-based passive optical network) is an optical-fiber access technology and is becoming a primary choice for the next-generation access network. However, the polling mechanism of IEEE 802.3ah EPON has low bandwidth efficiency: much of the available bandwidth is wasted as UWR (unused window remainder), USR (unused slot remainder), UQR (unused queue remainder) and UPR (unused package remainder) produced by its DBA (dynamic bandwidth allocation) algorithm. IPACT (interleaved polling with adaptive cycle time) provides a new polling mechanism with high bandwidth efficiency, but its DBA algorithm has the same disadvantages as that of IEEE 802.3ah. Based on IPACT, this paper presents an implementation mechanism for distributed utility-based DBA in EPON, which can treat user services differently according to their SLAs (service level agreements). Through a centralized and a distributed recursive utility algorithm, it effectively reduces UWR and UQR; through a distributed UPR-reducing mechanism, it reduces UPR; and by putting forward a USR-reducing mechanism based on an interleaved baton, it increases the probability of successfully eliminating USR and, by adding the baton sender's USR to the grant of its baton receiver, improves the bandwidth efficiency. Simulation results verify its merits.
ZHANG Heng-Yang, XU Dan, LIU Yun-Hui, CAI Xuan-Ping
2008, 19(7):1707-1715.
Abstract:Simulation is an indispensable tool in protocol design and evaluation. Typically, simulations of mobile wireless sensor networks rely upon mobility models. This paper presents a smooth Gauss-semi-Markov mobility model (SGM) that reflects realistic node movement and has more controllable parameters. The time-stationary average speed and the uniform spatial node distribution of SGM are proved using Markov-process and renewal-process theory. Thus, the SGM model can be applied to the simulation of mobile wireless sensor networks.
JING Qi, TANG Li-Yong, CHEN Zhong
2008, 19(7):1716-1730.
Abstract:Cryptography-based security solutions are not sufficient for WSNs against attacks from the interior, which are caused by compromised nodes. Trust management can deal with this problem efficiently and enhance the security, reliability and impartiality of the system. This paper gives a detailed introduction to the characteristics and taxonomy of trust management, the design of the trust framework, vulnerability analysis, attack models and countermeasures. Among these, the design of the framework, including the trust factors, the computation models and the application of trust, is the core of a trust management system and is examined in depth. Finally, several typical trust management systems are introduced, giving a panoramic view and a detailed analysis of current trust-based systems in WSNs.
SUN Yi, ZHANG Yu-Cheng, FENG Bin, FANG Geng-Fa, SHI Jing-Lin, DUTKIEWICZ Eryk
2008, 19(7):1731-1742.
Abstract:Aiming at the characteristics of wireless mobile communication, this paper proposes a new resource reservation scheme, Fast RSVP (resource reservation protocol), to guarantee the QoS of sessions in Mobile IPv6. The scheme adopts a cross-layer design in which two modules at different layers cooperate: the Mobile IP module and the RSVP module. By adding some primitives, the scheme lets the two modules work together to guarantee the QoS of sessions for mobile users. Fast RSVP introduces a series of new mechanisms such as advance resource reservation on neighbor tunnels, resource reservation on optimized routes, resource reservation for handover sessions, and path merging. Simulation results show that, compared with other traditional RSVP extensions for mobile environments, the Fast RSVP scheme has the following advantages: (1) it realizes mobile node handover with QoS guarantees; (2) it avoids the resource waste caused by triangular routes and duplicate reservations in the Mobile IP handover process; (3) it distinguishes different types of reservation requests, greatly reducing the forced termination rate of handover sessions while maintaining high network performance.
WANG Xing-Wei, WANG Qi, HUANG Min, TIAN Ye
2008, 19(7):1743-1752.
Abstract:QoS (quality of service) multicast routing is essential to NGI (next generation Internet). On the one hand, because the NGI network status is difficult to measure and express exactly, the necessary QoS routing information is inherently fuzzy. On the other hand, with the gradual commercialization of network operation, paying for network usage calls for QoS pricing and accounting; however, the benefit conflict between network providers and users requires the so-called win-win outcome to be supported. Thus, a QoS multicast routing scheme based on fuzzy integral and game theory is proposed and has been implemented in simulation. It consists of three parts: edge evaluation, game analysis, and multicast tree construction. It comprehensively evaluates candidate edges using a fuzzy integral over adaptability membership-degree functions of the edge parameters, determines by game analysis whether a Nash equilibrium between network-provider utility and user utility has been achieved on the candidate edges, and attempts to construct a multicast routing tree that not only satisfies the user's QoS requirements but also achieves or approaches Pareto optimality under the Nash equilibrium between network-provider utility and user utility. Simulation results show that the performance of the proposed scheme is better than that of some well-known schemes, including QoSMIC.
2008, 19(7):1753-1757.
Abstract:In 2005, Wang, et al. proposed a threshold proxy multi-designated-verifiers signature scheme (Wang-Fu) by combining the properties of the threshold proxy signature and the multi-designated-verifiers signature. In the same year, Chen, et al. also proposed a designated-verifier signature scheme. It is shown that the manager of the set of verifiers can directly forge signatures, so each verifier should give a zero-knowledge proof, using his secret key, for the partial data generated in the verification phase; it is also shown that the CFT (Chen-Feng-Tan) scheme does not satisfy non-transferability, i.e., the designated verifier can prove to a third party that the signature was generated by the signer. The reason is that the scheme directly follows the technique of the Schnorr signature, so the designated verifier can easily transform the signature into an ordinary signature with respect to the signer's public parameters.
2008, 19(7):1758-1765.
Abstract:This paper revisits Paillier's trapdoor one-way function, focusing on the computational problem underlying its one-wayness. A new computational problem called the one-more Paillier inversion problem is formulated. It is a natural extension of the Paillier inversion problem to the setting where adversaries have access to an inversion oracle and a challenge oracle. The relation between the one-more Paillier inversion problem and the one-more RSA problem introduced by Bellare, et al. is then investigated: it is shown that the one-more Paillier inversion problem is hard if and only if the one-more RSA problem is hard. Based on this, a new identification scheme is proposed, and it is shown that the assumed hardness of the one-more Paillier inversion problem leads to a proof that the proposed identification scheme is secure against concurrent impersonation attacks.
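As standard background (not quoted from the paper), Paillier's trapdoor one-way function and the informal shape of a "one-more" inversion game can be written as follows; the simplified oracle-counting formulation in the second comment is an assumption, not the paper's exact definition.

```latex
% Paillier's trapdoor one-way function over Z_{N^2}, for an RSA modulus N = pq
% and a base g whose order is divisible by N (e.g. g = 1 + N):
\mathcal{E}_{N,g} : \mathbb{Z}_N \times \mathbb{Z}_N^* \to \mathbb{Z}_{N^2}^*, \qquad
\mathcal{E}_{N,g}(m, r) = g^m \cdot r^N \bmod N^2 .

% One-more inversion (informal): the adversary obtains challenges
% y_1,\dots,y_q \leftarrow \mathbb{Z}_{N^2}^* from a challenge oracle, may query an
% inversion oracle at most q-1 times, and wins if it outputs preimages of all q
% challenges, i.e. it inverts "one more" point than it asked the oracle to invert.
```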
LI Jing, WANG Wen-Cheng, WU En-Hua
2008, 19(7):1766-1782.
Abstract:This paper presents a new method for the convex decomposition of polyhedrons. Compared with existing methods, the new method greatly improves efficiency, reducing execution time, storage and the number of added vertices, and it is more advantageous for polyhedrons with a large number of reflex edges. Its strategy is to gradually decompose a polyhedron through local operations guided by the occlusion relationships between facets and edges along certain orthogonal directions. For general polyhedrons in practice, the time and storage complexities of the new method are both approximately O(n), and the number of new vertices it produces is at most O(r+n^0.5), where n is the number of vertices and r is the number of reflex edges. Tests on a large number of complex polyhedrons show that, compared with the popular "cutting & splitting" method, the new method runs 14~120 times faster, reduces the storage requirement to 1/2.3~1/7.4, reduces the number of new points to at most 1/28, and in some cases needs no new points at all. Because most of the convex polyhedrons it produces are tetrahedrons, the new method yields more convex pieces than the "cutting & splitting" method. However, if the convex pieces are required to be further decomposed into tetrahedrons, the new method produces far fewer tetrahedrons, owing to the far fewer vertices it adds during decomposition. In addition, the new method can conveniently handle polyhedrons with holes and even non-manifold polyhedrons that contain isolated facets, edges or vertices.
SUN Xin, ZHOU Kun, SHI Jiao-Ying
2008, 19(7):1783-1793.
Abstract:This paper proposes a method for interactive global illumination rendering with spatially-variant dynamic materials under complex illumination. With spatially-variant dynamic materials, the materials of the scene can be changed during rendering, and the changes applied to different parts of an object can differ. The non-linear relationship between materials and outgoing radiance prevents users from changing the materials in most existing interactive global illumination rendering algorithms. If different parts of an object are covered with different materials, the materials affect the outgoing radiance in a much more complex way, so no existing interactive global illumination rendering algorithm allows users to make different changes to different parts of an object. This paper approximates a region with spatially-variant dynamic materials by dividing it into a number of sub-regions, within each of which the material is uniform and consistent. The radiance transferred in the scene may be reflected by different sub-regions successively, so the paper partitions the outgoing radiance according to the different sequences of reflecting sub-regions. All materials are represented with a linear basis, and the basis is applied to all sub-regions to obtain all the distributions of the material basis. All radiance components of all material-basis distributions are precomputed. During rendering, the basis coefficients of the materials are used to combine the corresponding precomputed data, achieving global illumination effects at interactive rates.
2008, 19(7):1794-1805.
Abstract:Solid reconstruction from engineering drawings is an efficient technology for producing solid models, and it has become an important research topic in both computer graphics and artificial intelligence. The problem of solid reconstruction from engineering drawings is introduced. A taxonomy of solid reconstruction techniques is presented and typical algorithms are reviewed. After the application areas of the algorithms are compared, the open research issues are analyzed. Finally, future work in this field is pointed out.
2008, 19(7):1806-1816.
Abstract:An effective method for depth-image-based rendering is proposed that applies texture mapping onto multiple pre-defined virtual planes in the scene. The method allows the viewpoint to be static or to move around, including crossing the plane of the source images. In this approach, the relief textures from depth images are mapped onto the virtual planes, and through a pre-warping process, the virtual planes are converted into standard polygonal textures. After the virtual planes are mosaicked, the resulting image, which supports 3D objects and immersive scenes, can be generated by polygonal texture mapping. In addition, both hardware and software implementations of the method increase the power of conventional texture mapping in image-based rendering. In particular, the scope of the viewpoint can be extended into the inner space of the depth images; as a result, a novel framework is provided for constructing real-time walkthrough systems as well as for panoramic modeling from an arbitrary viewpoint in the depth-image space.
HE Li-Li, FANG Gui-Sheng, KONG Fan-Sheng
2008, 19(7):1817-1827.
Abstract:A simple, rapid and efficient approach to constructing 3D conceptual solid models on a sketch-based user interface is presented. It emulates the traditional paper-and-pencil sketching design approach by sketching strokes on a calligraphic interface, and then applies feature-based modeling to construct different sweep-feature-based primitives according to the shapes and positional relationships of the feature strokes. In order to construct complex solid models, a feature joining and cutting mechanism based on intention capturing is introduced. This paper illustrates the implementation of the algorithm, and experiments show that the algorithm is efficient and well suited to real-time online 3D conceptual model creation.
CHEN Yi-Jiao, LU Xi-Cheng, SHI Xiang-Quan, SUN Zhi-Gang
2008, 19(7):1828-1836.
Abstract:Building on related work on load balancing, an adaptive load balancing algorithm based on Minimum Sessions First is proposed. Simulation results show that the scheme significantly reduces session-integrity disruption and achieves fairly good load balance at both the packet level and the bit level. It is a simple algorithm and can also be easily implemented in hardware. The algorithm has been implemented in a high-speed network security device designed by the National University of Defense Technology.
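The abstract does not describe the dispatcher's details; the sketch below is only a plausible reading of the Minimum-Sessions-First idea (the class, flow-key format and engine model are assumptions): a new session is bound to the engine currently carrying the fewest active sessions, and later packets of that session stick to the same engine, which preserves session integrity.

```python
class MinSessionsFirstBalancer:
    """Dispatch flows to the engine with the fewest active sessions."""
    def __init__(self, num_engines):
        self.sessions = {}                     # flow id -> engine index
        self.load = [0] * num_engines          # active sessions per engine

    def dispatch(self, flow_id):
        engine = self.sessions.get(flow_id)
        if engine is None:                     # new session: pick the lightest engine
            engine = min(range(len(self.load)), key=self.load.__getitem__)
            self.sessions[flow_id] = engine
            self.load[engine] += 1
        return engine                          # existing sessions keep their engine

    def close(self, flow_id):
        engine = self.sessions.pop(flow_id, None)
        if engine is not None:
            self.load[engine] -= 1

# Example:
# balancer = MinSessionsFirstBalancer(4)
# balancer.dispatch(("10.0.0.1", 1234, "10.0.0.2", 80, "tcp"))
```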
WU Tong, JIN Shi-Yao, LIU Hua-Feng, CHEN Ji-Ming
2008, 19(7):1837-1846.
Abstract:Existing weakly hard real-time scheduling algorithms cannot guarantee the deadline-meeting ratio of an execution sequence whose length is larger than the fixed window size. Therefore, based on the (m,p) constraint, this paper proposes an algorithm named CDBS (cut-down based scheduling). Since deciding the satisfiability of the (m,p) constraint requires going over the whole execution sequence of the task, it is very difficult and infeasible. For this reason, this paper introduces an efficient algorithm for cutting down the sequence and proves its correctness. Proper data structures are used so that the complexity of the judgment is independent of the sequence length. Experimental results demonstrate its efficiency. Furthermore, CDBS is compared with classical algorithms such as EDF (earliest deadline first), DBP (distance-based priority) and DWCS (dynamic window constraint schedule), and the results show its competitiveness.
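The paper's exact (m,p) semantics and its cut-down technique are not given in the abstract; as orientation only, the sketch below checks a weakly hard constraint in the common "at least m deadlines met in every window of p consecutive instances" form, i.e. the naive sliding-window judgment that CDBS is designed to avoid.

```python
from collections import deque

def violates_weakly_hard(history, m, p):
    """
    history: iterable of booleans, True if that job instance met its deadline.
    Returns True if some window of p consecutive instances has fewer than m
    deadlines met.
    """
    window = deque(maxlen=p)
    met_in_window = 0
    for met in history:
        if len(window) == p:                 # oldest instance leaves the window
            met_in_window -= window[0]
        window.append(bool(met))
        met_in_window += window[-1]
        if len(window) == p and met_in_window < m:
            return True                      # constraint violated in this window
    return False
```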
YI Peng, HU Hong-Chao, YU Jing, WANG Bin-Qiang
2008, 19(7):1847-1855.
Abstract:Scheduling algorithms are very important in network design for implementing per-hop behaviors (PHBs) in the DiffServ model. Most existing DiffServ-supporting scheduling algorithms are based on output-queued (OQ) or input-queued (IQ) switches, which are not suitable for high-speed networks. This paper proposes a distributed DiffServ-supporting scheduling (DDSS) algorithm based on combined input-crosspoint-queued (CICQ) switches. Theoretical analysis shows that the DDSS algorithm achieves good fairness. The DDSS algorithm adopts a two-stage flow control mechanism based on periodic statistics to achieve fair bandwidth allocation for expedited forwarding (EF) and assured forwarding (AF) traffic, and uses a priority scheduling mechanism to provide lower delay for EF traffic. The time complexity of the DDSS algorithm is only O(log N), so it is practical and scalable for high-speed networks. Simulation results show that the DDSS algorithm achieves good fairness and delay performance, making it well suited to supporting the DiffServ model.
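The sketch below is not the DDSS algorithm; it is a simplified single-output illustration (all names, the credit scheme and weights are assumptions) of the two ideas the abstract combines: strict priority for EF to keep its delay low, plus weighted sharing among AF classes for fair bandwidth allocation.

```python
def pick_next_crosspoint(queues, af_weights, af_credit):
    """
    queues: class name -> queue length; af_weights: AF class -> weight;
    af_credit: running credits, mutated across calls. Returns the class
    to serve next, or None if everything is empty.
    """
    if queues.get("EF", 0) > 0:              # EF is always served first
        return "EF"
    backlogged = [c for c in af_weights if queues.get(c, 0) > 0]
    if not backlogged:
        return None
    for c in backlogged:                     # accumulate credit in proportion to weight
        af_credit[c] = af_credit.get(c, 0.0) + af_weights[c]
    chosen = max(backlogged, key=lambda c: af_credit[c])
    af_credit[chosen] -= sum(af_weights[c] for c in backlogged)
    return chosen
```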
HU Hong-Chao, YI Peng, GUO Yun-Fei, LI Yu-Feng
2008, 19(7):1856-1864.
Abstract:Scheduling policies play a significant role in guaranteeing the performance of core routing and switching devices. The limitations in complexity and extensibility of current scheduling policies for combined input and crosspoint queueing switching fabrics are first analyzed. Based on this analysis, a principle for designing highly extensible scheduling policies and the concept of a virtual channel are proposed. Building on this principle and the virtual channel, a dynamic round-robin scheduling algorithm, FDR (fair service and dynamic round robin), is proposed; it is simple, highly efficient and fair. FDR is based on a round-robin mechanism whose complexity is only O(1). It allocates the scheduling share of each virtual channel according to its current state, so FDR has good dynamic and real-time performance and can adapt to network environments with unbalanced traffic load. Simulation results under SPES (switching performance evaluation system) show that FDR exhibits good delay, throughput and anti-burst performance.
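FDR's actual share-allocation rule is not given in the abstract; the following deficit-round-robin-style sketch (quantum scaling by backlog, parameter names and the byte model are all assumptions) only illustrates the flavor of an O(1) round-robin scheduler whose per-channel share tracks the channel's current state.

```python
from collections import deque

def fdr_like_round_robin(channels, base_quantum=1500):
    """
    channels: list of deques of packet lengths in bytes.
    Visits virtual channels in round-robin order; each visit grants a quantum
    scaled by the channel's current backlog share, so busier channels
    dynamically receive a larger scheduling share. Yields (channel, length).
    """
    deficit = [0] * len(channels)
    total = lambda: sum(len(q) for q in channels)
    while total() > 0:
        for i, q in enumerate(channels):
            if not q:
                deficit[i] = 0               # an idle channel keeps no credit
                continue
            backlog_share = len(q) / total() # current state of this channel
            deficit[i] += base_quantum * (1 + backlog_share)
            while q and q[0] <= deficit[i]:
                length = q.popleft()
                deficit[i] -= length
                yield i, length

# Example: list(fdr_like_round_robin([deque([500, 1500]), deque([9000])]))
```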