2019, 30(11):3243-3258. DOI: 10.13328/j.cnki.jos.005567
Abstract: Synthesizing ranking functions of loop programs is the dominant method for checking their termination. In this study, the synthesis of ranking functions for a class of loop programs is reduced to the detection of positive polynomials on the standard simplex. This enables the computation of the desired ranking functions with a linear programming tool. Different from existing methods based on cylindrical algebraic decomposition, the proposed method can obtain more expressive polynomial ranking functions within acceptable time.
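As a rough illustration of reducing ranking-function synthesis to linear programming, the following toy sketch encodes decrease and boundedness constraints for the loop `while x >= 1: x = x - 2` and solves them with SciPy's linprog; the loop, the constraint encoding, and the use of SciPy are illustrative assumptions and do not reproduce the paper's simplex-based positivity reduction.

```python
# Toy illustration: synthesize a linear ranking function f(x) = a*x + b
# for the loop "while x >= 1: x = x - 2" with a linear program.
# Constraints (sufficient for this specific loop):
#   decrease:    f(x) - f(x-2) = 2a >= 1
#   boundedness: f(1) = a + b  >= 0   (smallest state satisfying the guard)
from scipy.optimize import linprog

# linprog solves: min c^T z  s.t.  A_ub z <= b_ub, with z = [a, b] >= 0 by default
c = [1, 1]                     # any objective; we only need a feasible point
A_ub = [[-2, 0],               # -2a     <= -1   (i.e., 2a >= 1)
        [-1, -1]]              # -a - b  <=  0   (i.e., a + b >= 0)
b_ub = [-1, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
a, b = res.x
print(f"ranking function f(x) = {a:.2f}*x + {b:.2f}")   # e.g., f(x) = 0.50*x
```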
GAN Shui-Tao , WANG Lin-Zhang , XIE Xiang-Hui , QIN Xiao-Jun , ZHOU Lin , CHEN Zuo-Ning
2019, 30(11):3259-3280. DOI: 10.13328/j.cnki.jos.005562
Abstract: This paper introduces a novel approach called OPT-SSE to speed up and scale guided symbolic execution based on program function label slices. OPT-SSE consists of two parts. The first is static analysis, which decomposes the whole set of program execution paths into different function label slices by establishing a mapping between function tags and the program CFG. The second is symbolic execution guided by specified function tags, which prunes unrelated branch paths in time according to the specified function-tag flow during symbolic execution. By analyzing a specified function label slice, or different function label slices concurrently, OPT-SSE avoids getting stuck in complex loops, which speeds up the goal-guided process and scales symbolic execution. OPT-SSE is evaluated on twenty applications from ten well-known open-source projects such as binutils, gzip, and coreutils. Compared with KLEE, OPT-SSE speeds up goal guidance by 4.238 times, increases the goal-guided success rate by about 31%, instruction coverage by about 8.95%, and branch coverage by about 8.28%.
SUN Quan , XU Lei , XIA Xin-Meng , ZHANG Wei-Feng
2019, 30(11):3281-3296. DOI: 10.13328/j.cnki.jos.005582
Abstract: The Android system has long dominated mobile operating systems. Its unique event-driven model and multi-threaded model enhance the user experience and improve program performance, but also cause concurrency defects. In concurrent programs, the non-determinism of thread scheduling and the difficulty of reproduction make concurrency bugs hard to detect. Existing techniques mainly analyze happens-before relationships through dynamic analysis and then detect concurrency bugs in Android applications (Apps for short). Nevertheless, they still suffer from low coverage and high false positive (FP) rates due to the limitations of dynamic methods. In this study, data races in Android applications are detected by shared variable analysis and constraint solving, and a detection tool, RaceDetector, is implemented. The tool first extracts the relevant information according to the characteristics of the Android system and the definition of a data race, and extends shared variable analysis to improve accuracy and performance; it then obtains a set of suspicious data races through suspicious data race analysis. Next, for each suspicious data race candidate, it checks the feasibility of the execution path and the order of happens-before relationships by constraint solving, and finally reports the real data races. In the experiments, 15 popular applications with APK files collected from Google Play and other sources are used as data sets. RaceDetector reports 340 data races on average, including 13% (44/340) false positives. Compared with the existing tool EventRacer, which triggers data races with 300 random events and reports 2 harmful data races on average, RaceDetector covers all thread schedules and event schedules and reports 15 harmful data races on average.
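For readers unfamiliar with happens-before-based race detection, the following self-contained sketch shows the basic check such tools perform: two accesses to the same variable, at least one of them a write, with no happens-before ordering between them. The event trace and edge set are invented, and the paper's shared variable analysis and constraint-solving step are not reproduced.

```python
# Toy happens-before race check: report pairs of accesses to the same field
# that are unordered by the happens-before relation and include a write.
from itertools import combinations

# (event id, thread, op, variable) -- a made-up trace of an Android app
events = [(1, "main", "write", "count"),
          (2, "main", "post",  None),
          (3, "bg",   "read",  "count")]

# happens-before edges: program order only -- the post/run edge is missing,
# modeling an unsynchronized background access
hb_edges = {(1, 2)}

def happens_before(a, b, edges):
    """Reachability in the happens-before DAG (transitive closure on demand)."""
    frontier, seen = {a}, set()
    while frontier:
        x = frontier.pop()
        seen.add(x)
        if x == b:
            return True
        frontier |= {w for (v, w) in edges if v == x and w not in seen}
    return False

for e1, e2 in combinations(events, 2):
    same_var = e1[3] is not None and e1[3] == e2[3]
    has_write = "write" in (e1[2], e2[2])
    ordered = happens_before(e1[0], e2[0], hb_edges) or happens_before(e2[0], e1[0], hb_edges)
    if same_var and has_write and not ordered:
        print("potential data race:", e1, e2)   # reports events 1 and 3 on "count"
```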
CHEN Xing , HUANG Zhi-Ming , YE Xin-Shu , MA Yun , CHEN Yi-Yan , GUO Wen-Zhong
2019, 30(11):3297-3312. DOI: 10.13328/j.cnki.jos.005802
Abstract: As the infrastructure supporting smart homes evolves, the smart home has entered a new stage featuring intelligent services. A large number of complex and heterogeneous smart devices cooperate with each other and constitute a variety of intelligent, integrated smart home applications, among which context-aware services are typical representatives. Context-aware services aim to provide accurate services to users according to their contexts. Developers usually design and develop these services based on scenarios and face huge challenges from device and demand variations. They first have to be familiar with the APIs provided by smart devices and then build the program upon them according to the functional and non-functional requirements of the services. In order to customize and develop these services more efficiently, this study proposes an approach to model and execute context-aware services at runtime, which introduces the knowledge graph into the development process. First, the concepts and relations of context-aware services are defined in the concept model of the knowledge graph. Second, runtime instances of the concepts and relations in the knowledge graph are used to represent the knowledge of the user's context. Third, knowledge reasoning based on the runtime knowledge graph is implemented to perform device functions automatically. The proposed framework is evaluated on a prototype system, and the results show that the proposed approach can model and execute context-aware services at runtime while reducing LOC (lines of code) by 90%.
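A minimal sketch of rule-style reasoning over a runtime knowledge graph is given below; the triples, device names, and the single lighting rule are hypothetical examples rather than the concept model defined in the paper.

```python
# Minimal sketch of runtime knowledge-graph reasoning for a context-aware service.
triples = {
    ("alice", "locatedIn", "living_room"),
    ("living_room", "hasDevice", "ceiling_light"),
    ("living_room", "luminance", "low"),
    ("ceiling_light", "hasFunction", "turn_on"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a (subject, predicate, object) pattern."""
    return [(a, b, c) for (a, b, c) in triples
            if (s is None or a == s) and (p is None or b == p) and (o is None or c == o)]

def lighting_rule(user):
    """If the user's room is dark and contains a light, invoke its turn_on function."""
    for (_, _, room) in query(s=user, p="locatedIn"):
        if query(s=room, p="luminance", o="low"):
            for (_, _, device) in query(s=room, p="hasDevice"):
                if query(s=device, p="hasFunction", o="turn_on"):
                    print(f"invoke {device}.turn_on() for {user} in {room}")

lighting_rule("alice")   # -> invoke ceiling_light.turn_on() for alice in living_room
```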
ZHOU Jun-Zuo , ZHU Zong-Kui , HE Zheng-Qiu , CHEN Wen-Liang , ZHANG Min
2019, 30(11):3313-3325. DOI: 10.13328/j.cnki.jos.005862
Abstract: With the development of human-machine dialogue, it is of great significance for the computer to accurately understand the user's query intention in human-machine dialogue systems. Intention classification aims at judging the user's intention in human-machine dialogue and improves the accuracy and naturalness of the dialogue system. This study first analyzes the advantages and disadvantages of multiple classification models in the intention classification task. On this basis, it proposes a hybrid neural network model to comprehensively utilize the diverse outputs of multiple deep network models. To further improve performance, language model embeddings are used in input feature preprocessing, and the semantic mining ability of the hybrid network effectively improves the expressive power of the model. The proposed model achieves 2.95% and 3.85% performance improvement on two data sets, respectively, compared with the best benchmark model, and also achieves top performance in a shared task.
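The following toy sketch illustrates one simple way the outputs of several intent classifiers can be fused; the three probability vectors and the fixed fusion weights are invented stand-ins, not the paper's hybrid network.

```python
# Toy fusion of the probability outputs of several intent classifiers.
# The three distributions stand in for, e.g., a CNN, an LSTM, and a Transformer
# head; real systems would learn the fusion weights rather than fix them.
import numpy as np

intents = ["weather", "music", "alarm"]
model_probs = np.array([
    [0.60, 0.30, 0.10],   # classifier 1 (hypothetical softmax output)
    [0.55, 0.35, 0.10],   # classifier 2
    [0.20, 0.70, 0.10],   # classifier 3
])
weights = np.array([0.4, 0.4, 0.2])        # assumed fusion weights

fused = weights @ model_probs              # weighted average of the distributions
fused /= fused.sum()                       # renormalize
print(intents[int(np.argmax(fused))], fused.round(3))   # -> weather [0.5 0.4 0.1]
```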
ZHANG Nan , DING Shi-Fei , ZHANG Jian , ZHAO Xing-Yu
2019, 30(11):3326-3339. DOI: 10.13328/j.cnki.jos.005574
Abstract: Stacking restricted Boltzmann machines (RBM) to create deep networks, such as deep belief networks (DBN), has become one of the most important research fields in deep learning. The point-wise gated restricted Boltzmann machine (pgRBM), an RBM variant, can effectively find task-relevant patterns in data containing irrelevant patterns and thus achieves satisfactory classification results. Given that the training data consist of noisy data and clean data, how to exploit the clean data to promote the performance of pgRBM is an open problem. To address it, this study first proposes a method named pgRBM based on random noisy data and clean data (pgrncRBM). pgrncRBM makes use of an RBM and the clean data to obtain the initial values of the task-relevant weights, so it can learn the "clean" data from data containing random noise. In pgrncRBM, an ordinary RBM is used to pre-train the weights of both task-relevant and irrelevant patterns. If the noise itself is an image, pgrncRBM cannot learn the task-relevant patterns from the noisy data. The spike-and-slab RBM (ssRBM), another RBM variant, uses two types of hidden units to determine the mean and covariance of each visible unit. Therefore, this study combines ssRBM with pgRBM and proposes a method named pgRBM based on image noisy data and clean data (pgincRBM). pgincRBM uses the ssRBM to model the noise, so it can learn the "clean" data from data containing image noise. The study then stacks pgrncRBM, pgincRBM, and RBMs to create deep networks, and discusses the feasibility of applying the weight uncertainty method to prevent overfitting in the proposed networks. Experimental results on MNIST variation datasets show that pgrncRBM and pgincRBM are effective neural network learning methods.
SU Chang , FU Ze , ZHENG Fa-Kui , CHEN Yi-Jiang
2019, 30(11):3340-3354. DOI: 10.13328/j.cnki.jos.005634
Abstract: Metaphor computation is an important problem in natural language processing. In this work, an intensive study of metaphor identification is carried out from different perspectives, including linguistic, psychological, and cognitive ones. Human categorization is a dynamic process of measuring the difference between objects from different angles. Therefore, an idea of dynamic categorization from multiple angles is proposed to recognize metaphors, which differs from traditional metaphor recognition methods. The research involves three aspects: how to obtain features of concepts, how to choose angles by features, and how to measure difference based on a specific angle. Then, an experiment on nominal metaphor recognition is performed based on dynamic categorization. The experimental results show that the accuracy of metaphorical/literal reference recognition can reach 85.4%, which supports the validity and efficiency of the proposed method.
YANG Ming-Qi , LI Zhan-Shan , ZHANG Jia-Chen
2019, 30(11):3355-3363. DOI: 10.13328/j.cnki.jos.005559
Abstract: Table constraints define an arbitrary constraint explicitly as a set of solutions or non-solutions. Generalized arc consistency (GAC) is the most widely used consistency for solving non-binary constraint satisfaction problems (CSPs). Simple tabular reduction (STR), which dynamically deletes invalid tuples during search, speeds up the process of updating supports for variables and can restore state in constant time when backtracking occurs. STR dynamically maintains the valid parts of tables during search, which has been shown to be efficient for enforcing GAC. Recent research on improving STR mainly focuses on compressed representations of tables. In this study, a new algorithm is proposed based on dynamically maintaining the valid parts of tables, which deletes invalid tuples and updates supports when enforcing GAC on table constraints. The proposed algorithm is applied to original (uncompressed) table constraints and can also restore state in constant time. Experimental evaluations show that the proposed algorithm outperforms existing STR algorithms without table compression. On some classes of problems, it is even more efficient than state-of-the-art compression-based STR algorithms.
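A compact sketch of the core STR step, dropping tuples that contain deleted values and recomputing each variable's supported values, is shown below; the constraint, the domains, and the list-based representation (rather than the sparse-set structure that gives real STR constant-time restoration) are illustrative simplifications.

```python
# Toy simple tabular reduction (STR) pass on one table constraint over (x, y, z).
variables = ("x", "y", "z")
table = [(0, 1, 2), (1, 1, 2), (1, 2, 0), (2, 0, 1)]
current_domains = {"x": {0, 1, 2}, "y": {1, 2}, "z": {0, 2}}   # y=0, z=1 already pruned

def str_propagate(variables, table, domains):
    # 1. keep only tuples whose values are all still in the current domains
    valid = [t for t in table
             if all(v in domains[var] for var, v in zip(variables, t))]
    # 2. values that appear in no valid tuple lose their support and are pruned
    supported = {var: {t[i] for t in valid} for i, var in enumerate(variables)}
    for var in variables:
        domains[var] &= supported[var]
    return valid, domains

valid, domains = str_propagate(variables, table, current_domains)
print(valid)    # [(0, 1, 2), (1, 1, 2), (1, 2, 0)] -- last tuple was invalid
print(domains)  # x loses value 2, which no longer has a support
```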
GAO Jin-Tao , LI Zhan-Huai , DU Hong-Tao , LIU Wen-Jie
2019, 30(11):3364-3381. DOI: 10.13328/j.cnki.jos.005579
Abstract: Sort-merge join is an important implementation of join in database systems and is used even more widely than hash join. In a distributed environment, data is sharded and distributed across many nodes and usually needs to be transmitted over the network, which is very expensive. Therefore, it is far more challenging to process sort-merge joins efficiently in a distributed database. The traditional strategy first sorts the data and then performs merge-join on the sorted data, and both steps operate on the original data. However, the original data usually contains useless data blocks that do not participate in the join but add extra cost, including network cost; the larger the data, the higher the probability of useless data blocks. The traditional sort-merge join strategy does not pre-handle such useless data. In this study, a prune-based parallel sort-merge join, called Pr_PSMJ, is proposed, which efficiently prunes useless data from the join data in advance and improves join efficiency. Firstly, a bilateral adjacency list (BAL) is constructed from the statistics of the shards of the join data. Using the BAL, the useless data can be pruned while the correctness of the final join result is guaranteed. Secondly, after pruning, the optimal place to execute each local join can be computed from the BAL, minimizing the amount of data migrating among nodes. Finally, during the join step, since the BAL guarantees the independence of local joins, the execution of the sort-merge join can be easily parallelized, and on every executing node it is natural to parallelize the local partial join in a multi-core environment; the final result is obtained by merging the local results. Because the highly efficient prune operation is performed before the join, Pr_PSMJ fits almost every sort-merge strategy and is also instructive for other join strategies. The correctness, efficiency, and adaptability of the algorithm are analyzed. Experiments show that, in a distributed environment and on large data, Pr_PSMJ effectively decreases network overhead and improves join efficiency compared with other strategies.
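The prune-before-join idea can be sketched as follows: per-shard min/max statistics stand in for the bilateral adjacency list, shards whose key ranges overlap nothing on the other side are dropped, and the survivors are sort-merge joined. The relations, shard layout, and statistics are invented, and the sketch is single-node only.

```python
# Toy prune-then-sort-merge join over sharded relations R and S.
R_shards = {"r0": [1, 3, 5], "r1": [40, 42], "r2": [7, 9]}
S_shards = {"s0": [2, 5, 9], "s1": [100, 101]}

def stats(shards):
    return {name: (min(rows), max(rows)) for name, rows in shards.items()}

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

r_stats, s_stats = stats(R_shards), stats(S_shards)
# prune shards whose key range intersects no shard on the other side
r_keep = [n for n, st in r_stats.items() if any(overlaps(st, t) for t in s_stats.values())]
s_keep = [n for n, st in s_stats.items() if any(overlaps(st, t) for t in r_stats.values())]

R = sorted(v for n in r_keep for v in R_shards[n])
S = sorted(v for n in s_keep for v in S_shards[n])

# classic merge join on the surviving, sorted data
i = j = 0
matches = []
while i < len(R) and j < len(S):
    if R[i] == S[j]:
        matches.append(R[i]); i += 1; j += 1
    elif R[i] < S[j]:
        i += 1
    else:
        j += 1
print("pruned R shards:", sorted(set(R_shards) - set(r_keep)))   # ['r1']
print("matches:", matches)                                        # [5, 9]
```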
LI Lin , ZHU Ge , XIE Qing , SU Chang , YANG Zheng-Lu
2019, 30(11):3382-3396. DOI: 10.13328/j.cnki.jos.005542
Abstract: Making use of users' rating data to recommend products or items to users is a popular approach. Currently, more and more users contribute reviews to recommender systems for better online shopping experiences. Researchers have become interested in using review texts as extra information to improve recommendation quality. It is argued that the reviews written by a user implicitly represent his or her preferences. In this study, a preference-guided recommendation approach is proposed that simultaneously learns latent factors from rating data and latent topics from review texts. More specifically, the learned latent topics are assumed to be positively correlated with both the corresponding user factors and item factors, which further improves the accuracy of rating prediction. The proposed approach has two advantages. One is that, in order to capture such a dependent correlation, a transformation function is used to simultaneously learn the latent features, i.e., latent factors and latent topics. The other is that the predicted ratings of items are influenced by the implicit tastes of users, i.e., the latent topics from review texts. Experiments are conducted on data from Amazon consisting of 28 categories. Experimental results show that the proposed approach obtains a 3.32% improvement over the recent TopicMF approach on some category datasets, with an average improvement of 0.92%, in terms of mean squared error.
MA Hui-Fang , ZHANG Di , ZHAO Wei-Zhong , SHI Zhong-Zhi
2019, 30(11):3397-3412. DOI: 10.13328/j.cnki.jos.005545
Abstract: Recommending valuable and interesting content to microblog users is an important way to improve the user experience. In this study, tags are considered as users' interests, and a microblog recommendation method based on hypergraph random walk tag extension and tag probability correlation is proposed through an analysis of the characteristics and limitations of existing microblog recommendation algorithms. Firstly, microblogs are treated as hyperedges and each term as a hypervertex, and weighting strategies for both hyperedges and hypervertices are established. A random walk is conducted on the hypergraph to obtain a number of keywords for expanding users' tags. Then the weight of each tag for each user is enhanced based on a relevance weighting scheme, and the user-tag matrix is constructed. Probability correlation between tags is calculated to construct the tag similarity matrix, which is then used to update the user-tag matrix so that it contains both user interest information and the relationships between tags. Experimental results show that the algorithm is effective for microblog recommendation.
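A rough sketch of a random walk on a term-microblog hypergraph used to surface expansion keywords follows; the tiny corpus, uniform transition probabilities, and walk length are illustrative simplifications of the weighted scheme described above.

```python
# Toy random walk on a hypergraph: microblogs are hyperedges, terms are vertices.
# Frequently visited terms (other than the seed tag) are candidate expansion tags.
import random
from collections import Counter

random.seed(0)
hyperedges = {                      # made-up microblogs as sets of terms
    "m1": {"marathon", "running", "shoes"},
    "m2": {"running", "fitness", "diet"},
    "m3": {"diet", "recipe", "fitness"},
}
incident = {}                        # vertex -> hyperedges containing it
for e, terms in hyperedges.items():
    for t in terms:
        incident.setdefault(t, []).append(e)

def random_walk(start, steps=2000):
    visits = Counter()
    v = start
    for _ in range(steps):
        e = random.choice(incident[v])                    # pick an incident microblog
        v = random.choice(list(hyperedges[e] - {v}))      # then another term in it
        visits[v] += 1
    return visits

seed_tag = "running"
counts = random_walk(seed_tag)
expansion = [t for t, _ in counts.most_common() if t != seed_tag][:3]
print(expansion)     # e.g. ['fitness', 'diet', 'shoes'] -- candidate extra tags
```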
2019, 30(11):3413-3426. DOI: 10.13328/j.cnki.jos.005571
Abstract: Residue interaction network alignment plays an important role in studying the relationship between protein structure and function. In this study, protein sequence information (residue matching degree) is introduced into the optimization function of the MAGNA algorithm, which performs network alignment using network topological information, and the influence of topological and sequence information on residue interaction network alignment is studied. An SI-MAGNA algorithm suitable for residue interaction network alignment is then proposed. Experiments show that SI-MAGNA achieves higher edge correctness (EC) than classical alignment methods based on network topological information (GRAAL, MI-GRAAL, MAGNA, and CytoGEDEVO). Finally, SI-MAGNA is used to align and analyze the residue interaction networks of biologically homologous proteins with different heat-resistance temperatures, studying the influence of protein structure on thermal stability.
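For reference, edge correctness (EC) is the fraction of edges of the source network whose endpoints are mapped onto an edge of the target network; the two small graphs and the node mapping below are invented.

```python
# Edge correctness (EC) of a network alignment:
#   EC = |{(u,v) in E1 : (f(u), f(v)) in E2}| / |E1|
E1 = {("a", "b"), ("b", "c"), ("c", "d")}                 # residue network 1
E2 = {("A", "B"), ("B", "C"), ("C", "E")}                 # residue network 2
f = {"a": "A", "b": "B", "c": "C", "d": "D"}              # alignment (node mapping)

def edge_correctness(E1, E2, f):
    undirected = {frozenset(e) for e in E2}
    conserved = sum(1 for (u, v) in E1 if frozenset((f[u], f[v])) in undirected)
    return conserved / len(E1)

print(edge_correctness(E1, E2, f))   # 2/3 ~= 0.667
```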
YANG Hai-Feng , ZHANG Yong-Bo , HUANG Yu-Liang , FU Hui-Min
2019, 30(11):3427-3439. DOI: 10.13328/j.cnki.jos.005569
Abstract: High-precision indoor positioning has broad market prospects. Traditional WKNN-based indoor positioning algorithms have difficulty dealing with large target spaces, and their position estimates suffer from inaccuracy and instability such as rebounding or clustering. To solve these problems, this study proposes a WKNN indoor positioning algorithm based on spatial-characteristic partitioning and former-location restriction. In the proposed algorithm, a large target space is divided into multiple partitions according to its spatial characteristics, which solves the problem that a single fingerprint database cannot cover the whole space. A restriction relationship between the former and present positions is also introduced, which improves the quality of candidate reference points and thus the smoothness of the estimation results. Results of a large number of indoor positioning experiments in real environments show that the proposed algorithm effectively improves indoor positioning accuracy compared with traditional WKNN.
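A bare-bones weighted k-nearest-neighbor position estimate with a former-location restriction is sketched below; the fingerprint database, RSSI reading, and search radius are made-up values, and the spatial partitioning step is omitted.

```python
# Toy WKNN positioning: weight the k closest fingerprints by inverse RSSI distance,
# and only consider reference points within `radius` of the previous estimate.
import math

# fingerprint database: reference point -> ((x, y), RSSI vector over 3 APs)
fingerprints = {
    "rp1": ((0.0, 0.0), [-40, -60, -75]),
    "rp2": ((2.0, 0.0), [-45, -55, -70]),
    "rp3": ((2.0, 2.0), [-55, -50, -65]),
    "rp4": ((9.0, 9.0), [-80, -70, -45]),   # far away; should be excluded
}

def wknn(rssi, prev_pos, radius=4.0, k=2):
    candidates = []
    for (pos, ref) in fingerprints.values():
        if math.dist(pos, prev_pos) > radius:      # former-location restriction
            continue
        d = math.dist(rssi, ref)                   # distance in signal space
        candidates.append((d, pos))
    candidates.sort()
    nearest = candidates[:k]
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / sum(weights)
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / sum(weights)
    return (x, y)

print(wknn(rssi=[-44, -56, -71], prev_pos=(1.0, 1.0)))   # estimate close to rp2
```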
ZHU Jin-Qi , SUN Hua-Zhi , HUANG Yong-Xin , LIU Ming
2019, 30(11):3440-3456. DOI: 10.13328/j.cnki.jos.005655
Abstract: Software defined networks may need to update their data planes frequently to optimize network performance in response to flow dynamics or traffic load transfer. Most existing strategies first determine a target route configuration based on the current network flow status and then update the flows in the network to that configuration. However, because of the low operation speed of ternary content addressable memory (TCAM) for flow table updates, route updates usually suffer long delays, and the delay grows in large networks or networks whose topology changes frequently. According to recent studies, most flows have short durations and the total flow intensity may vary after a certain period. Hence, the new route configuration may already be inefficient if the route update takes too long. In this study, real-time route update for SDN is addressed and a delay-satisfied route selection and updating scheme (DSRSU) is proposed. Different from most existing studies, DSRSU jointly considers flow route selection in the control plane and route update scheduling in the data plane to reduce the route update delay. More specifically, only a subset of flows is chosen for route updates during route selection. To further improve the update speed, an update dependency graph is established to determine the scheduling order of the flows during update scheduling. Simulation results demonstrate that DSRSU largely reduces the route update delay compared with previous route update strategies while maintaining similar routing performance.
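One simple way to schedule updates from an update dependency graph is a topological (Kahn-style) ordering that yields rounds of switches updated in parallel, as sketched below; the dependency edges are hypothetical and the consistency conditions that would generate them are not modeled.

```python
# Toy scheduling of SDN route updates from an update dependency graph.
# An edge (u, v) means switch u must be updated before switch v.
from collections import defaultdict

deps = [("s3", "s2"), ("s2", "s1"), ("s4", "s1")]   # hypothetical dependencies
switches = {"s1", "s2", "s3", "s4", "s5"}

indeg = {s: 0 for s in switches}
succ = defaultdict(list)
for u, v in deps:
    succ[u].append(v)
    indeg[v] += 1

rounds = []
ready = sorted(s for s in switches if indeg[s] == 0)
while ready:
    rounds.append(ready)                     # these switches update in parallel
    nxt = []
    for u in ready:
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                nxt.append(v)
    ready = sorted(nxt)

print(rounds)   # [['s3', 's4', 's5'], ['s2'], ['s1']]
```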
SHI Ke , SONG Xiao-Mei , WANG Xin-Da , HU Wen-Biao
2019, 30(11):3457-3468. DOI: 10.13328/j.cnki.jos.005577
Abstract: Indoor positioning is fundamental to many smartphone applications and has attracted a great deal of research effort in recent years. Among existing techniques, RSSI (received signal strength indication) based fingerprinting has become an increasingly popular way to realize indoor smartphone positioning with existing WiFi infrastructure. However, wireless signal transmission is easily affected by the environment, which may cause deviations in WiFi fingerprint positioning. To address this issue, a multi-sensor assisted WiFi fingerprinting method is proposed to improve the performance of RSSI fingerprinting. In this method, data obtained from the smartphone's built-in sensors, such as the accelerometer and gyroscope, is used to estimate the user's trajectory along with its credibility. Then a probability model combining the RSSI fingerprints and the user's trajectory is established to match fingerprints to locations. Experiments show that, compared with classical fingerprint-matching algorithms, the proposed method effectively reduces the adverse effects of environmental changes on positioning and improves average localization accuracy by making use of the sensor data.
MA Hui , TANG Yong , LIANG Rui-Shi
2019, 30(11):3469-3485. DOI: 10.13328/j.cnki.jos.005623
Abstract: In contrast to path queries under private transportation, which mainly concern the length or driving time of a path, path queries under public transportation have to consider the time sequences between consecutive edges as well as the total cost of a path. This study looks into three types of queries: given a source node, a destination node, a time interval, and a cost constraint, find a path within the given time interval that satisfies the cost constraint and has (i) the earliest arrival time, (ii) the latest departure time, or (iii) the shortest duration. Firstly, a modified Dijkstra's algorithm called Dijk-CCMTP is presented, from which algorithms for the three types of queries are respectively derived. Then, an effective indexing structure ACCTL (approximate cost constrained time labelling) is proposed. ACCTL invokes Dijk-CCMTP to precompute canonical paths from (and to) each node in the graph. For any query from a source node s to a destination node d, an approximate answer path can be obtained by matching the canonical paths from s (and to d) in the manner of a database table join, avoiding traversal of the entire graph. The preprocessing time to build ACCTL is O(|V|·Δmax·|E|·(log|E|+Δmax)), where |V| is the number of nodes, |E| the number of edges, and Δmax the maximal degree among the nodes in the graph. Finally, the efficiency and effectiveness of ACCTL are confirmed. Experimental results show that query answering with ACCTL is 2 to 3 orders of magnitude faster than Dijkstra-variant methods. The index preprocessing time and index size are also analyzed.
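A simplified sketch of an earliest-arrival search under a cost cap over timetable connections is given below; it labels (node, cost-spent) states with a priority queue, in the spirit of a Dijkstra variant, but it is not the Dijk-CCMTP algorithm or the ACCTL index, and the timetable is invented.

```python
# Toy earliest-arrival query with a cost constraint on a public-transport graph.
# Each connection is (from, to, depart, arrive, cost); states are (node, cost spent).
import heapq

connections = [
    ("A", "B", 0, 10, 2), ("B", "C", 12, 20, 2), ("A", "C", 5, 40, 1),
    ("B", "C", 11, 15, 5),                      # faster transfer but more expensive
]

def earliest_arrival(src, dst, start_time, budget):
    out = {}
    for c in connections:
        out.setdefault(c[0], []).append(c)
    pq = [(start_time, src, 0)]                  # (arrival_time, node, cost_spent)
    best = {}                                    # (node, cost) -> earliest arrival seen
    while pq:
        t, node, cost = heapq.heappop(pq)
        if node == dst:
            return t, cost
        if best.get((node, cost), float("inf")) < t:
            continue
        for (_, nxt, dep, arr, c) in out.get(node, []):
            if dep >= t and cost + c <= budget:  # catchable and within budget
                state = (nxt, cost + c)
                if arr < best.get(state, float("inf")):
                    best[state] = arr
                    heapq.heappush(pq, (arr, nxt, cost + c))
    return None

print(earliest_arrival("A", "C", start_time=0, budget=4))   # (20, 4): A->B->C
print(earliest_arrival("A", "C", start_time=0, budget=1))   # (40, 1): direct A->C
```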
2019, 30(11):3486-3502. DOI: 10.13328/j.cnki.jos.005575
Abstract: Zhao and Chan recently proposed a Bitcoin voting protocol that allows n voters to vote for one of two candidates to receive Bitcoin funding. Voters first generate their ballots using secret sharing, commitment, and zero-knowledge proof techniques, and then vote for and fund the candidates with bitcoins through Bitcoin transactions, which protects the voters' privacy. This study extends the scheme to support n voters producing general ballots for m candidates, who vote for and fund the candidates with Ether through smart contracts. Meanwhile, the smart contract in this study does not rely on a threshold signature scheme and is more efficient, and the main business logic of the contract is tested with a model checking tool.
XU Jian , WANG An-Di , BI Meng , ZHOU Fu-Cai
2019, 30(11):3503-3517. DOI: 10.13328/j.cnki.jos.005573
Abstract: The k-nearest neighbor (kNN) classifier has wide applications in areas such as bioinformatics, stock forecasting, Web page classification, and Iris classification prediction. With increasing awareness of user privacy protection, kNN classification also needs to support encrypted data, so a privacy-preserving kNN classifier (PP-kNN) is designed to keep user data private. Firstly, the operations of the kNN classifier are analyzed and a set of basic operations is extracted, including addition, multiplication, comparison, and inner product. Then, two homomorphic encryption schemes and one fully homomorphic encryption scheme are selected to encrypt the data. Security protocols are designed for each of these operations, whose outputs are consistent with the same operations over plaintext data, and the protocols are proved secure in the semi-honest model. Finally, these security protocols are composed in a modular way to realize kNN classification over encrypted data. The PP-kNN classifier is implemented and evaluated on real data; the results show that it classifies ciphertext data efficiently while providing privacy protection for user data.
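One building block such a design can rely on is an additively homomorphic scheme; the toy Paillier-style example below (with tiny, insecure parameters chosen only for illustration) shows the plaintext-consistent addition over ciphertexts that a secure-addition protocol exploits. It is not the specific scheme selection made in the paper.

```python
# Toy Paillier encryption with tiny, insecure parameters: illustrates the additive
# homomorphism E(a) * E(b) mod n^2 = E(a + b) used by secure-addition protocols.
import math, random

p, q = 47, 59                      # toy primes (never use sizes like this in practice)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)       # lambda(n)
g = n + 1                          # standard simple choice of generator
mu = pow(lam, -1, n)               # inverse of L(g^lam mod n^2) = lam

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n * mu) % n     # L(u) = (u - 1) / n

a, b = 123, 456
ca, cb = encrypt(a), encrypt(b)
print(decrypt((ca * cb) % n2))         # 579 == a + b, computed on ciphertexts
```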
ZHANG Gui-Min , LI Qing-Bao , ZHANG Ping , CHENG San-Jun
2019, 30(11):3518-3534. DOI: 10.13328/j.cnki.jos.005539
Abstract: Code reuse attacks (CRAs) and their defense technologies have been a hot topic in the network security field. However, current defense technologies usually focus on a single type of attack and can easily be bypassed by other attacks. This paper presents a method called RCMon that defends against CRAs based on running-characteristics monitoring to overcome this problem. RCMon defines a running characteristics model (RCMod) according to the principles by which CRAs are realized and designs a safety verification automaton to verify whether the current status satisfies the constraints in the RCMod. In the implementation, monitoring code is instrumented directly into the target executable so that the target program traps into the hypervisor when it reaches monitoring nodes, and the construction of the running characteristics database and the safety verifications are both performed by the hypervisor. The experimental results show that RCMon can effectively detect and defend against most CRAs, with an average performance penalty of about 22%.
GONG Lin-Ming , LI Shun-Dong , DOU Jia-Wei , WANG Dao-Shun
2019, 30(11):3535-3548. DOI: 10.13328/j.cnki.jos.005556
Abstract: Privacy-preserving social-willing tests are studied: Alice and Bob can test whether they are suited to do something jointly in an ideal area without either party revealing any other information about their locations. Nowadays, most mobile intelligent devices come equipped with location (GPS) sensing capabilities, allowing developers to create a wide variety of location-aware applications and services. While location awareness provides novel features and functionality, it opens the door to many privacy nightmares. On many occasions, however, users are not willing to share their actual location or the range of their activities, but only want to determine whether they are able to do something in some area (a place convenient for each user), which is practically one bit of information. Private social-willing protocols provide this functionality without any further information leakage. Firstly, a homomorphic encryption scheme is developed, assisted by a cloud server and based on the intractability of the decisional composite residuosity problem. Then, a novel protocol is proposed based on the developed homomorphic encryption scheme, and its security is proved in the ideal/real model.
HUANG Yan , HE Ze-Wen , ZHANG Wen-Sheng
2019, 30(11):3549-3566. DOI: 10.13328/j.cnki.jos.005835
Abstract: Makeup transfer is the task of transferring a reference makeup to a before-makeup face while maintaining the makeup style. It provides a fast and efficient solution for visualizing candidate makeups and has received extensive academic and industrial attention. However, differences between faces, such as face shape, eyebrow spacing, and lip shape, are more or less ignored by recent works, so facial structure is lost. Moreover, the lack of datasets composed of paired before-makeup and after-makeup images poses additional difficulties. To this end, this study proposes a regional fast makeup transfer multi-way network. Specifically, facial key points are first detected and used to align different faces in an end-to-end manner. Then, three way-specific losses are used to jointly optimize the makeup transfer network. These losses are designed according to the appearance properties of eyeshadow, lipstick, and foundation, which yields better makeup results. Finally, Poisson matting is used to fuse the multi-way outputs. Compared with previous works, the proposed model requires less storage space and runs faster. It preserves the structure of the original face and produces more balanced eyeshadow, more vivid lipstick color, and more detailed foundation makeup. The model is evaluated on two public makeup transfer datasets (VMU and DLMT). The experimental results show that the proposed method achieves better visual effects and also outperforms the alternatives in terms of makeup speed, transferring different styles to the same person, and transferring the same style to different people.
YU Deng , LIU Yu-Jie , XING Min-Min , LI Zong-Min , LI Hua
2019, 30(11):3567-3577. DOI: 10.13328/j.cnki.jos.005570
Abstract: This paper introduces a new approach to free-hand sketch representation for sketch-based image retrieval (SBIR), where sketches are treated as queries to search for natural photos in a natural image dataset. This task is extremely challenging for three main reasons: (1) sketches show a highly abstract visual appearance compared with natural photos, so less context can be extracted as descriptors by existing methods; (2) for the same object, different people draw widely different sketches, making sketch-photo matching harder; (3) mapping sketches and photos into a common domain is also a challenging task. In this study, the cross-domain problem is addressed by mapping sketches and natural photos at multiple layers. For the first time, a multi-layer deep CNN framework is introduced to train multi-layer representations of free-hand sketches and natural photos. The Flickr15k dataset is used as the retrieval benchmark, and it is shown that the learned representation significantly outperforms both hand-crafted features and deep features trained on sketches or photos alone.