Abstract: Software safety is a key property that determines whether software is vulnerable to malicious attacks. Since Internet attacks are now ubiquitous, it is important to evaluate the number and categories of defects in software. Users need to evaluate not only software that is unreleased or recently released, but also software that has been available for some time. For example, when users want to assess the safety of several competing software systems before making a purchase decision, they need a low-cost, objective evaluation approach. In this paper, a natural-language, data-driven approach is proposed for evaluating the safety of software that has already been released. The approach crawls natural language data adaptively and applies dual training to evaluate software safety. Because the self-adaptive Web crawler adjusts its feature words based on feedback and acquires heterogeneous data from search engines, the safety evaluation automatically exploits extensive data sources. Furthermore, by customizing a machine translation model, natural language can be converted efficiently into its semantic encoding. On this basis, a machine learning model is built to evaluate software safety intelligently from the semantic characteristics of natural language. Experiments are conducted on the Common Vulnerabilities and Exposures (CVE) list and the National Vulnerability Database (NVD). The results show that the proposed approach makes precise safety evaluations of the number, impact, and categories of defects in software.