Abstract: With the development of big data and the wide application of machine learning, data in all domains is growing rapidly. High dimensionality is one of its most prominent characteristics, and feature selection is a common preprocessing method for reducing the dimensionality of high-dimensional data. The stability of feature selection, i.e., the robustness of its results with respect to small changes in the composition of the dataset, is an important research direction. Improving stability helps to identify truly relevant features, increases experts' confidence in the results, and can further reduce the complexity and cost of acquiring the original data. This paper reviews current methods for improving the stability of feature selection and presents a classification of these methods, analyzing and comparing the characteristics and scope of application of each category. It then summarizes measures for evaluating the stability of feature selection, analyzes the performance of these stability measures, and validates the effectiveness of four ensemble approaches through experiments. Finally, it discusses the limitations of current work and offers a perspective on future research in this area.