Abstract: Stacking restricted Boltzmann machines (RBMs) to build deep networks, such as deep belief networks (DBNs), has become one of the most important research directions in deep learning. The point-wise gated restricted Boltzmann machine (pgRBM), an RBM variant, can effectively extract task-relevant patterns from data containing irrelevant patterns and thus achieves satisfactory classification results. When the training data consist of both noisy data and clean data, how the clean data can be exploited to improve the performance of the pgRBM is an open problem. To address this problem, this study first proposes a method, named pgRBM based on random-noise data and clean data (pgrncRBM). The pgrncRBM uses an RBM together with the clean data to obtain initial values for the task-relevant weights, so it can learn the "clean" patterns from data corrupted by random noise. In the pgrncRBM, an ordinary RBM is used to pre-train the weights of both the task-relevant and the irrelevant patterns. However, if the noise is itself an image, the pgrncRBM cannot learn the task-relevant patterns from the noisy data. The spike-and-slab RBM (ssRBM), another RBM variant, uses two types of hidden units to determine the mean and covariance of each visible unit. Therefore, this study combines the ssRBM with the pgRBM and proposes a second method, named pgRBM based on image-noise data and clean data (pgincRBM). The pgincRBM uses the ssRBM to model the noise, so it can learn the "clean" patterns from data corrupted by image noise. This study then stacks pgrncRBMs, pgincRBMs, and RBMs to build deep networks, and discusses the feasibility of applying the weight-uncertainty method to prevent overfitting in the proposed networks. Experimental results on the MNIST variation datasets show that pgrncRBM and pgincRBM are effective neural-network learning methods.
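As background for the pre-training step mentioned above, the following is a minimal sketch of training an ordinary binary RBM with one-step contrastive divergence (CD-1) in NumPy. This is a generic RBM, not the authors' pgRBM, pgrncRBM, or pgincRBM; all function names, layer sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden=16, lr=0.1, epochs=50, seed=0):
    """Train a binary RBM with one-step contrastive divergence (CD-1).

    data: (n_samples, n_visible) array of binary values.
    Returns the weight matrix W and the visible/hidden biases b, c.
    """
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b = np.zeros(n_visible)   # visible bias
    c = np.zeros(n_hidden)    # hidden bias
    for _ in range(epochs):
        v0 = data
        # positive phase: sample hidden units given the data
        ph0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one Gibbs step back to the visible layer
        pv1 = sigmoid(h0 @ W.T + b)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + c)
        # CD-1 parameter updates (positive minus negative statistics)
        n = v0.shape[0]
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        b += lr * (v0 - v1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# toy usage: train on random binary data and measure reconstruction error
rng = np.random.default_rng(1)
data = (rng.random((64, 20)) < 0.5).astype(float)
W, b, c = train_rbm(data)
recon = sigmoid(sigmoid(data @ W + c) @ W.T + b)
err = np.mean((data - recon) ** 2)
```

In the pgrncRBM described above, weights learned this way from the clean data would serve as initial values for the task-relevant weights before the gated training begins.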