Abstract: No-reference video quality assessment (NR-VQA) measures distorted videos quantitatively without reference to the original high-quality videos. Conventional NR-VQA methods are generally designed for specific types of distortion or are inconsistent with human perception. This paper introduces the 3D deep convolutional neural network (3D-CNN) into VQA and proposes a 3D-CNN-based NR-VQA method that is not restricted to specific distortion types. First, the proposed method uses 3D patches to learn spatio-temporal features that represent video content effectively. Second, the original 3D-CNN model, which was designed for video classification, is modified to adapt it to the VQA task. Experiments demonstrate that the proposed method is highly consistent with human perception across numerous distortions and metrics. Compared with other state-of-the-art no-reference VQA methods, the proposed method runs much faster while achieving similar performance. As a no-reference method, it is even comparable with many state-of-the-art full-reference VQA methods, which gives it better application prospects.
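To illustrate the core idea of the abstract, extracting spatio-temporal features from 3D (frames × height × width) patches and mapping them to a scalar quality score, the following is a minimal NumPy sketch. The kernel count, patch size, and the toy linear regression head are hypothetical placeholders, not the paper's actual architecture or learned weights.

```python
import numpy as np

def conv3d(patch, kernel):
    """Valid 3D convolution (cross-correlation) of a single-channel volume.

    `patch` spans time, height, and width, so the response captures
    spatio-temporal structure rather than purely spatial structure.
    """
    T, H, W = patch.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(patch[i:i + t, j:j + h, k:k + w] * kernel)
    return out

def predict_quality(patch, kernels, weights, bias):
    """Toy 3D-CNN head: 3D conv -> ReLU -> global average pool -> linear score.

    A real NR-VQA network would stack several learned conv layers; here one
    hand-rolled layer stands in to show the data flow from 3D patch to score.
    """
    feats = np.array([np.maximum(conv3d(patch, k), 0.0).mean() for k in kernels])
    return float(feats @ weights + bias)

# Hypothetical example: one 8-frame 16x16 patch, 4 random 3x3x3 kernels.
rng = np.random.default_rng(0)
patch = rng.random((8, 16, 16))
kernels = rng.standard_normal((4, 3, 3, 3)) * 0.1
weights = rng.standard_normal(4)
score = predict_quality(patch, kernels, weights, bias=3.0)
```

In the actual method, the conv kernels and the regression head would be trained end-to-end on subjectively rated videos, so the pooled 3D features align with human quality judgments.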