Abstract: The performance of image classification is limited by the diversity of visual information and the influence of background noise. Existing works usually apply cross-modal constraints or heterogeneous alignment algorithms to learn visual representations with strong discriminative power. However, differences in the feature distributions of different sources limit the effective learning of visual representations. To address this problem, this paper proposes CMIF, an image classification framework based on cross-modal semantic information inference and fusion. CMIF introduces semantic descriptions of images and statistical knowledge as privileged information, and uses the privileged-information learning paradigm to guide the mapping of image features from the visual space to the semantic space during training; a class-aware information selection (CIS) algorithm is proposed to learn cross-modal enhanced representations of images. To address the problem of heterogeneous feature differences in representation learning, a Partial Heterogeneous Alignment (PHA) algorithm is used to achieve cross-modal alignment between visual features and the semantic features extracted from the privileged information. To further suppress the noise introduced in mapping from the visual space to the semantic space, the graph-fusion-based CIS algorithm selects and reconstructs the key information in the semantic representation, forming an effective complement to the visual prediction. Experiments on the cross-modal classification datasets VireoFood-172 and NUS-WIDE show that CMIF learns robust visual-semantic features and achieves consistent performance improvements on both the convolution-based ResNet-50 and the Transformer-based ViT image classification models.