Abstract: The matrix is a widely used data structure in engineering, and in deep learning, matrix multiplication is one of the key operations in neural network training. When large matrices must be operated on, block matrix techniques can decompose a large-matrix operation into operations on small matrices that can be computed in parallel, which greatly reduces the number of operation steps and improves the efficiency of matrix computation. Firstly, this study systematically surveys existing matrix formalization work in academia and analyzes the main methods of matrix formalization. Secondly, it introduces and improves a matrix formalization method based on the Coq record type, which includes proposing a new definition of matrix equivalence, reorganizing and completing previous formalization work, and proving a new set of lemmas. On this basis, the formalization of block matrix operations is then realized, and the difficulties of this type of inductive proof, together with their solutions, are discussed. Finally, basic libraries of different types for matrix and block matrix formalization are implemented.
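To make the record-based approach mentioned above concrete, the following is a minimal Coq sketch of what a record-type matrix and a non-Leibniz equivalence relation on matrices might look like. All names (`matrix`, `mkMat`, `mget`, `mat_equiv`) are illustrative assumptions, not the paper's actual definitions, which may differ in detail.

```coq
(* A sketch, under assumed names: a matrix is a record bundling its
   dimensions with a lookup function from index pairs to entries. *)
Record matrix (A : Type) : Type := mkMat {
  mrows : nat;                (* number of rows *)
  mcols : nat;                (* number of columns *)
  mget  : nat -> nat -> A     (* entry at row i, column j *)
}.

Arguments mrows {A} _.
Arguments mcols {A} _.
Arguments mget  {A} _ _ _.

(* One possible definition of "matrix equivalence": equal dimensions
   and equal entries at every in-range index. This is weaker than
   Leibniz equality of the records, since the lookup functions may
   differ outside the valid index range. *)
Definition mat_equiv {A : Type} (m n : matrix A) : Prop :=
  mrows m = mrows n /\ mcols m = mcols n /\
  forall i j, i < mrows m -> j < mcols m ->
    mget m i j = mget n i j.
```

Defining equivalence extensionally in this way is a common design choice for function-backed matrix representations, because it lets lemmas be stated up to entrywise equality without requiring functional extensionality axioms.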