[Keywords]
[Abstract]
This study presents implementation and optimization methods for batched lower-upper (LU) matrix decomposition and batched matrix inversion algorithms on the graphics processing unit (GPU). For batched LU decomposition, the study analyzes the number of global-memory reads and writes required when the Left-looking, Right-looking, and other commonly used blocked LU decomposition algorithms are implemented on the GPU; given the characteristics of the GPU architecture, the blocked Left-looking algorithm is selected for its smaller volume of memory traffic. For the pivoting step of LU decomposition, a parallel binary tree search algorithm suited to the GPU architecture is adopted. In addition, to reduce the performance impact of the row interchanges caused by pivoting, this study proposes two optimization techniques: warp-grouped row interchange and delayed row interchange. For batched inversion after LU decomposition, this study analyzes the correction method employed in the matrix inversion process; to reduce global-memory accesses during correction, the GPU implementation adopts a blocked matrix inversion algorithm with delayed correction. Furthermore, to speed up data reads and writes, the implementation makes greater use of registers and shared memory, and reduces the volume of memory accesses through a column interchange optimization. Finally, to avoid idle threads and the waste of shared memory and other GPU resources, a method of dynamic GPU resource allocation at runtime is proposed; compared with static one-time resource allocation, it improves performance significantly.
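To make the factorization described above concrete, the numerics of left-looking LU with partial pivoting can be sketched as a sequential CPU-side reference in NumPy. This is an illustrative sketch only (the function name and layout are ours, and it is not the batched CUDA kernel); it does show the two steps the paper's optimizations target: the "look left" column update and the full-row interchange triggered by pivoting.

```python
import numpy as np

def lu_left_looking(A):
    """Left-looking (jki) LU with partial pivoting.

    Each column is updated once with all previously computed columns of L
    (the "look left" step), then the pivot is chosen and whole rows are
    swapped -- the row-interchange cost that warp-grouped and delayed
    interchanges aim to reduce on the GPU.
    Returns the pivot permutation and L, U packed into one matrix.
    """
    LU = np.array(A, dtype=float)
    n = LU.shape[0]
    piv = np.arange(n)
    for j in range(n):
        # look left: fold the already-computed columns 0..j-1 into column j
        for k in range(j):
            LU[k + 1:, j] -= LU[k + 1:, k] * LU[k, j]
        # partial pivoting: largest |entry| in the unfactored part of column j
        p = j + int(np.argmax(np.abs(LU[j:, j])))
        if p != j:
            LU[[j, p], :] = LU[[p, j], :]  # full-row interchange
            piv[[j, p]] = piv[[p, j]]
        # scale the subdiagonal entries to form column j of unit-lower L
        LU[j + 1:, j] /= LU[j, j]
    return piv, LU
```

Correctness can be checked by reconstruction: with `L = tril(LU, -1) + I` and `U = triu(LU)`, the product `L @ U` equals `A` with its rows permuted by `piv`.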
Finally, 10000 random matrices with sizes between 33 and 190 are tested on a TITAN V GPU, with four data types: single-precision complex, double-precision complex, single-precision real, and double-precision real. The batched LU decomposition algorithm implemented in this study reaches a floating-point performance of about 2 TFLOPS, 1.2 TFLOPS, 1 TFLOPS, and 0.67 TFLOPS, respectively; its highest speedups over the CUBLAS implementation are about 9×, 8×, 12×, and 13×, respectively, and its speedups over the MAGMA implementation are about 1.2×–2.5×, 1.2×–3.2×, 1.1×–3×, and 1.1×–2.7×, respectively. The batched inversion algorithm reaches about 4 TFLOPS, 2 TFLOPS, 2.2 TFLOPS, and 1.2 TFLOPS, respectively; its highest speedups over CUBLAS are about 5×, 4×, 7×, and 7×, respectively, and its speedups over MAGMA are about 2×–3×, 2×–3×, 2.8×–3.4×, and 1.6×–2×, respectively.
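The parallel binary tree search used for pivot selection above can be illustrated with a pure-Python simulation of the reduction pattern: each round compares pairs of candidates and halves the active set, so log2(n) rounds replace a sequential scan. On the GPU, each comparison in a round would be performed by a separate thread over shared memory; the sketch below (a scalar simulation, not the CUDA kernel) executes the same schedule serially.

```python
def tree_pivot_search(column):
    """Binary-tree argmax over |column|: the reduction pattern behind a
    parallel GPU pivot search. Each round compares pairs at distance
    `stride` and keeps the larger magnitude, until one candidate remains."""
    vals = [abs(v) for v in column]
    idx = list(range(len(column)))
    n = len(vals)
    stride = 1
    while stride < n:
        # on the GPU, every iteration of this inner loop is one thread
        for i in range(0, n, 2 * stride):
            j = i + stride
            if j < n and vals[j] > vals[i]:
                vals[i], idx[i] = vals[j], idx[j]
        stride *= 2
    return idx[0]  # row index of the pivot (largest magnitude)
```

For example, `tree_pivot_search([3.0, -7.5, 2.0, 5.0])` returns 1, the index of the entry with the largest absolute value.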
[CLC number]
TP301
[Foundation items]
National Key Research and Development Program of China (2020YFB0204802, 2017YFB0202202); Strategic Priority Research Program of the Chinese Academy of Sciences, Category C (XDC05000000); Guanghe Fund, Class A (20210701)