Abstract: GitHub is one of the most popular open-source project management platforms. To support team collaboration, GitHub provides an issue tracking function that lets project users report and track bugs or new feature requests. When resolving an issue, contributors to open-source projects typically need to run failure-reproducing test cases to reproduce the problem described in the issue and to verify whether it has been resolved. However, an empirical study on the SWE-bench Lite dataset reveals that nearly 90% of issues are submitted without failure-reproducing test cases, forcing contributors to write such test cases themselves when resolving issues and adding to their workload. Existing methods for generating failure-reproducing test cases usually rely on stack trace information, but GitHub issues are not required to include such information. This study therefore proposes an LLM-based method for automatically generating failure-reproducing test cases for GitHub issues, helping contributors reproduce, understand, and verify issues and improving the efficiency of issue resolution. The method first retrieves diverse code context related to the issue, including the root-cause functions of the failure, import statements, and example test cases, and then constructs precise prompts that guide the large language model to generate effective failure-reproducing test cases. Comparative and ablation experiments verify the effectiveness of the method in generating failure-reproducing test cases for GitHub issues.
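To make the retrieve-then-prompt pipeline concrete, the following is a minimal sketch of the prompt-construction step, under stated assumptions: the function name `build_repro_prompt`, the prompt wording, and all example inputs are illustrative inventions, not the paper's actual implementation, and the retrieved context (root-cause function, imports, example test) is assumed to have been gathered by an earlier retrieval stage.

```python
# Hypothetical sketch of the prompt-construction step described in the abstract.
# All names and the prompt template are illustrative assumptions; the paper's
# actual retrieval and prompting details may differ.

def build_repro_prompt(issue_text: str,
                       root_function_src: str,
                       import_statements: list[str],
                       example_test: str) -> str:
    """Assemble a prompt asking an LLM for a failure-reproducing test case."""
    imports = "\n".join(import_statements)
    return (
        "You are given a GitHub issue and code context from the project.\n"
        "Write a test case that currently FAILS, reproducing the issue, and\n"
        "that should PASS once the issue is fixed.\n\n"
        f"### Issue\n{issue_text}\n\n"
        f"### Suspected root-cause function\n{root_function_src}\n\n"
        f"### Relevant imports\n{imports}\n\n"
        f"### Example test from the project\n{example_test}\n"
    )

if __name__ == "__main__":
    # Illustrative inputs standing in for the output of the retrieval stage.
    prompt = build_repro_prompt(
        issue_text="parse('') raises IndexError instead of ValueError",
        root_function_src="def parse(s):\n    return s.split()[0]",
        import_statements=["import pytest", "from mypkg.parser import parse"],
        example_test=("def test_parse_word():\n"
                      "    assert parse('hello world') == 'hello'"),
    )
    print(prompt)  # this prompt would then be sent to the LLM
```

The key design point the abstract describes is that the prompt carries issue-specific code context rather than a stack trace, so the LLM can ground the generated test in the project's actual APIs and testing conventions.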