Abstract: Derivative-free optimization is commonly employed in tasks such as black-box tuning of language-model-as-a-service and hyper-parameter tuning of machine learning models, where the mapping from the solution space of the optimization task to the performance indicator is intricate, making it difficult to formulate an explicit objective function. Accurate and stable evaluation of solutions is crucial for derivative-free optimization methods. Evaluating the quality of a solution often requires running the model on the entire dataset, and the optimization process may require a large number of such evaluations. As machine learning models grow more complex and training datasets expand, the time and computational cost of accurate and stable solution evaluation escalates, running counter to the principle of green, low-carbon machine learning and optimization. In view of this, this study proposes GRACE, a green derivative-free optimization framework with dynamic batch evaluation. Based on the similarity of training subsets, GRACE adaptively and dynamically adjusts the sample size used to evaluate solutions during optimization, preserving optimization performance while reducing optimization and computational costs, thereby achieving green, low-carbon, and efficient optimization. Experiments are conducted on tasks including black-box tuning of language-model-as-a-service and hyper-parameter optimization of models. Comparisons with baseline methods and degraded variants of GRACE verify its effectiveness, efficiency, and green, low-carbon merits. The results also demonstrate the robustness of GRACE to its own hyper-parameters.
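To make the mechanism concrete, the following minimal Python sketch illustrates one way a derivative-free optimizer could adjust its evaluation batch size from the similarity of random training subsets. It is an illustrative assumption, not GRACE's actual algorithm: the names (`subset_similarity`, `dynamic_batch_eval`), the cosine-of-feature-means similarity proxy, the halving/doubling rule, the 0.95 threshold, and the random-search loop are all hypothetical stand-ins for the paper's method.

```python
import numpy as np

def subset_similarity(batch_a, batch_b):
    # Illustrative proxy: cosine similarity between the feature means of two
    # random training subsets (the paper's actual similarity measure may differ).
    mu_a, mu_b = batch_a.mean(axis=0), batch_b.mean(axis=0)
    denom = np.linalg.norm(mu_a) * np.linalg.norm(mu_b) + 1e-12
    return float(mu_a @ mu_b / denom)

def dynamic_batch_eval(loss_fn, solution, data, batch_size,
                       min_size=32, max_size=4096, sim_threshold=0.95, rng=None):
    # If two random subsets of the current size look alike, a smaller batch
    # already gives a stable quality estimate, so shrink the batch; otherwise
    # grow it to stabilize the evaluation. The halving/doubling rule and the
    # threshold are hypothetical choices for illustration.
    rng = rng or np.random.default_rng()
    n = len(data)
    a = data[rng.choice(n, size=batch_size, replace=False)]
    b = data[rng.choice(n, size=batch_size, replace=False)]
    if subset_similarity(a, b) >= sim_threshold:
        batch_size = max(min_size, batch_size // 2)
    else:
        batch_size = min(max_size, batch_size * 2)
    batch = data[rng.choice(n, size=batch_size, replace=False)]
    return loss_fn(solution, batch), batch_size

if __name__ == "__main__":
    # Toy usage: random search stands in for any derivative-free optimizer.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 8))                          # toy dataset
    loss_fn = lambda x, batch: float(((batch @ x) ** 2).mean())  # toy objective
    best_loss, batch_size = np.inf, 512
    for _ in range(50):
        x = rng.normal(size=8)                                   # candidate solution
        loss, batch_size = dynamic_batch_eval(loss_fn, x, data, batch_size, rng=rng)
        best_loss = min(best_loss, loss)
    print(f"best loss {best_loss:.4f}, final batch size {batch_size}")
```

In this sketch the batch size carries over between evaluations, so the evaluation cost adapts over the whole run rather than per call; that design choice mirrors the abstract's description of dynamically adjusting the sample size during optimization, but the concrete update rule is an assumption.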