高维贝叶斯优化研究综述
作者: 陈泉霖, 陈奕宇, 霍静, 曹宏业, 高阳, 李栋, 郝建业
中图分类号: TP311

基金项目: 科技部2030新一代人工智能项目(2021ZD0113303); 国家自然科学基金(62192783, 62276128); 中央高校基础研究基金(14380128)


Survey on High-dimensional Bayesian Optimization
Author: CHEN Quan-Lin, CHEN Yi-Yu, HUO Jing, CAO Hong-Ye, GAO Yang, LI Dong, HAO Jian-Ye
    摘要:

    贝叶斯优化是一种优化黑盒函数的技术, 高效的样本利用率使其在众多科学和工程领域中得到了广泛应用, 如深度模型调参、化合物设计、药物开发和材料设计等. 然而, 当输入空间维度较高时, 贝叶斯优化的性能会显著下降. 为了克服这一限制, 许多研究对贝叶斯优化方法进行了高维扩展. 为了深入剖析高维贝叶斯优化的研究方法, 根据不同工作的假设与特征将高维贝叶斯优化方法分为3类: 基于有效低维度假设的方法、基于加性假设的方法以及基于局部搜索的方法, 并对这些方法进行阐述和分析. 首先着重分析这3类方法的研究进展, 然后比较各类方法在贝叶斯优化应用中的优劣势, 最后总结当前阶段高维贝叶斯优化的主要研究趋势, 并对未来发展方向展开讨论.
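    下面给出一个极简的贝叶斯优化循环示意代码(基于 NumPy/SciPy 的假设性草图, 并非文中任何方法的官方实现): 以零均值高斯过程作为代理模型, 在随机候选点上最大化期望改进(EI)采集函数以选取下一个评估点; 其中目标函数 objective、核长度尺度、候选点数量等均为演示用的假设取值.

```python
# 假设性示意代码: 高斯过程代理模型 + 期望改进(EI)的基本贝叶斯优化循环(最小化问题)
import numpy as np
from scipy.stats import norm

def objective(x):
    # 假设的黑盒目标函数, 仅用于演示
    return np.sum((x - 0.3) ** 2)

def rbf_kernel(A, B, ls=0.2, var=1.0):
    # RBF 核: k(a, b) = var * exp(-||a - b||^2 / (2 * ls^2))
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # 零均值高斯过程在查询点 Xq 处的后验均值与标准差
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(Xq, Xq)) - np.sum(v**2, 0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # 最小化问题下的期望改进
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

dim, rng = 2, np.random.default_rng(0)
X = rng.uniform(0, 1, (5, dim))                  # 初始设计点
y = np.array([objective(x) for x in X])
for t in range(20):                              # 贝叶斯优化主循环
    cand = rng.uniform(0, 1, (512, dim))         # 在随机候选集上最大化 EI
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
print("best value:", y.min())
```

    当输入维度升高时, 上述标准流程中高斯过程建模与采集函数优化的难度都会迅速增加, 这正是下文 3 类高维扩展方法试图缓解的问题.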

    Abstract:

    Bayesian optimization is a technique for optimizing black-box functions. Thanks to its high sample efficiency, it has been widely applied in numerous scientific and engineering fields, such as hyperparameter tuning of deep models, compound design, drug development, and material design. However, the performance of Bayesian optimization deteriorates significantly when the input space is high-dimensional. To overcome this limitation, numerous studies have extended Bayesian optimization to high-dimensional settings. To provide an in-depth analysis of high-dimensional Bayesian optimization, this study categorizes existing methods into three types according to their assumptions and characteristics: methods based on the effective low-dimensionality assumption, methods based on additive assumptions, and methods based on local search, and then elaborates on and analyzes these methods. This study first focuses on the research progress of the three types of methods, then compares the advantages and disadvantages of each type in Bayesian optimization applications, and finally summarizes the main research trends of high-dimensional Bayesian optimization at the current stage and discusses future development directions.
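    作为上述"基于有效低维度假设"一类方法的示意, 下面给出一个随机嵌入(random embedding)风格的极简草图(假设性示例, 并非文献[23]中 REMBO 的官方实现): 假设高维目标只依赖少数有效维度, 于是在低维空间中搜索, 并通过随机矩阵映射回原始高维空间求值; 为保持示例简短, 低维空间中的搜索用随机采样代替了真正的贝叶斯优化循环(实际使用时可替换为上文的 BO 循环).

```python
# 假设性示意代码: 随机嵌入思想, 在 d 维低维空间中搜索, 经随机矩阵 A 映射回 D 维原始空间求值
import numpy as np

D, d = 100, 4                                    # 原始维度 D 与假设的有效低维 d
rng = np.random.default_rng(1)
A = rng.normal(size=(D, d))                      # 随机嵌入矩阵

def f_high(x):
    # 假设的高维黑盒函数: 实际上只依赖前两个维度(存在有效低维结构)
    return (x[0] - 0.5) ** 2 + (x[1] + 0.2) ** 2

def f_low(z):
    # 低维点 z 映射回高维, 裁剪到原始搜索域 [-1, 1]^D 后再求值
    x = np.clip(A @ z, -1.0, 1.0)
    return f_high(x)

# 低维空间中的搜索: 此处用随机采样代替贝叶斯优化循环以保持示例简短
best_y = min(f_low(rng.uniform(-1.0, 1.0, d)) for _ in range(2000))
print("best value found in embedded space:", best_y)
```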

    参考文献
    [1] Močkus J. On Bayesian methods for seeking the extremum. In: Proc. of the 1975 IFIP Technical Conf. on Optimization Techniques. Novosibirsk: Springer, 1975. 400–404. [doi: 10.1007/3-540-07165-2_55]
    [2] Snoek J, Larochelle H, Adams RP. Practical Bayesian optimization of machine learning algorithms. In: Proc. of the 25th Int’l Conf. on Neural Information Processing Systems. Lake Tahoe: ACM, 2012. 2951–2959.
    [3] Hutter F, Hoos HH, Leyton-Brown K. Sequential model-based optimization for general algorithm configuration. In: Proc. of the 5th Int’l Conf. on Learning and Intelligent Optimization. Rome: Springer, 2011. 507–523. [doi: 10.1007/978-3-642-25566-3_40]
    [4] Klein A, Falkner S, Bartels S, Hennig P, Hutter F. Fast Bayesian optimization of machine learning hyperparameters on large datasets. In: Proc. of the 20th Int’l Conf. on Artificial Intelligence and Statistics. Fort Lauderdale: PMLR, 2017. 528–536.
    [5] Letham B, Karrer B, Ottoni G, Bakshy E. Constrained Bayesian optimization with noisy experiments. Bayesian Analysis, 2019, 14(2): 495–519.
    [6] Negoescu DM, Frazier PI, Powell WB. The knowledge-gradient algorithm for sequencing experiments in drug discovery. INFORMS Journal on Computing, 2011, 23(3): 346–363.
    [7] Kandasamy K, Neiswanger W, Schneider J, Póczos B, Xing EP. Neural architecture search with Bayesian optimisation and optimal transport. In: Proc. of the 32nd Int’l Conf. on Neural Information Processing Systems. Montréal: ACM, 2018. 2020–2029.
    [8] Zhou HP, Yang MH, Wang J, Pan W. BayesNAS: A Bayesian approach for neural architecture search. In: Proc. of the 36th Int’l Conf. on Machine Learning. Long Beach: PMLR, 2019. 7603–7613.
    [9] Gómez-Bombarelli R, Wei JN, Duvenaud D, Hernández-Lobato JM, Sánchez-Lengeling B, Sheberla D, Aguilera-Iparraguirre J, Hirzel TD, Adams RP, Aspuru-Guzik A. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 2018, 4(2): 268–276.
    [10] Grosnit A, Tutunov R, Maraval AM, Griffiths RR, Cowen-Rivers AI, Yang L, Zhu L, Lyu WL, Chen ZT, Wang J, Peters J, Bou-Ammar H. High-dimensional Bayesian optimisation with variational autoencoders and deep metric learning. arXiv:2106.03609, 2021.
    [11] Ru BX, Cobb AD, Blaas A, Gal Y. BayesOpt adversarial attack. In: Proc. of the 8th Int’l Conf. on Learning Representations. Addis Ababa: OpenReview.net, 2020.
    [12] Lizotte D, Wang T, Bowling M, Schuurmans D. Automatic gait optimization with Gaussian process regression. In: Proc. of the 20th Int’l Joint Conf. on Artifical Intelligence. Hyderabad: ACM, 2007. 944–949.
    [13] Calandra R, Seyfarth A, Peters J, Deisenroth MP. Bayesian optimization for learning gaits under uncertainty: An experimental comparison on a dynamic bipedal walker. Annals of Mathematics and Artificial Intelligence, 2016, 76(1): 5–23.
    [14] Jaquier N, Rozo LD, Calinon S, Bürger M. Bayesian optimization meets Riemannian manifolds in robot learning. In: Proc. of the 3rd Annual Conf. on Robot Learning. Osaka: PMLR, 2019. 233–246.
    [15] Yogatama D, Kong LP, Smith NA. Bayesian optimization of text representations. In: Proc. of the 2015 Conf. on Empirical Methods in Natural Language Processing. Lisbon: Association for Computational Linguistics, 2015. 2100–2105. [doi: 10.18653/v1/D15-1251]
    [16] Wilson A, Fern A, Tadepalli P. Using trajectory data to improve Bayesian optimization for reinforcement learning. The Journal of Machine Learning Research, 2014, 15(1): 253–282.
    [17] Marco A, Berkenkamp F, Hennig P, Schoellig AP, Krause A, Schaal S, Trimpe S. Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization. In: Proc. of the 2017 IEEE Int’l Conf. on Robotics and Automation. Singapore: IEEE, 2017. 1557–1563. [doi: 10.1109/ICRA.2017.7989186]
    [18] Frazier PI. A tutorial on Bayesian optimization. arXiv:1807.02811, 2018.
    [19] Kandasamy K, Schneider J, Póczos B. High dimensional Bayesian optimisation and bandits via additive models. In: Proc. of the 32nd Int’l Conf. on Machine Learning. Lille: ACM, 2015. 295–304.
    [20] Hutter F, Hoos HH, Leyton-Brown K. Automated configuration of mixed integer programming solvers. In: Proc. of the 7th Int’l Conf. on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems. Bologna: Springer, 2010. 186–202. [doi: 10.1007/978-3-642-13520-0_23]
    [21] Bergstra J, Yamins D, Cox DD. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In: Proc. of the 30th Int’l Conf. on Machine Learning. Atlanta: ACM, 2013. 115–123.
    [22] González J, Longworth J, James DC, Lawrence ND. Bayesian optimization for synthetic gene design. arXiv:1505.01627, 2015.
    [23] Wang ZY, Hutter F, Zoghi M, Matheson D, De Freitas N. Bayesian optimization in a billion dimensions via random embeddings. Journal of Artificial Intelligence Research, 2016, 55: 361–387.
    [24] Eriksson D, Pearce M, Gardner JR, Turner R, Poloczek M. Scalable global optimization via local Bayesian optimization. In: Proc. of the 33rd Int’l Conf. on Neural Information Processing Systems. Vancouver: ACM, 2019. 493.
    [25] Shahriari B, Swersky K, Wang ZY, Adams RP, De Freitas N. Taking the human out of the loop: A review of Bayesian optimization. Proc. of the IEEE, 2016, 104(1): 148–175.
    [26] Rasmussen CE, Williams CKI. Gaussian Processes for Machine Learning. Cambridge: MIT Press, 2006.
    [27] Bergstra J, Bardenet R, Bengio Y, Kégl B. Algorithms for hyper-parameter optimization. In: Proc. of the 24th Int’l Conf. on Neural Information Processing Systems. Granada: ACM, 2011. 2546–2554.
    [28] Watanabe S, Hutter F. c-TPE: Tree-structured Parzen estimator with inequality constraints for expensive hyperparameter optimization. In: Proc. of the 32nd Int’l Joint Conf. on Artificial Intelligence. Macao: ACM, 2023. 486. [doi: 10.24963/ijcai.2023/486]
    [29] Snoek J, Rippel O, Swersky K, Kiros R, Satish N, Sundaram N, Patwary MMA, Prabhat P, Adams RP. Scalable Bayesian optimization using deep neural networks. In: Proc. of the 32nd Int’l Conf. on Machine Learning. Lille: ACM, 2015. 2171–2180.
    [30] Springenberg JT, Klein A, Falkner S, Hutter F. Bayesian optimization with robust Bayesian neural networks. In: Proc. of the 30th Int’l Conf. on Neural Information Processing Systems. Barcelona: ACM, 2016. 4141–4149.
    [31] Wilson AG, Hu ZT, Salakhutdinov R, Xing EP. Deep kernel learning. In: Proc. of the 19th Int’l Conf. on Artificial Intelligence and Statistics. Cadiz: JMLR, 2016. 370–378.
    [32] 陆大䋮, 张颢. 随机过程及其应用. 第2版, 北京: 清华大学出版社, 2012.
    Lu DJ, Zhang H. Random Process and Its Application. 2nd ed., Beijing: Tsinghua University Press, 2012 (in Chinese).
    [33] Snelson EL, Ghahramani Z. Sparse Gaussian processes using pseudo-inputs. In: Proc. of the 18th Int’l Conf. on Neural Information Processing Systems. Vancouver: ACM, 2005. 1257–1264.
    [34] Titsias MK. Variational learning of inducing variables in sparse Gaussian processes. In: Proc. of the 12th Int’l Conf. on Artificial Intelligence and Statistics. Clearwater Beach: JMLR, 2009. 567–574.
    [35] Burt DR, Rasmussen CE, van der Wilk M. Convergence of sparse variational inference in Gaussian processes regression. The Journal of Machine Learning Research, 2020, 21(1): 131.
    [36] Saatçi Y. Scalable Inference for Structured Gaussian Process Models. Cambridge: University of Cambridge, 2011.
    [37] Snoek J, Swersky K, Zemel RS, Adams RP. Input warping for Bayesian optimization of non-stationary functions. In: Proc. of the 31st Int’l Conf. on Machine Learning. Beijing: ACM, 2014. 1674–1682.
    [38] Oh C, Gavves E, Welling M. BOCK: Bayesian optimization with cylindrical kernels. In: Proc. of the 35th Int’l Conf. on Machine Learning. Stockholm: PMLR, 2018. 3865–3874.
    [39] Bull AD. Convergence rates of efficient global optimization algorithms. The Journal of Machine Learning Research, 2011, 12: 2879–2904.
    [40] Berkenkamp F, Schoellig AP, Krause A. No-regret Bayesian optimization with unknown hyperparameters. The Journal of Machine Learning Research, 2019, 20(1): 1868–1891.
    [41] Jones DR, Schonlau M, Welch WJ. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 1998, 13(4): 455–492.
    [42] Viana FAC, Haftka RT. Surrogate-based optimization with parallel simulations using the probability of improvement. In: Proc. of the 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conf. Fort Worth: AIAA, 2010. 9392. [doi: 10.2514/6.2010-9392]
    [43] Srinivas N, Krause A, Kakade SM, Seeger MW. Gaussian process optimization in the bandit setting: No regret and experimental design. In: Proc. of the 27th Int’l Conf. on Machine Learning. Haifa: ACM, 2010. 1015–1022.
    [44] Györfi L, Kohler M, Krzyzak A, Walk H. A Distribution-Free Theory of Nonparametric Regression. New York: Springer, 2002. [doi: 10.1007/b97848]
    [45] Jones DR, Perttunen CD, Stuckman BE. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 1993, 79(1): 157–181.
    [46] Li C, Gupta S, Rana S, Nguyen V, Venkatesh S, Shilton A. High dimensional Bayesian optimization using dropout. In: Proc. of the 26th Int’l Joint Conf. on Artificial Intelligence. Melbourne: ACM, 2017. 2096–2102.
    [47] Rana S, Li C, Gupta S, Nguyen V, Venkatesh S. High dimensional Bayesian optimization with elastic Gaussian process. In: Proc. of the 34th Int’l Conf. on Machine Learning. Sydney: ACM, 2017. 2883–2891.
    [48] Siivola E, Paleyes A, González J, Vehtari A. Good practices for Bayesian optimization of high dimensional structured spaces. Applied AI Letters, 2021, 2(2): e24.
    [49] Qian H, Hu YQ, Yu Y. Derivative-free optimization of high-dimensional non-convex functions by sequential random embeddings. In: Proc. of the 25th Int’l Joint Conf. on Artificial Intelligence. New York: ACM, 2016. 1946–1952.
    [50] Woodruff DP. Sketching as a tool for numerical linear algebra. Foundations and Trends® in Theoretical Computer Science, 2014, 10(1–2): 1–157.
    [51] Nayebi A, Munteanu A, Poloczek M. A framework for Bayesian optimization in embedded subspaces. In: Proc. of the 36th Int’l Conf. on Machine Learning. Long Beach: PMLR, 2019. 4752–4761.
    [52] Letham B, Calandra R, Rai A, Bakshy E. Re-examining linear embeddings for high-dimensional Bayesian optimization. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: ACM, 2020. 131.
    [53] Papenmeier L, Nardi L, Poloczek M. Increasing the scope as you learn: Adaptive Bayesian optimization in nested subspaces. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: ACM, 2022. 842.
    [54] Binois M, Ginsbourger D, Roustant O. On the choice of the low-dimensional domain for global optimization via random embeddings. Journal of Global Optimization, 2020, 76(1): 69–90.
    [55] Binois M, Ginsbourger D, Roustant O. A warped kernel improving robustness in Bayesian optimization via random embeddings. In: Proc. of the 9th Int’l Conf. on Learning and Intelligent Optimization. Lille: Springer, 2015. 281–286. [doi: 10.1007/978-3-319-19084-6_28]
    [56] Shen YH, Kingsford C. Computationally efficient high-dimensional Bayesian optimization via variable selection. In: Proc. of the 2nd Int’l Conf. on Automated Machine Learning. Potsdam: PMLR, 2023. 15.
    [57] Hansen N. The CMA evolution strategy: A tutorial. arXiv:1604.00772, 2016.
    [58] Garnett R, Osborne MA, Hennig P. Active learning of linear embeddings for Gaussian processes. In: Proc. of the 30th Conf. on Uncertainty in Artificial Intelligence. Quebec City: ACM, 2014. 230–239.
    [59] Djolonga J, Krause A, Cevher V. High-dimensional Gaussian process bandits. In: Proc. of the 26th Int’l Conf. on Neural Information Processing Systems. Lake Tahoe: ACM, 2013. 1025–1033.
    [60] Zhang M, Li HQ, Su SW. High dimensional Bayesian optimization via supervised dimension reduction. In: Proc. of the 28th Int’l Joint Conf. on Artificial Intelligence. Macao: IJCAI, 2019. 4292–4298.
    [61] Li KC. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 1991, 86(414): 316–327.
    [62] Moriconi R, Deisenroth MP, Kumar KSS. High-dimensional Bayesian optimization using low-dimensional feature spaces. Machine Learning, 2020, 109(9–10): 1925–1943.
    [63] Swersky K, Snoek J, Adams RP. Multi-task Bayesian optimization. In: Proc. of the 26th Int’l Conf. on Neural Information Processing Systems. Lake Tahoe: ACM, 2013. 2004–2012.
    [64] Kajino H. Molecular hypergraph grammar with its application to molecular optimization. In: Proc. of the 36th Int’l Conf. on Machine Learning. Long Beach: PMLR, 2019. 3183–3191.
    [65] Griffiths RR, Hernández-Lobato JM. Constrained Bayesian optimization for automatic chemical design using variational autoencoders. Chemical Science, 2020, 11(2): 577–586.
    [66] Kusner MJ, Paige B, Hernández-Lobato JM. Grammar variational autoencoder. In: Proc. of the 34th Int’l Conf. on Machine Learning. Sydney: ACM, 2017. 1945–1954.
    [67] Dai HJ, Tian YT, Dai B, Skiena S, Song L. Syntax-directed variational autoencoder for structured data. In: Proc. of the 6th Int’l Conf. on Learning Representations. Vancouver: ICLR, 2018.
    [68] Jin WG, Barzilay R, Jaakkola TS. Junction tree variational autoencoder for molecular graph generation. In: Proc. of the 35th Int’l Conf. on Machine Learning. Stockholm: PMLR, 2018. 2328–2337.
    [69] Mahmood O, Hernández-Lobato JM. A COLD approach to generating optimal samples. arXiv:1905.09885, 2019.
    [70] Daxberger E, Hernández-Lobato JM. Bayesian variational autoencoders for unsupervised out-of-distribution detection. arXiv:1912.05651, 2019.
    [71] Eismann S, Levy D, Shu R, Bartzsch S, Ermon S. Bayesian optimization and attribute adjustment. In: Proc. of the 34th Conf. on Uncertainty in Artificial Intelligence. Monterey: AUAI Press, 2018. 1042–1052.
    [72] Tripp A, Daxberger E, Hernández-Lobato JM. Sample-efficient optimization in the latent space of deep generative models via weighted retraining. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: ACM, 2020. 945.
    [73] Kirschner J, Mutny M, Hiller N, Ischebeck R, Krause A. Adaptive and safe Bayesian optimization in high dimensions via one-dimensional subspaces. In: Proc. of the 36th Int’l Conf. on Machine Learning. Long Beach: PMLR, 2019. 3429–3438.
    [74] Eriksson D, Jankowiak M. High-dimensional Bayesian optimization with sparse axis-aligned subspaces. In: Proc. of the 37th Conf. on Uncertainty in Artificial Intelligence. AUAI Press, 2021. 493–503.
    [75] Gardner JR, Guo C, Weinberger KQ, Garnett R, Grosse RB. Discovering and exploiting additive structure for Bayesian optimization. In: Proc. of the 20th Int’l Conf. on Artificial Intelligence and Statistics. Fort Lauderdale: PMLR, 2017. 1311–1319.
    [76] 茆诗松, 汤银才. 贝叶斯统计. 第2版, 北京: 中国统计出版社, 2012.
    Mao SS, Tang YC. Bayesian Statistics. 2nd ed., Beijing: China Statistics Press, 2012 (in Chinese).
    [77] Wang Z, Li CT, Jegelka S, Kohli P. Batched high-dimensional Bayesian optimization via structural kernel learning. In: Proc. of the 34th Int’l Conf. on Machine Learning. Sydney: ACM, 2017. 3656–3664.
    [78] Rolland P, Scarlett J, Bogunovic I, Cevher V. High-dimensional Bayesian optimization via additive models with overlapping groups. In: Proc. of the 21st Int’l Conf. on Artificial Intelligence and Statistics. Playa Blanca: PMLR, 2018. 298–307.
    [79] Han E, Arora I, Scarlett J. High-dimensional Bayesian optimization via tree-structured additive models. In: Proc. of the 35th AAAI Conf. on Artificial Intelligence. AAAI Press, 2021. 7630–7638. [doi: 10.1609/aaai.v35i9.16933]
    [80] Ziomek JK, Bou-Ammar H. Are random decompositions all we need in high dimensional Bayesian optimisation? In: Proc. of the 40th Int’l Conf. on Machine Learning. Honolulu: ACM, 2023. 1825.
    [81] Li CL, Kandasamy K, Póczos B, Schneider JG. High dimensional Bayesian optimization via restricted projection pursuit models. In: Proc. of the 19th Int’l Conf. on Artificial Intelligence and Statistics. Cadiz: JMLR, 2016. 884–892.
    [82] Gilboa E, Saatçi Y, Cunningham JP. Scaling multidimensional Gaussian processes using projected additive approximations. In: Proc. of the 30th Int’l Conf. on Machine Learning. Atlanta: ACM, 2013. I-454–I-461.
    [83] Wan XC, Nguyen V, Ha H, Ru BX, Lu C, Osborne MA. Think global and act local: Bayesian optimisation over high-dimensional categorical and mixed search spaces. In: Proc. of the 38th Int’l Conf. on Machine Learning. PMLR, 2021. 10663–10674.
    [84] Munos R. Optimistic optimization of a deterministic function without the knowledge of its smoothness. In: Proc. of the 24th Int’l Conf. on Neural Information Processing Systems. Granada: ACM, 2011. 783–791.
    [85] Wang ZY, Shakibi B, Jin L, De Freitas N. Bayesian multi-scale optimistic optimization. In: Proc. of the 17th Int’l Conf. on Artificial Intelligence and Statistics. Reykjavik: JMLR, 2014. 1005–1014.
    [86] Kawaguchi K, Kaelbling LP, Lozano-Pérez T. Bayesian optimization with exponential convergence. In: Proc. of the 28th Int’l Conf. on Neural Information Processing Systems. Montreal: ACM, 2015. 2809–2817.
    [87] Kim B, Lee K, Lim S, Kaelbling L, Lozano-Pérez T. Monte Carlo tree search in continuous spaces using Voronoi optimistic optimization with regret bounds. In: Proc. of the 34th AAAI Conf. on Artificial Intelligence. New York: AAAI Press, 2020. 9916–9924.
    [88] Wang LN, Fonseca R, Tian YD. Learning search space partition for black-box optimization using Monte Carlo tree search. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: ACM, 2020. 1637.
    [89] Slivkins A. Introduction to multi-armed bandits. Foundations and Trends® in Machine Learning, 2019, 12(1–2): 1–286.
    [90] Wang Z, Gehring C, Kohli P, Jegelka S. Batched large-scale Bayesian optimization in high-dimensional spaces. In: Proc. of the 21st Int’l Conf. on Artificial Intelligence and Statistics. Playa Blanca: PMLR, 2018. 745–754.
    [91] Müller S, Von Rohr A, Trimpe S. Local policy search with Bayesian optimization. In: Proc. of the 35th Int’l Conf. on Neural Information Processing Systems. ACM, 2021. 1584.
    [92] Nguyen Q, Wu KW, Gardner JR, Garnett R. Local Bayesian optimization via maximizing probability of descent. In: Proc. of the 36th Int’l Conf. on Neural Information Processing Systems. New Orleans: ACM, 2022. 958.
    [93] Fröhlich LP, Zeilinger MN, Klenske ED. Cautious Bayesian optimization for efficient and scalable policy search. In: Proc. of the 3rd Annual Conf. on Learning for Dynamics and Control. PMLR, 2021. 227–240.
    [94] Maher S, Miltenberger M, Pedroso JP, Rehfeldt D, Schwarz R, Serrano F. PYSCIPOPT: Mathematical programming in Python with the SCIP optimization suite. In: Proc. of the 5th Int’l Conf. on Mathematical Software. Berlin: Springer, 2016. 301–307. [doi: 10.1007/978-3-319-42432-3_37]
    [95] Salem MB, Bachoc F, Roustant O, Gamboa F, Tomaso L. Gaussian process-based dimension reduction for goal-oriented sequential design. SIAM/ASA Journal on Uncertainty Quantification, 2019, 7(4): 1369–1397.
    [96] Spagnol A, Riche RL, Veiga SD. Global sensitivity analysis for optimization with variable selection. SIAM/ASA Journal on Uncertainty Quantification, 2019, 7(2): 417–443.
    [97] Sehic K, Gramfort A, Salmon J, Nardi L. LassoBench: A high-dimensional hyperparameter optimization benchmark suite for Lasso. In: Proc. of the 2022 Int’l Conf. on Automated Machine Learning. Baltimore: PMLR, 2022. 2/1–24.
    [98] Ulmasov D, Baroukh C, Chachuat B, Deisenroth MP, Misener R. Bayesian optimization with dimension scheduling: Application to biological systems. Computer Aided Chemical Engineering, 2016, 38: 1051–1056.
    [99] Tu S, Recht B. Least-squares temporal difference learning for the linear quadratic regulator. In: Proc. of the 35th Int’l Conf. on Machine Learning. Stockholm: PMLR, 2018. 5012–5021.
    [100] Mania H, Guy A, Recht B. Simple random search of static linear policies is competitive for reinforcement learning. In: Proc. of the 32nd Int’l Conf. on Neural Information Processing Systems. Montréal: ACM, 2018. 1805–1814.
    [101] Marco A, Hennig P, Bohg J, Schaal S, Trimpe S. Automatic LQR tuning based on Gaussian process global optimization. In: Proc. of the 2016 IEEE Int’l Conf. on Robotics and Automation. Stockholm: IEEE, 2016. 270–277. [doi: 10.1109/ICRA.2016.7487144]
    [102] Balandat M, Karrer B, Jiang DR, Daulton S, Letham B, Wilson AG, Bakshy E. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. In: Proc. of the 34th Int’l Conf. on Neural Information Processing Systems. Vancouver: ACM, 2020. 1807.
    [103] Cowen-Rivers AI, Lyu WL, Tutunov R, Wang Z, Grosnit A, Griffiths RR, Maraval AM, Hao JY, Wang J, Peters J, Bou-Ammar H. HEBO: Pushing the limits of sample-efficient hyper-parameter optimisation. Journal of Artificial Intelligence Research, 2022, 74: 1269–1349.
    [104] Grosnit A, Cowen-Rivers AI, Tutunov R, Griffiths RR, Wang J, Bou-Ammar H. Are we forgetting about compositional optimisers in Bayesian optimisation? The Journal of Machine Learning Research, 2021, 22(1): 160.
    [105] Daulton S, Eriksson D, Balandat M, Bakshy E. Multi-objective Bayesian optimization over high-dimensional search spaces. In: Proc. of the 38th Conf. on Uncertainty in Artificial Intelligence. Eindhoven: PMLR, 2022. 507–517.
    [106] Eriksson D, Chuang PIJ, Daulton S, Xia P, Shrivastava A, Babu A, Zhao SC, Aly A, Venkatesh G, Balandat M. Latency-aware neural architecture search with multi-objective Bayesian optimization. arXiv:2106.11890, 2021.
    [107] Zhao YY, Wang LN, Yang K, Zhang TJ, Guo T, Tian YD. Multi-objective optimization by learning space partitions. arXiv:2110.03173, 2021.
引用本文

陈泉霖,陈奕宇,霍静,曹宏业,高阳,李栋,郝建业.高维贝叶斯优化研究综述.软件学报,,():1-28

历史
  • 收稿日期:2023-08-09
  • 最后修改日期:2024-04-08
  • 在线发布日期: 2025-03-05