
Optimal experimental design: Formulations and computations

Published online by Cambridge University Press: 04 September 2024

Xun Huan
Affiliation:
University of Michigan, 1231 Beal Ave, Ann Arbor, MI 48109, USA Email: xhuan@umich.edu
Jayanth Jagalur
Affiliation:
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550, USA Email: jagalur1@llnl.gov
Youssef Marzouk
Affiliation:
Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA Email: ymarz@mit.edu

Abstract

Questions of ‘how best to acquire data’ are essential to modelling and prediction in the natural and social sciences, engineering applications, and beyond. Optimal experimental design (OED) formalizes these questions and creates computational methods to answer them. This article presents a systematic survey of modern OED, from its foundations in classical design theory to current research involving OED for complex models. We begin by reviewing criteria used to formulate an OED problem and thus to encode the goal of performing an experiment. We emphasize the flexibility of the Bayesian and decision-theoretic approach, which encompasses information-based criteria that are well-suited to nonlinear and non-Gaussian statistical models. We then discuss methods for estimating or bounding the values of these design criteria; this endeavour can be quite challenging due to strong nonlinearities, high parameter dimension, large per-sample costs, or settings where the model is implicit. A complementary set of computational issues involves optimization methods used to find a design; we discuss such methods in the discrete (combinatorial) setting of observation selection and in settings where an exact design can be continuously parametrized. Finally we present emerging methods for sequential OED that build non-myopic design policies, rather than explicit designs; these methods naturally adapt to the outcomes of past experiments in proposing new experiments, while seeking coordination among all experiments to be performed. Throughout, we highlight important open questions and challenges.
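To make the information-based criteria discussed above concrete, the sketch below (not from the article) implements the standard nested Monte Carlo estimator of expected information gain for a hypothetical linear-Gaussian toy model $y = d\theta + \varepsilon$ with $\theta \sim N(0,1)$ and $\varepsilon \sim N(0,\sigma^2)$, for which the closed-form value $\tfrac{1}{2}\log(1 + d^2/\sigma^2)$ is available as a check. The function name `nmc_eig`, the sample sizes and the model are illustrative assumptions.

```python
import numpy as np

def nmc_eig(d, sigma=1.0, N=2000, M=2000, rng=None):
    """Nested Monte Carlo estimate of expected information gain for the
    toy model y = d*theta + eps, theta ~ N(0,1), eps ~ N(0, sigma^2).
    Illustrative only: names and defaults are assumptions, not the article's."""
    rng = np.random.default_rng(rng)
    const = np.log(sigma * np.sqrt(2.0 * np.pi))
    # Outer loop: draw (theta, y) pairs from the joint prior-predictive.
    theta = rng.standard_normal(N)
    y = d * theta + sigma * rng.standard_normal(N)
    # Log-likelihood at the parameter that generated each datum.
    log_lik = -0.5 * ((y - d * theta) / sigma) ** 2 - const
    # Inner loop: approximate the marginal likelihood p(y | d) with
    # fresh prior samples, via a log-sum-exp over an (N, M) matrix.
    theta_in = rng.standard_normal(M)
    ll_in = -0.5 * ((y[:, None] - d * theta_in[None, :]) / sigma) ** 2 - const
    log_marg = np.logaddexp.reduce(ll_in, axis=1) - np.log(M)
    # EIG estimate: average log-likelihood-ratio log p(y|theta,d) / p(y|d).
    return float(np.mean(log_lik - log_marg))

# Closed-form reference for this toy model: 0.5 * log(1 + d**2 / sigma**2).
```

The double loop is what makes the estimator expensive (cost scales as N*M), and its inner average is biased upward by O(1/M); this is exactly the kind of computational difficulty that the variational bounds and multilevel methods surveyed in the article aim to mitigate.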

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

References

Agarwal, P. K., Har-Peled, S. and Varadarajan, K. R. (2005), Geometric approximation via coresets, Combin. Comput. Geom. 52, 130.Google Scholar
Aggarwal, R., Demkowicz, M. J. and Marzouk, Y. M. (2016), Information-driven experimental design in materials science, in Information Science for Materials Discovery and Design (Lookman, T., Alexander, F. and Rajan, K., eds), Springer, pp. 1344.10.1007/978-3-319-23871-5_2CrossRefGoogle Scholar
Alexanderian, A. (2021), Optimal experimental design for infinite-dimensional Bayesian inverse problems governed by PDEs: A review, Inverse Problems 37, art. 043001.10.1088/1361-6420/abe10cCrossRefGoogle Scholar
Alexanderian, A. and Saibaba, A. K. (2018), Efficient D-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems, SIAM J. Sci. Comput. 40, A2956A2985.10.1137/17M115712XCrossRefGoogle Scholar
Alexanderian, A., Gloor, P. J. and Ghattas, O. (2016a), On Bayesian A- and D-optimal experimental designs in infinite dimensions, Bayesian Anal. 11, 671695.10.1214/15-BA969CrossRefGoogle Scholar
Alexanderian, A., Nicholson, R. and Petra, N. (2022), Optimal design of large-scale nonlinear Bayesian inverse problems under model uncertainty. Available at arXiv:2211.03952.Google Scholar
Alexanderian, A., Petra, N., Stadler, G. and Ghattas, O. (2014), A-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems with regularized ${\mathrm{\ell}}_0$ -sparsification, SIAM J. Sci. Comput. 36, A2122A2148.10.1137/130933381CrossRefGoogle Scholar
Alexanderian, A., Petra, N., Stadler, G. and Ghattas, O. (2016b), A fast and scalable method for A-optimal design of experiments for infinite-dimensional Bayesian nonlinear inverse problems, SIAM J. Sci. Comput. 38, A243A272.10.1137/140992564CrossRefGoogle Scholar
Alexanderian, A., Petra, N., Stadler, G. and Sunseri, I. (2021), Optimal design of large-scale Bayesian linear inverse problems under reducible model uncertainty: Good to know what you don’t know, SIAM/ASA J. Uncertain. Quantif. 9, 163184.10.1137/20M1347292CrossRefGoogle Scholar
Allen-Zhu, Z., Li, Y., Singh, A. and Wang, Y. (2017), Near-optimal design of experiments via regret minimization, in Proceedings of the 34th International Conference on Machine Learning (ICML 2017) , Vol. 70 of Proceedings of Machine Learning Research, PMLR, pp. 126135.Google Scholar
Allen-Zhu, Z., Liao, Z. and Orecchia, L. (2015), Spectral sparsification and regret minimization beyond matrix multiplicative updates, in Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC 2015) , ACM, pp. 237245.Google Scholar
Ao, Z. and Li, J. (2020), An approximate KLD based experimental design for models with intractable likelihoods, in Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics , Vol. 108 of Proceedings of Machine Learning Research, PMLR, pp. 32413251.Google Scholar
Ao, Z. and Li, J. (2024), On estimating the gradient of the expected information gain in Bayesian experimental design, in Proceedings of the 38th AAAI Conference on Artificial Intelligence (Wooldridge, M., Dy, J. and Natarajan, S., eds), AAAI Press, pp. 2031120319.Google Scholar
Artzner, P., Delbaen, F., Eber, J. and Heath, D. (1999), Coherent measures of risk, Math. Finance 9, 203228.10.1111/1467-9965.00068CrossRefGoogle Scholar
Asmussen, S. and Glynn, P. W. (2007), Stochastic Simulation: Algorithms and Analysis , Springer.10.1007/978-0-387-69033-9CrossRefGoogle Scholar
Atkinson, A. C., Donev, A. N. and Tobias, R. D. (2007), Optimum Experimental Designs, with SAS , Oxford University Press.10.1093/oso/9780199296590.001.0001CrossRefGoogle Scholar
Attia, A., Alexanderian, A. and Saibaba, A. K. (2018), Goal-oriented optimal design of experiments for large-scale Bayesian linear inverse problems, Inverse Problems 34, art. 095009.10.1088/1361-6420/aad210CrossRefGoogle Scholar
Attia, A., Leyffer, S. and Munson, T. (2023), Robust A-optimal experimental design for Bayesian inverse problems. Available at arXiv:2305.03855.Google Scholar
Atwood, C. L. (1969), Optimal and efficient designs of experiments, Ann. Math. Statist. 40, 15701602.10.1214/aoms/1177697374CrossRefGoogle Scholar
Audet, C. (2004), Convergence results for generalized pattern search algorithms are tight, Optim. Engrg 5, 101122.10.1023/B:OPTE.0000033370.66768.a9CrossRefGoogle Scholar
Audet, C. and Dennis, J. E. (2002), Analysis of generalized pattern searches, SIAM J. Optim. 13, 889903.10.1137/S1052623400378742CrossRefGoogle Scholar
Bach, F. (2013), Learning with submodular functions: A convex optimization perspective, Found. Trends Mach. Learn. 6, 145373.10.1561/2200000039CrossRefGoogle Scholar
Bach, F. R. and Jordan, M. I. (2002), Kernel independent component analysis, J. Mach. Learn. Res. 3, 148.Google Scholar
Baptista, R., Cao, L., Chen, J., Ghattas, O., Li, F., Marzouk, Y. M. and Oden, J. T. (2024), Bayesian model calibration for block copolymer self-assembly: Likelihood-free inference and expected information gain computation via measure transport, J. Comput. Phys. 503, art. 112844.10.1016/j.jcp.2024.112844CrossRefGoogle Scholar
Baptista, R., Hosseini, B., Kovachki, N. B. and Marzouk, Y. (2023a), Conditional sampling with monotone GANs: From generative models to likelihood-free inference. Available at arXiv:2006.06755.10.1137/23M1581546CrossRefGoogle Scholar
Baptista, R., Marzouk, Y. and Zahm, O. (2022), Gradient-based data and parameter dimension reduction for Bayesian models: An information theoretic perspective. Available at arXiv:2207.08670.Google Scholar
Baptista, R., Marzouk, Y. and Zahm, O. (2023b), On the representation and learning of monotone triangular transport maps, Found. Comput. Mat h. Available at doi:10.1007/s10208-023-09630-x.CrossRefGoogle Scholar
Barber, D. and Agakov, F. (2003), The IM algorithm: A variational approach to information maximization, in Advances in Neural Information Processing Systems 16 , MIT Press, pp. 201208.Google Scholar
Batson, J. D., Spielman, D. A. and Srivastava, N. (2009), Twice-Ramanujan sparsifiers, in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC 2009) , ACM, pp. 255262.Google Scholar
Beck, J. and Guillas, S. (2016), Sequential design with mutual information for computer experiments (MICE): Emulation of a tsunami model, SIAM/ASA J. Uncertain. Quantif. 4, 739766.10.1137/140989613CrossRefGoogle Scholar
Beck, J., Dia, B. M., Espath, L. and Tempone, R. (2020), Multilevel double loop Monte Carlo and stochastic collocation methods with importance sampling for Bayesian optimal experimental design, Int. J. Numer. Methods Engrg 121, 34823503.10.1002/nme.6367CrossRefGoogle Scholar
Beck, J., Dia, B. M., Espath, L. F., Long, Q. and Tempone, R. (2018), Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain, Comput. Methods Appl. Mech. Engrg 334, 523553.10.1016/j.cma.2018.01.053CrossRefGoogle Scholar
Belghazi, M. I., Baratin, A., Rajeswar, S., Ozair, S., Bengio, Y., Courville, A. and Hjelm, R. D. (2018), Mutual information neural estimation, in Proceedings of the 35th International Conference on Machine Learning (ICML 2018) , Vol. 80 of Proceedings of Machine Learning Research, PMLR, pp. 531540.Google Scholar
Ben-Tal, A. and Nemirovski, A. (2001), Lectures on Modern Convex Optimization , SIAM.10.1137/1.9780898718829CrossRefGoogle Scholar
Benner, P., Gugercin, S. and Willcox, K. (2015), A survey of projection-based model reduction methods for parametric dynamical systems, SIAM Rev. 57, 483531.10.1137/130932715CrossRefGoogle Scholar
Berger, J. O. (1985), Statistical Decision Theory and Bayesian Analysis , Springer Series in Statistics, Springer.10.1007/978-1-4757-4286-2CrossRefGoogle Scholar
Berger, J. O. (1994), An overview of robust Bayesian analysis (with discussion), Test 3, 5124.10.1007/BF02562676CrossRefGoogle Scholar
Berger, M. P. F. and Wong, W. K. (2009), An Introduction to Optimal Designs for Social and Biomedical Research , Wiley.10.1002/9780470746912CrossRefGoogle Scholar
Bernardo, J. M. (1979), Expected information as expected utility, Ann . Statist. 7, 686690.10.1214/aos/1176344689CrossRefGoogle Scholar
Bernardo, J. M. and Smith, A. F. M. (2000), Bayesian Theory , Wiley.Google Scholar
Berry, S. M., Carlin, B. P., Lee, J. J. and Müller, P. (2010), Bayesian Adaptive Methods for Clinical Trials , Chapman & Hall/CRC.10.1201/EBK1439825488CrossRefGoogle Scholar
Bertsekas, D. P. (2005), Dynamic Programming and Optimal Control , Vol. 1, Athena Scientific.Google Scholar
Bhatnagar, S., Prasad, H. L. and Prashanth, L. A. (2013), Stochastic Recursive Algorithms for Optimization , Springer.10.1007/978-1-4471-4285-0CrossRefGoogle Scholar
Bian, A. A., Buhmann, J. M., Krause, A. and Tschiatschek, S. (2017), Guarantees for greedy maximization of non-submodular functions with applications, in Proceedings of the 34th International Conference on Machine Learning (ICML 2017) (Precup, D. and Teh, Y. W., eds), Vol. 70 of Proceedings of Machine Learning Research, PMLR, pp. 498507.Google Scholar
Blackwell, D. (1951), Comparison of experiments, in Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability , University of California Press, pp. 93102.10.1525/9780520411586-009CrossRefGoogle Scholar
Blackwell, D. (1953), Equivalent comparisons of experiments, Ann . Math. Statist. 24, 265272.10.1214/aoms/1177729032CrossRefGoogle Scholar
Blanchard, A. and Sapsis, T. (2021), Output-weighted optimal sampling for Bayesian experimental design and uncertainty quantification, SIAM/ASA J. Uncertain. Quantif. 9, 564592.10.1137/20M1347486CrossRefGoogle Scholar
Blau, T., Bonilla, E. V., Chades, I. and Dezfouli, A. (2022), Optimizing sequential experimental design with deep reinforcement learning, in Proceedings of the 39th International Conference on Machine Learning (ICML 2022) (Chaudhuri, K. et al., eds), Vol. 162 of Proceedings of Machine Learning Research, PMLR, pp. 21072128.Google Scholar
Blum, J. R. (1954), Multidimensional stochastic approximation methods, Ann. Math. Statist. 25, 737744.10.1214/aoms/1177728659CrossRefGoogle Scholar
Bochkina, N. (2019), Bernstein–von Mises theorem and misspecified models: A review, in Foundations of Modern Statistics (Belomestny, D. et al., eds), Vol. 425 of Springer Proceedings in Mathematics & Statistics, Springer, pp. 355380.10.1007/978-3-031-30114-8_10CrossRefGoogle Scholar
Bogachev, V. I., Kolesnikov, A. V. and Medvedev, K. V. (2005), Triangular transformations of measures, Sbornik Math. 196, art. 309.10.1070/SM2005v196n03ABEH000882CrossRefGoogle Scholar
Bogunovic, I., Zhao, J. and Cevher, V. (2018), Robust maximization of non-submodular objectives, in Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (Storkey, A. and Perez-Cruz, F., eds), Vol. 84 of Proceedings of Machine Learning Research, PMLR, pp. 890899.Google Scholar
Borges, C. and Biros, G. (2018), Reconstruction of a compactly supported sound profile in the presence of a random background medium, Inverse Problems 34, art. 115007.10.1088/1361-6420/aadbc5CrossRefGoogle Scholar
Bose, R. C. (1939), On the construction of balanced incomplete block designs, Ann. Eugen. 9, 353399.10.1111/j.1469-1809.1939.tb02219.xCrossRefGoogle Scholar
Bose, R. C. and Nair, K. R. (1939), Partially balanced incomplete block designs, Sankhyā 4, 337372.Google Scholar
Box, G. E. P. (1992), Sequential experimentation and sequential assembly of designs, Qual. Engrg 5, 321330.10.1080/08982119208918971CrossRefGoogle Scholar
Boyd, S. P. and Vandenberghe, L. (2004), Convex Optimization , Cambridge University Press.10.1017/CBO9780511804441CrossRefGoogle Scholar
Brockwell, A. E. and Kadane, J. B. (2003), A gridding method for Bayesian sequential decision problems, J. Comput. Graph. Statist. 12, 566584.10.1198/1061860032274CrossRefGoogle Scholar
Buchbinder, N., Feldman, M., Naor, J. and Schwartz, R. (2014), Submodular maximization with cardinality constraints, in Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms , SIAM, pp. 14331452.Google Scholar
Bui-Thanh, T., Ghattas, O., Martin, J. and Stadler, G. (2013), A computational framework for infinite-dimensional Bayesian inverse problems I: The linearized case, with application to global seismic inversion, SIAM J. Sci. Comput. 35, A2494A2523.10.1137/12089586XCrossRefGoogle Scholar
Caflisch, R. E. (1998), Monte Carlo and quasi-Monte Carlo methods, Acta Numer. 7, 149.10.1017/S0962492900002804CrossRefGoogle Scholar
Calinescu, G., Chekuri, C., Pál, M. and Vondrák, J. (2011), Maximizing a monotone submodular function subject to a matroid constraint, SIAM J. Comput. 40, 17401766.10.1137/080733991CrossRefGoogle Scholar
Campbell, T. and Beronov, B. (2019), Sparse variational inference: Bayesian coresets from scratch, in Advances in Neural Information Processing Systems 32 (Wallach, H. et al., eds), Curran Associates, pp. 1146111472.Google Scholar
Campbell, T. and Broderick, T. (2018), Bayesian coreset construction via greedy iterative geodesic ascent, in Proceedings of the 35th International Conference on Machine Learning (ICML 2018) , Vol. 80 of Proceedings of Machine Learning Research, PMLR, pp. 698706.Google Scholar
Campbell, T. and Broderick, T. (2019), Automated scalable Bayesian inference via Hilbert coresets, J. Mach. Learn. Res. 20, 551588.Google Scholar
Carlier, G., Chernozhukov, V. and Galichon, A. (2016), Vector quantile regression: An optimal transport approach, Ann. Statist. 44, 11651192.10.1214/15-AOS1401CrossRefGoogle Scholar
Carlin, B. P., Kadane, J. B. and Gelfand, A. E. (1998), Approaches for optimal sequential decision analysis in clinical trials, Biometrics 54, 964975.10.2307/2533849CrossRefGoogle ScholarPubMed
Carlon, A. G., Dia, B. M., Espath, L., Lopez, R. H. and Tempone, R. (2020), Nesterov-aided stochastic gradient methods using Laplace approximation for Bayesian design optimization, Comput. Methods Appl. Mech. Engrg 363, art. 112909.10.1016/j.cma.2020.112909CrossRefGoogle Scholar
Carmona, C. U. and Nicholls, G. K. (2020), Semi-modular inference: Enhanced learning in multi-modular models by tempering the influence of components, in Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics , Vol. 108 of Proceedings of Machine Learning Research, PMLR, pp. 42264235.Google Scholar
Caselton, W. F. and Zidek, J. V. (1984), Optimal monitoring network designs, Statist. Probab. Lett. 2, 223227.10.1016/0167-7152(84)90020-8CrossRefGoogle Scholar
Cavagnaro, D. R., Myung, J. I., Pitt, M. A. and Kujala, J. V. (2010), Adaptive design optimization: A mutual information-based approach to model discrimination in cognitive science, Neural Comput. 22, 887905.10.1162/neco.2009.02-09-959CrossRefGoogle ScholarPubMed
Chaloner, K. (1984), Optimal Bayesian experimental design for linear models, Ann. Statist. 12, 283300.10.1214/aos/1176346407CrossRefGoogle Scholar
Chaloner, K. and Larntz, K. (1989), Optimal Bayesian design applied to logistic regression experiments, J. Statist. Plann. Infer. 21, 191208.10.1016/0378-3758(89)90004-9CrossRefGoogle Scholar
Chaloner, K. and Verdinelli, I. (1995), Bayesian experimental design: A review, Statist. Sci. 10, 273304.10.1214/ss/1177009939CrossRefGoogle Scholar
Chang, K.-H. (2012), Stochastic Nelder–Mead simplex method: A new globally convergent direct search method for simulation optimization, European J. Oper. Res. 220, 684694.10.1016/j.ejor.2012.02.028CrossRefGoogle Scholar
Chekuri, C., Vondrák, J. and Zenklusen, R. (2014), Submodular function maximization via the multilinear relaxation and contention resolution schemes, SIAM J. Comput. 43, 18311879.10.1137/110839655CrossRefGoogle Scholar
Chen, X., Wang, C., Zhou, Z. and Ross, K. (2021), Randomized ensembled double Q-learning: Learning fast without a model, in 9th International Conference on Learning Representations (ICLR 2021). Available at https://openreview.net/forum?id=AY8zfZm0tDd.Google Scholar
Chevalier, C. and Ginsbourger, D. (2013), Fast computation of the multi-points expected improvement with applications in batch selection, in Learning and Intelligent Optimization , Vol. 7997 of Lecture Notes in Computer Science, Springer, pp. 5969.10.1007/978-3-642-44973-4_7CrossRefGoogle Scholar
Chowdhary, A., Tong, S., Stadler, G. and Alexanderian, A. (2023), Sensitivity analysis of the information gain in infinite-dimensional Bayesian linear inverse problems. Available at arXiv:2310.16906.Google Scholar
Christen, J. A. and Nakamura, M. (2003), Sequential stopping rules for species accumulation, J. Agric. Biol. Environ. Statist. 8, 184195.10.1198/1085711031553CrossRefGoogle Scholar
Clyde, M. A. (2001), Experimental design: Bayesian designs, in International Encyclopedia of the Social & Behavioral Sciences (Smelser, N. J. and Baltes, P. B., eds), Science Direct, pp. 50755081.10.1016/B0-08-043076-7/00421-6CrossRefGoogle Scholar
Cohn, D. A., Ghahramani, Z. and Jordan, M. I. (1996), Active learning with statistical models, J. Artificial Intelligence Res. 4, 129145.10.1613/jair.295CrossRefGoogle Scholar
Conforti, M. and Cornuéjols, G. (1984), Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the Rado–Edmonds theorem, Discrete Appl. Math. 7, 251274.10.1016/0166-218X(84)90003-9CrossRefGoogle Scholar
Conn, A. R., Scheinberg, K. and Vicente, L. N. (2009), Introduction to Derivative-Free Optimization , SIAM.10.1137/1.9780898718768CrossRefGoogle Scholar
Cook, R. D. and Nachtsheim, C. J. (1980), A comparison of algorithms for constructing exact D-optimal designs, Technometrics 22, 315324.10.1080/00401706.1980.10486162CrossRefGoogle Scholar
Cotter, S. L., Roberts, G. O., Stuart, A. M. and White, D. (2013), MCMC methods for functions: Modifying old algorithms to make them faster, Statist. Sci. 28, 424446.10.1214/13-STS421CrossRefGoogle Scholar
Cover, T. A. and Thomas, J. A. (2006), Elements of Information Theory , second edition, Wiley.Google Scholar
Cox, R. T. (1946), Probability, frequency and reasonable expectation, Amer. J. Phys. 14, 113.10.1119/1.1990764CrossRefGoogle Scholar
Craig, C. C. and Fisher, R. A. (1936), The design of experiments, Amer. Math. Monthly 43, 180.10.2307/2300364CrossRefGoogle Scholar
Cui, T. and Tong, X. T. (2022), A unified performance analysis of likelihood-informed subspace methods, Bernoulli 28, 27882815.10.3150/21-BEJ1437CrossRefGoogle Scholar
Cui, T., Dolgov, S. and Zahm, O. (2023), Scalable conditional deep inverse Rosenblatt transports using tensor trains and gradient-based dimension reduction, J. Comput. Phys. 485, art. 112103.10.1016/j.jcp.2023.112103CrossRefGoogle Scholar
Cui, T., Law, K. J. H. and Marzouk, Y. M. (2016), Dimension-independent likelihood-informed MCMC, J. Comput. Phys. 304, 109137.10.1016/j.jcp.2015.10.008CrossRefGoogle Scholar
Cui, T., Martin, J., Marzouk, Y. M., Solonen, A. and Spantini, A. (2014), Likelihood-informed dimension reduction for nonlinear inverse problems, Inverse Problems 30, art. 114015.10.1088/0266-5611/30/11/114015CrossRefGoogle Scholar
Czyż, P., Grabowski, F., Vogt, J., Beerenwinkel, N. and Marx, A. (2023), Beyond normal: On the evaluation of mutual information estimators, in Advances in Neural Information Processing Systems 36 (Oh, A. et al., eds), Curran Associates, pp. 1695716990.Google Scholar
Das, A. and Kempe, D. (2011), Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection, in Proceedings of the 28th International Conference on Machine Learning (ICML 2011) (Getoor, L. and Scheffer, T., eds), ACM, pp. 10571064.Google Scholar
DasGupta, A. (1995), Review of optimal Bayes designs. Technical report, Purdue University, West Lafayette, IN.Google Scholar
Dasgupta, S. (2011), Two faces of active learning, Theoret. Comput. Sci. 412, 17671781.10.1016/j.tcs.2010.12.054CrossRefGoogle Scholar
Dashti, M. and Stuart, A. M. (2017), The Bayesian approach to inverse problems, in Handbook of Uncertainty Quantification (Ghanem, R., Higdon, D. and Owhadi, H., eds), Springer, pp. 311428.10.1007/978-3-319-12385-1_7CrossRefGoogle Scholar
Dashti, M., Law, K. J., Stuart, A. M. and Voss, J. (2013), MAP estimators and their consistency in Bayesian nonparametric inverse problems, Inverse Problems 29, art. 095017.10.1088/0266-5611/29/9/095017CrossRefGoogle Scholar
Dewaskar, M., Tosh, C., Knoblauch, J. and Dunson, D. B. (2023), Robustifying likelihoods by optimistically re-weighting data. Available at arXiv:2303.10525.Google Scholar
Dick, J., Kuo, F. Y. and Sloan, I. H. (2013), High-dimensional integration: The quasi-Monte Carlo way, Acta Numer. 22, 133288.10.1017/S0962492913000044CrossRefGoogle Scholar
Dong, J., Jacobsen, C., Khalloufi, M., Akram, M., Liu, W., Duraisamy, K. and Huan, X. (2024), Variational Bayesian optimal experimental design with normalizing flows. Available at arXiv:2404.13056.Google Scholar
Donsker, M. D. and Varadhan, S. R. S. (1983), Asymptotic evaluation of certain Markov process expectations for large time IV, Commun. Pure Appl. Math. 36, 183212.10.1002/cpa.3160360204CrossRefGoogle Scholar
Dror, H. A. and Steinberg, D. M. (2008), Sequential experimental designs for generalized linear models, J. Amer. Statist. Assoc. 103, 288298.10.1198/016214507000001346CrossRefGoogle Scholar
Drovandi, C. C., McGree, J. M. and Pettitt, A. N. (2013), Sequential Monte Carlo for Bayesian sequentially designed experiments for discrete data, Comput. Statist. Data Anal. 57, 320335.10.1016/j.csda.2012.05.014CrossRefGoogle Scholar
Drovandi, C. C., McGree, J. M. and Pettitt, A. N. (2014), A sequential Monte Carlo algorithm to incorporate model uncertainty in Bayesian sequential design, J. Comput. Graph. Statist. 23, 324.10.1080/10618600.2012.730083CrossRefGoogle Scholar
Duncan, T. E. (1970), On the calculation of mutual information, SIAM J. Appl. Math. 19, 215220.10.1137/0119020CrossRefGoogle Scholar
Duong, D.-L., Helin, T. and Rojo-Garcia, J. R. (2023), Stability estimates for the expected utility in Bayesian optimal experimental design, Inverse Problems 39, art. 125008.10.1088/1361-6420/ad04ecCrossRefGoogle Scholar
El Moselhy, T. A. and Marzouk, Y. M. (2012), Bayesian inference with optimal maps, J. Comput. Phys. 231, 78157850.10.1016/j.jcp.2012.07.022CrossRefGoogle Scholar
Elfving, G. (1952), Optimum allocation in linear regression theory, Ann. Math. Statist. 23, 255262.10.1214/aoms/1177729442CrossRefGoogle Scholar
Englezou, Y., Waite, T. W. and Woods, D. C. (2022), Approximate Laplace importance sampling for the estimation of expected Shannon information gain in high-dimensional Bayesian design for nonlinear models, Statist. Comput. 32, art. 82.10.1007/s11222-022-10159-2CrossRefGoogle Scholar
Eskenazis, A. and Shenfeld, Y. (2024), Intrinsic dimensional functional inequalities on model spaces, J. Funct. Anal. 286, art. 110338.10.1016/j.jfa.2024.110338CrossRefGoogle Scholar
Fan, K. (1967), Subadditive functions on a distributive lattice and an extension of Szász’s inequality, J. Math. Anal. Appl. 18, 262268.10.1016/0022-247X(67)90056-XCrossRefGoogle Scholar
Fan, K. (1968), An inequality for subadditive functions on a distributive lattice, with application to determinantal inequalities, Linear Algebra Appl. 1, 3338.10.1016/0024-3795(68)90045-1CrossRefGoogle Scholar
Fedorov, V. V. (1972), Theory of Optimal Experiments , Academic Press.Google Scholar
Fedorov, V. V. (1996), Design of spatial experiments: Model fitting and prediction. Technical report, Oak Ridge National Laboratory, Oak Ridge, TN.Google Scholar
Fedorov, V. V. and Flanagan, D. (1997), Optimal monitoring network design based on Mercer’s expansion of covariance kernel, J. Combin. Inform. System Sci. 23, 237250.Google Scholar
Fedorov, V. V. and Hackl, P. (1997), Model-Oriented Design of Experiments , Vol. 125 of Lecture Notes in Statistics, Springer.10.1007/978-1-4612-0703-0CrossRefGoogle Scholar
Fedorov, V. V. and Müller, W. G. (2007), Optimum design for correlated fields via covariance kernel expansions, in mODa 8: Advances in Model-Oriented Design and Analysis (López-Fidalgo, J., Rodríguez-Díaz, J. M. and Torsney, B., eds), Contributions to Statistics, Physica, Springer, pp. 5766.10.1007/978-3-7908-1952-6_8CrossRefGoogle Scholar
Feldman, D. and Langberg, M. (2011), A unified framework for approximating and clustering data, in Proceedings of the 43rd Annual ACM Symposium on Theory of Computing (STOC 2011) , ACM, pp. 569578.Google Scholar
Feng, C. and Marzouk, Y. M. (2019), A layered multiple importance sampling scheme for focused optimal Bayesian experimental design. Available at arXiv:1903.11187.Google Scholar
Fisher, M. L., Nemhauser, G. L. and Wolsey, L. A. (1978), An analysis of approximations for maximizing submodular set functions II, Math. Program. 8, 7387.10.1007/BFb0121195CrossRefGoogle Scholar
Fisher, R. A. (1936), Design of experiments, Brit. Med. J. 1(3923), 554.10.1136/bmj.1.3923.554-aCrossRefGoogle Scholar
Ford, I., Titterington, D. M. and Kitsos, C. P. (1989), Recent advances in nonlinear experimental design, Technometrics 31, 4960.10.1080/00401706.1989.10488475CrossRefGoogle Scholar
Foster, A., Ivanova, D. R., Malik, I. and Rainforth, T. (2021), Deep adaptive design: Amortizing sequential Bayesian experimental design, in Proceedings of the 38th International Conference on Machine Learning (ICML 2021) (Meila, M. and Zhang, T., eds), Vol. 139 of Proceedings of Machine Learning Research, PMLR, pp. 33843395.Google Scholar
Foster, A., Jankowiak, M., Bingham, E., Horsfall, P., Teh, Y. W., Rainforth, T. and Goodman, N. (2019), Variational Bayesian optimal experimental design, in Advances in Neural Information Processing Systems 32 (Wallach, H. et al., eds), Curran Associates, pp. 1403614047.Google Scholar
Foster, A., Jankowiak, M., O’Meara, M., Teh, Y. W. and Rainforth, T. (2020), A unified stochastic gradient approach to designing Bayesian-optimal experiments, in Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics , Vol. 108 of Proceedings of Machine Learning Research, PMLR, pp. 29592969.Google Scholar
Frazier, P. I. (2018), Bayesian optimization, INFORMS TutORials in Operations Research 2018, 255278.Google Scholar
Freund, Y., Seung, H. S., Shamir, E. and Tishby, N. (1997), Selective sampling using the query by committee algorithm, Mach. Learn. 28, 133168.10.1023/A:1007330508534CrossRefGoogle Scholar
Fujishige, S. (2005), Submodular Functions and Optimization , Vol. 58 of Annals of Discrete Mathematics, second edition, Elsevier.Google Scholar
Gantmacher, F. R. and Kreĭn, M. G. (1960), Oszillationsmatrizen, Oszillationskerne und kleine Schwingungen mechanischer Systeme , Vol. 5, Akademie.10.1515/9783112708156CrossRefGoogle Scholar
Gao, W., Oh, S. and Viswanath, P. (2018), Demystifying fixed k-nearest neighbor information estimators, IEEE Trans. Inform. Theory 64, 56295661.10.1109/TIT.2018.2807481CrossRefGoogle Scholar
Gautier, R. and Pronzato, L. (2000), Adaptive control for sequential design, Discuss. Math. Probab. Statist. 20, 97–113.
Ghattas, O. and Willcox, K. (2021), Learning physics-based models from data: Perspectives from inverse problems and model reduction, Acta Numer. 30, 445–554.
Giles, M. B. (2015), Multilevel Monte Carlo methods, Acta Numer. 24, 259–328.
Giné, E. and Nickl, R. (2021), Mathematical Foundations of Infinite-Dimensional Statistical Models, Cambridge University Press.
Ginebra, J. (2007), On the measure of the information in a statistical experiment, Bayesian Anal. 2, 167–212.
Giraldi, L., Le Maître, O. P., Hoteit, I. and Knio, O. M. (2018), Optimal projection of observations in a Bayesian setting, Comput. Statist. Data Anal. 124, 252–276.
Gneiting, T. and Raftery, A. E. (2007), Strictly proper scoring rules, prediction, and estimation, J. Amer. Statist. Assoc. 102, 359–378.
Go, J. and Isaac, T. (2022), Robust expected information gain for optimal Bayesian experimental design using ambiguity sets, in Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, Vol. 180 of Proceedings of Machine Learning Research, PMLR, pp. 718–727.
Goda, T., Hironaka, T., Kitade, W. and Foster, A. (2022), Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs, SIAM J. Sci. Comput. 44, A286–A311.
Goda, T., Hironaka, T. and Iwamoto, T. (2020), Multilevel Monte Carlo estimation of expected information gains, Stoch. Anal. Appl. 38, 581–600.
Gorodetsky, A. and Marzouk, Y. (2016), Mercer kernels and integrated variance experimental design: Connections between Gaussian process regression and polynomial approximation, SIAM/ASA J. Uncertain. Quantif. 4, 796–828.
Gramacy, R. B. (2020), Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences, CRC Press.
Gramacy, R. B. (2022), plgp: Particle learning of Gaussian processes. Available at https://cran.r-project.org/package=plgp.
Gramacy, R. B. and Apley, D. W. (2015), Local Gaussian process approximation for large computer experiments, J. Comput. Graph. Statist. 24, 561–578.
Grünwald, P. and van Ommen, T. (2017), Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it, Bayesian Anal. 12, 1069–1103.
Gürkan, G., Özge, A. Y. and Robinson, S. M. (1994), Sample-path optimization in simulation, in Proceedings of the 1994 Winter Simulation Conference (WSC ’94) (Sadowski, D. A. et al., eds), ACM, pp. 247–254.
Haber, E., Horesh, L. and Tenorio, L. (2008), Numerical methods for experimental design of large-scale linear ill-posed inverse problems, Inverse Problems 24, art. 055012.
Haber, E., Horesh, L. and Tenorio, L. (2009), Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems, Inverse Problems 26, art. 025002.
Hainy, M., Drovandi, C. C. and McGree, J. M. (2016), Likelihood-free extensions for Bayesian sequentially designed experiments, in mODa 11: Advances in Model-Oriented Design and Analysis (Kunert, J., Müller, C. and Atkinson, A., eds), Contributions to Statistics, Springer, pp. 153–161.
Hainy, M., Price, D. J., Restif, O. and Drovandi, C. (2022), Optimal Bayesian design for model discrimination via classification, Statist. Comput. 32, art. 25.
Hairer, M., Stuart, A. M. and Voss, J. (2011), Signal processing problems on function space: Bayesian formulation, stochastic PDEs and effective MCMC methods, in The Oxford Handbook of Nonlinear Filtering (Crisan, D. and Rozovskii, B., eds), Oxford University Press, pp. 833–873.
Harari, O. and Steinberg, D. M. (2014), Optimal designs for Gaussian process models via spectral decomposition, J. Statist. Plann. Infer. 154, 87–101.
He, X. D., Kou, S. and Peng, X. (2022), Risk measures: Robustness, elicitability, and backtesting, Annu. Rev. Statist. Appl. 9, 141–166.
Healy, K. and Schruben, L. W. (1991), Retrospective simulation response optimization, in Proceedings of the 1991 Winter Simulation Conference (WSC ’91) (Nelson, B. L. et al., eds), IEEE Computer Society, pp. 901–906.
Hedayat, A. (1981), Study of optimality criteria in design of experiments, in Statistics and Related Topics: International Symposium Proceedings (Csörgő, M., ed.), Elsevier Science, pp. 39–56.
Helin, T. and Kretschmann, R. (2022), Non-asymptotic error estimates for the Laplace approximation in Bayesian inverse problems, Numer. Math. 150, 521–549.
Helin, T., Hyvönen, N. and Puska, J.-P. (2022), Edge-promoting adaptive Bayesian experimental design for X-ray imaging, SIAM J. Sci. Comput. 44, B506–B530.
Herrmann, L., Schwab, C. and Zech, J. (2020), Deep neural network expression of posterior expectations in Bayesian PDE inversion, Inverse Problems 36, art. 125011.
Hoang, T. N., Low, B. K. H., Jaillet, P. and Kankanhalli, M. (2014), Nonmyopic ε-Bayes-optimal active learning of Gaussian processes, in Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Vol. 32 of Proceedings of Machine Learning Research, PMLR, pp. 739–747.
Hochba, D. S. (1997), Approximation algorithms for NP-hard problems, ACM SIGACT News 28, 40–52.
Huan, X. (2015), Numerical approaches for sequential Bayesian optimal experimental design. PhD thesis, Massachusetts Institute of Technology.
Huan, X. and Marzouk, Y. M. (2013), Simulation-based optimal Bayesian experimental design for nonlinear systems, J. Comput. Phys. 232, 288–317.
Huan, X. and Marzouk, Y. M. (2014), Gradient-based stochastic optimization methods in Bayesian experimental design, Int. J. Uncertain. Quantif. 4, 479–510.
Huan, X. and Marzouk, Y. M. (2016), Sequential Bayesian optimal experimental design via approximate dynamic programming. Available at arXiv:1604.08320.
Huang, C.-W., Chen, R. T. Q., Tsirigotis, C. and Courville, A. (2020), Convex potential flows: Universal probability distributions with optimal transport and convex optimization. Available at arXiv:2012.05942.
Huggins, J., Campbell, T. and Broderick, T. (2016), Coresets for scalable Bayesian logistic regression, in Advances in Neural Information Processing Systems 29 (Lee, D. et al., eds), Curran Associates, pp. 4080–4088.
Huggins, J. H. and Miller, J. W. (2023), Reproducible model selection using bagged posteriors, Bayesian Anal. 18, 79–104.
Ivanova, D. R., Foster, A., Kleinegesse, S., Gutmann, M. U. and Rainforth, T. (2021), Implicit deep adaptive design: Policy-based experimental design without likelihoods, in Advances in Neural Information Processing Systems 34 (Ranzato, M. et al., eds), Curran Associates, pp. 25785–25798.
Jacob, P. E., Murray, L. M., Holmes, C. C. and Robert, C. P. (2017), Better together? Statistical learning in models made of modules. Available at arXiv:1708.08719.
Jagalur-Mohan, J. and Marzouk, Y. (2021), Batch greedy maximization of non-submodular functions: Guarantees and applications to experimental design, J. Mach. Learn. Res. 22, 11397–11458.
Jaynes, E. T. and Bretthorst, G. L. (2003), Probability Theory: The Logic of Science, Cambridge University Press.
Johnson, C. R. and Barrett, W. W. (1985), Spanning-tree extensions of the Hadamard–Fischer inequalities, Linear Algebra Appl. 66, 177–193.
Johnson, M. E. and Nachtsheim, C. J. (1983), Some guidelines for constructing exact D-optimal designs on convex design spaces, Technometrics 25, 271–277.
Johnson, M. E., Moore, L. M. and Ylvisaker, D. (1990), Minimax and maximin distance designs, J. Statist. Plann. Infer. 26, 131–148.
Jones, D. R., Schonlau, M. and Welch, W. J. (1998), Efficient global optimization of expensive black-box functions, J. Global Optim. 13, 455–492.
Joseph, V. R., Gul, E. and Ba, S. (2015), Maximum projection designs for computer experiments, Biometrika 102, 371–380.
Joseph, V. R., Gul, E. and Ba, S. (2020), Designing computer experiments with multiple types of factors: The MaxPro approach, J. Qual. Technol. 52, 343–354.
Jourdan, A. and Franco, J. (2010), Optimal Latin hypercube designs for the Kullback–Leibler criterion, AStA Adv. Statist. Anal. 94, 341–351.
Kaelbling, L. P., Littman, M. L. and Cassandra, A. R. (1998), Planning and acting in partially observable stochastic domains, Artif. Intell. 101, 99–134.
Kaelbling, L. P., Littman, M. L. and Moore, A. W. (1996), Reinforcement learning: A survey, J. Artif. Intell. Res. 4, 237–285.
Kaipio, J. and Kolehmainen, V. (2013), Approximate marginalization over modelling errors and uncertainties in inverse problems, in Bayesian Theory and Applications (Damien, P. et al., eds), Oxford University Press, pp. 644–672.
Kaipio, J. and Somersalo, E. (2006), Statistical and Computational Inverse Problems, Vol. 160 of Applied Mathematical Sciences, Springer.
Kaipio, J. and Somersalo, E. (2007), Statistical inverse problems: Discretization, model reduction and inverse crimes, J. Comput. Appl. Math. 198, 493–504.
Karaca, O. and Kamgarpour, M. (2018), Exploiting weak supermodularity for coalition-proof mechanisms, in 2018 IEEE Conference on Decision and Control (CDC), IEEE, pp. 1118–1123.
Karhunen, K. (1947), Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Sci. Fennicae, Ser. A, I 37, 3–79.
Kelmans, A. K. and Kimelfeld, B. N. (1983), Multiplicative submodularity of a matrix’s principal minor as a function of the set of its rows and some combinatorial applications, Discrete Math. 44, 113–116.
Kennamer, N., Walton, S. and Ihler, A. (2023), Design amortization for Bayesian optimal experimental design, in Proceedings of the 37th AAAI Conference on Artificial Intelligence (Williams, B., Chen, Y. and Neville, J., eds), AAAI Press, pp. 8220–8227.
Kennedy, M. C. and O’Hagan, A. (2001), Bayesian calibration of computer models, J. R. Statist. Soc. Ser. B. Statist. Methodol. 63, 425–464.
Kiefer, J. (1958), On the nonrandomized optimality and randomized nonoptimality of symmetrical designs, Ann. Math. Statist. 29, 675–699.
Kiefer, J. (1959), Optimum experimental designs, J. R. Statist. Soc. Ser. B. Statist. Methodol. 21, 272–304.
Kiefer, J. (1961a), Optimum designs in regression problems II, Ann. Math. Statist. 32, 298–325.
Kiefer, J. (1961b), Optimum experimental designs V, with applications to systematic and rotatable designs, in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, pp. 381–405.
Kiefer, J. (1974), General equivalence theory for optimum designs (approximate theory), Ann. Statist. 2, 849–879.
Kiefer, J. and Wolfowitz, J. (1952), Stochastic estimation of the maximum of a regression function, Ann. Math. Statist. 23, 462–466.
Kiefer, J. and Wolfowitz, J. (1959), Optimum designs in regression problems, Ann. Math. Statist. 30, 271–294.
Kiefer, J. and Wolfowitz, J. (1960), The equivalence of two extremum problems, Canad. J. Math. 12, 363–366.
Kim, W., Pitt, M. A., Lu, Z.-L., Steyvers, M. and Myung, J. I. (2014), A hierarchical adaptive approach to optimal experimental design, Neural Comput. 26, 2465–2492.
King, J. and Wong, W.-K. (2000), Minimax D-optimal designs for the logistic model, Biometrics 56, 1263–1267.
Kleijn, B. J. K. and van der Vaart, A. W. (2012), The Bernstein–von-Mises theorem under misspecification, Electron. J. Statist. 6, 354–381.
Kleinegesse, S. and Gutmann, M. U. (2020), Bayesian experimental design for implicit models by mutual information neural estimation, in Proceedings of the 37th International Conference on Machine Learning (ICML 2020) (Daumé, H. and Singh, A., eds), Vol. 119 of Proceedings of Machine Learning Research, PMLR, pp. 5316–5326.
Kleinegesse, S. and Gutmann, M. U. (2021), Gradient-based Bayesian experimental design for implicit models using mutual information lower bounds. Available at arXiv:2105.04379.
Kleinegesse, S., Drovandi, C. and Gutmann, M. U. (2021), Sequential Bayesian experimental design for implicit models via mutual information, Bayesian Anal. 16, 773–802.
Kleywegt, A. J., Shapiro, A. and Homem-de-Mello, T. (2002), The sample average approximation method for stochastic discrete optimization, SIAM J. Optim. 12, 479–502.
Knapik, B. T., van der Vaart, A. W. and van Zanten, J. H. (2011), Bayesian inverse problems with Gaussian priors, Ann. Statist. 39, 2626–2657.
Knothe, H. (1957), Contributions to the theory of convex bodies, Michigan Math. J. 4, 39–52.
Ko, C.-W., Lee, J. and Queyranne, M. (1995), An exact algorithm for maximum entropy sampling, Oper. Res. 43, 684–691.
Kobyzev, I., Prince, S. J. and Brubaker, M. A. (2020), Normalizing flows: An introduction and review of current methods, IEEE Trans. Pattern Anal. Mach. Intell. 43, 3964–3979.
Konda, V. R. and Tsitsiklis, J. N. (1999), Actor–critic algorithms, in Advances in Neural Information Processing Systems 12 (Solla, S. et al., eds), MIT Press, pp. 1008–1014.
Körkel, S., Bauer, I., Bock, H. G. and Schlöder, J. P. (1999), A sequential approach for nonlinear optimum experimental design in DAE systems, in Scientific Computing in Chemical Engineering II (Keil, F. et al., eds), Springer, pp. 338–345.
Kotelyanskiĭ, D. M. (1950), On the theory of nonnegative and oscillating matrices, Ukrains’kyi Matematychnyi Zhurnal 2, 94–101.
Kouri, D. P., Jakeman, J. D. and Huerta, J. G. (2022), Risk-adapted optimal experimental design, SIAM/ASA J. Uncertain. Quantif. 10, 687–716.
Koval, K., Alexanderian, A. and Stadler, G. (2020), Optimal experimental design under irreducible uncertainty for linear inverse problems governed by PDEs, Inverse Problems 36, art. 075007.
Koval, K., Herzog, R. and Scheichl, R. (2024), Tractable optimal experimental design using transport maps. Available at arXiv:2401.07971.
Kozachenko, L. F. and Leonenko, N. N. (1987), A statistical estimate for the entropy of a random vector, Probl. Inf. Transm. 23, 9–16.
Kraskov, A., Stögbauer, H. and Grassberger, P. (2004), Estimating mutual information, Phys. Rev. E 69, art. 066138.
Krause, A. and Golovin, D. (2014), Submodular function maximization, in Tractability, Practical Approaches to Hard Problems (Bordeaux, L., Hamadi, Y. and Kohli, P., eds), Cambridge University Press, pp. 71–104.
Krause, A., Singh, A. and Guestrin, C. (2008), Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies, J. Mach. Learn. Res. 9, 235–284.
Kuhn, D., Esfahani, P. M., Nguyen, V. A. and Shafieezadeh-Abadeh, S. (2019), Wasserstein distributionally robust optimization: Theory and applications in machine learning, in Operations Research & Management Science in the Age of Analytics, INFORMS, pp. 130–166.
Kushner, H. J. and Yin, G. G. (2003), Stochastic Approximation and Recursive Algorithms and Applications, second edition, Springer.
Lam, R. and Willcox, K. (2017), Lookahead Bayesian optimization with inequality constraints, in Advances in Neural Information Processing Systems 30 (Guyon, I. et al., eds), Curran Associates, pp. 1890–1900.
Larson, J., Menickelly, M. and Wild, S. M. (2019), Derivative-free optimization methods, Acta Numer. 28, 287–404.
Lau, L. C. and Zhou, H. (2020), A spectral approach to network design, in Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC 2020), ACM, pp. 826–839.
Lau, L. C. and Zhou, H. (2022), A local search framework for experimental design, SIAM J. Comput. 51, 900–951.
Le Cam, L. (1964), Sufficiency and approximate sufficiency, Ann. Math. Statist. 35, 1419–1455.
Lehmann, E. L. and Casella, G. (1998), Theory of Point Estimation, Springer Texts in Statistics, Springer.
Leskovec, J., Krause, A., Guestrin, C., Faloutsos, C., VanBriesen, J. and Glance, N. (2007), Cost-effective outbreak detection in networks, in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, pp. 420–429.
Letizia, N. A., Novello, N. and Tonello, A. M. (2023), Variational f-divergence and derangements for discriminative mutual information estimation. Available at arXiv:2305.20025.
Lewis, D. D. (1995), A sequential algorithm for training text classifiers: Corrigendum and additional data, SIGIR Forum 29, 13–19.
Li, F., Baptista, R. and Marzouk, Y. (2024a), Expected information gain estimation via density approximations: Sample allocation and dimension reduction. Forthcoming.
Li, M. T. C., Marzouk, Y. and Zahm, O. (2024b), Principal feature detection via ϕ-Sobolev inequalities. To appear in Bernoulli. Available at https://bernoullisociety.org/publications/bernoulli-journal/bernoulli-journal-papers.
Liepe, J., Filippi, S., Komorowski, M. and Stumpf, M. P. H. (2013), Maximizing the information content of experiments in systems biology, PLoS Comput. Biol. 9, e1002888.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D. and Wierstra, D. (2016), Continuous control with deep reinforcement learning, in Proceedings of the 4th International Conference on Learning Representations (ICLR 2016) (Bengio, Y. and LeCun, Y., eds).
Lindley, D. V. (1956), On a measure of the information provided by an experiment, Ann. Math. Statist. 27, 986–1005.
Loève, M. (1948), Fonctions aléatoires du second ordre, in Processus Stochastique et Mouvement Brownien (Lévy, P., ed.), Gauthier-Villars.
Long, Q., Scavino, M., Tempone, R. and Wang, S. (2013), Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations, Comput. Methods Appl. Mech. Engrg 259, 24–39.
Loredo, T. J. (2011), Rotating stars and revolving planets: Bayesian exploration of the pulsating sky, in Bayesian Statistics 9: Proceedings of the Ninth Valencia International Meeting (Bernardo, J. M., Bayarri, M. J. and Berger, J. O., eds), Oxford University Press, pp. 361–392.
Lovász, L. (1983), Submodular functions and convexity, in Mathematical Programming The State of the Art: Bonn 1982 (Bachem, A., Korte, B. and Grötschel, M., eds), Springer, pp. 235–257.
Lovász, L. (2007), Combinatorial Problems and Exercises, second edition, American Mathematical Society.
MacKay, D. J. C. (1992), Information-based objective functions for active data selection, Neural Comput. 4, 590–604.
Madan, V., Nikolov, A., Singh, M. and Tantipongpipat, U. (2020), Maximizing determinants under matroid constraints, in 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), IEEE, pp. 565–576.
Madan, V., Singh, M., Tantipongpipat, U. and Xie, W. (2019), Combinatorial algorithms for optimal design, in Proceedings of the 32nd Conference on Learning Theory, Vol. 99 of Proceedings of Machine Learning Research, PMLR, pp. 2210–2258.
Mak, W.-K., Morton, D. P. and Wood, R. K. (1999), Monte Carlo bounding techniques for determining solution quality in stochastic programs, Oper. Res. Lett. 24, 47–56.
Manole, T., Balakrishnan, S., Niles-Weed, J. and Wasserman, L. (2021), Plugin estimation of smooth optimal transport maps. Available at arXiv:2107.12364.
Markowitz, H. (1952), Portfolio selection, J. Finance 7, 77–91.
Marzouk, Y. and Xiu, D. (2009), A stochastic collocation approach to Bayesian inference in inverse problems, Commun. Comput. Phys. 6, 826–847.
Marzouk, Y., Moselhy, T., Parno, M. and Spantini, A. (2016), Sampling via measure transport: An introduction, in Handbook of Uncertainty Quantification (Ghanem, R., Higdon, D. and Owhadi, H., eds), Springer, pp. 1–41.
McAllester, D. and Stratos, K. (2020), Formal limitations on the measurement of mutual information, in Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, Vol. 108 of Proceedings of Machine Learning Research, PMLR, pp. 875–884.
McGree, J., Drovandi, C. and Pettitt, A. (2012), A sequential Monte Carlo approach to the sequential design for discriminating between rival continuous data models. Technical report, Queensland University of Technology, Brisbane, Australia.
McKay, M. D., Beckman, R. J. and Conover, W. J. (1979), A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics 21, 239–245.
Mertikopoulos, P., Hallak, N., Kavis, A. and Cevher, V. (2020), On the almost sure convergence of stochastic gradient descent in non-convex problems, in Advances in Neural Information Processing Systems 33 (Larochelle, H. et al., eds), Curran Associates, pp. 1117–1128.
Meyer, R. K. and Nachtsheim, C. J. (1995), The coordinate-exchange algorithm for constructing exact optimal experimental designs, Technometrics 37, 60–69.
Miller, J. W. and Dunson, D. B. (2019), Robust Bayesian inference via coarsening, J. Amer. Statist. Assoc. 114, 1113–1125.
Mirzasoleiman, B., Badanidiyuru, A., Karbasi, A., Vondrák, J. and Krause, A. (2015), Lazier than lazy greedy, in Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015), pp. 1812–1818.
Mirzasoleiman, B., Karbasi, A., Sarkar, R. and Krause, A. (2013), Distributed submodular maximization: Identifying representative elements in massive data, in Advances in Neural Information Processing Systems 26 (Burges, C. J. et al., eds), Curran Associates, pp. 2049–2057.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S. and Hassabis, D. (2015), Human-level control through deep reinforcement learning, Nature 518, 529–533.
Močkus, J. (1975), On Bayesian methods for seeking the extremum, in Optimization Techniques IFIP Technical Conference (Marchuk, G. I., ed.), Springer, pp. 400–404.
Mohamed, S., Rosca, M., Figurnov, M. and Mnih, A. (2020), Monte Carlo gradient estimation in machine learning, J. Mach. Learn. Res. 21, 5183–5244.
Morrison, R. E., Oliver, T. A. and Moser, R. D. (2018), Representing model inadequacy: A stochastic operator approach, SIAM/ASA J. Uncertain. Quantif. 6, 457–496.
Müller, P., Berry, D. A., Grieve, A. P., Smith, M. and Krams, M. (2007), Simulation-based sequential Bayesian design, J. Statist. Plann. Infer. 137, 3140–3150.
Müller, P., Duan, Y. and Garcia Tec, M. (2022), Simulation-based sequential design, Pharma. Statist. 21, 729–739.
Murphy, S. A. (2003), Optimal dynamic treatment regimes, J. R. Statist. Soc. Ser. B. Statist. Methodol. 65, 331–366.
Myung, J. I. and Pitt, M. A. (2009), Optimal experimental design for model discrimination, Psychol. Rev. 116, 499–518.
Nelder, J. A. and Mead, R. (1965), A simplex method for function minimization, Comput. J. 7, 308–313.
Nemhauser, G. L. and Wolsey, L. A. (1978), Best algorithms for approximating the maximum of a submodular set function, Math. Oper. Res. 3, 177–188.
Nemhauser, G. L., Wolsey, L. A. and Fisher, M. L. (1978), An analysis of approximations for maximizing submodular set functions I, Math. Program. 14, 265–294.
Nesterov, Y. (1983), A method of solving a convex programming problem with convergence rate $\mathcal{O}\left(1/{k}^2\right)$, Soviet Math. Doklady 27, 372–376.
Ng, A. and Russell, S. (2000), Algorithms for inverse reinforcement learning, in Proceedings of the 17th International Conference on Machine Learning (ICML 2000), Morgan Kaufmann, pp. 663–670.
Nguyen, X., Wainwright, M. J. and Jordan, M. I. (2010), Estimating divergence functionals and the likelihood ratio by convex risk minimization, IEEE Trans. Inform. Theory 56, 5847–5861.
Niederreiter, H. (1992), Random Number Generation and Quasi-Monte Carlo Methods, SIAM.
Nikolov, A. and Singh, M. (2016), Maximizing determinants under partition constraints, in Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC 2016), ACM, pp. 192–201.
Nikolov, A., Singh, M. and Tantipongpipat, U. (2022), Proportional volume sampling and approximation algorithms for A-optimal design, Math. Oper. Res. 47, 847–877.
Nocedal, J. and Wright, S. J. (2006), Numerical Optimization, Springer.
Norkin, V., Pflug, G. and Ruszczynski, A. (1998), A branch and bound method for stochastic global optimization, Math. Program. 83, 425–450.
O’Hagan, A., Buck, C. E., Daneshkhah, A., Eiser, J. R., Garthwaite, P. H., Jenkinson, D. J., Oakley, J. E. and Rakow, T. (2006), Uncertain Judgements: Eliciting Experts’ Probabilities, Wiley.
Orozco, R., Herrmann, F. J. and Chen, P. (2024), Probabilistic Bayesian optimal experimental design using conditional normalizing flows. Available at arXiv:2402.18337.
Overstall, A. M. (2022), Properties of Fisher information gain for Bayesian design of experiments, J. Statist. Plann. Infer. 218, 138–146.
Overstall, A. M. and Woods, D. C. (2017), Bayesian design of experiments using approximate coordinate exchange, Technometrics 59, 458–470.
Overstall, A. M., McGree, J. M. and Drovandi, C. C. (2018), An approach for finding fully Bayesian optimal designs using normal-based approximations to loss functions, Statist. Comput. 28, 343–358.
Owen, A. B. (1992), Orthogonal arrays for computer experiments, integration and visualization, Statist. Sinica 2, 439–452.
Owen, A. B. (2013), Monte Carlo theory, methods and examples. Available at https://artowen.su.domains/mc/.
Papadimitriou, C. H. and Steiglitz, K. (1998), Combinatorial Optimization: Algorithms and Complexity, Courier Corporation.
Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S. and Lakshminarayanan, B. (2021), Normalizing flows for probabilistic modeling and inference, J. Mach. Learn. Res. 22, 2617–2680.
Pardo-Igúzquiza, E. (1998), Maximum likelihood estimation of spatial covariance parameters, Math. Geol. 30, 95–108.
Peters, J. and Schaal, S. (2008), Natural actor–critic, Neurocomput. 71, 1180–1190.
Pilz, J. (1991), Bayesian Estimation and Experimental Design in Linear Regression Models, Wiley.
Polyak, B. T. and Juditsky, A. B. (1992), Acceleration of stochastic approximation by averaging, SIAM J. Control Optim. 30, 838–855.
Pompe, E. and Jacob, P. E. (2021), Asymptotics of cut distributions and robust modular inference using posterior bootstrap. Available at arXiv:2110.11149.
Pooladian, A.-A. and Niles-Weed, J. (2021), Entropic estimation of optimal transport maps. Available at arXiv:2109.12004.
Poole, B., Ozair, S., Van Den Oord, A., Alemi, A. and Tucker, G. (2019), On variational bounds of mutual information, in Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Vol. 97 of Proceedings of Machine Learning Research, PMLR, pp. 5171–5180.
Powell, W. B. (2011), Approximate Dynamic Programming: Solving the Curses of Dimensionality, second edition, Wiley.
Prangle, D., Harbisher, S. and Gillespie, C. S. (2023), Bayesian experimental design without posterior calculations: An adversarial approach, Bayesian Anal. 18, 133–163.
Pronzato, L. and Müller, W. G. (2012), Design of computer experiments: Space filling and beyond, Statist. Comput. 22, 681–701.
Pronzato, L. and Thierry, É. (2002), Sequential experimental design and response optimisation, Statist. Methods Appl. 11, 277–292.
Pronzato, L. and Walter, E. (1985), Robust experiment design via stochastic approximation, Math. Biosci. 75, 103–120.
Pukelsheim, F. (2006), Optimal Design of Experiments, SIAM.
Rahimian, H. and Mehrotra, S. (2019), Distributionally robust optimization: A review. Available at arXiv:1908.05659.
Raiffa, H. and Schlaifer, R. (1961), Applied Statistical Decision Theory, Wiley.
Rainforth, T., Cornish, R., Yang, H., Warrington, A. and Wood, F. (2018), On nesting Monte Carlo estimators, in Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Vol. 80 of Proceedings of Machine Learning Research, PMLR, pp. 4267–4276.
Rainforth, T., Foster, A., Ivanova, D. R. and Smith, F. B. (2023), Modern Bayesian experimental design, Statist. Sci. 39, 100–114.
Rasmussen, C. E. and Williams, C. K. I. (2006), Gaussian Processes for Machine Learning, MIT Press.
Rhee, C. H. and Glynn, P. W. (2015), Unbiased estimation with square root convergence for SDE models, Oper. Res. 63, 1026–1043.
Riis, C., Antunes, F., Hüttel, F., Lima Azevedo, C. and Pereira, F. (2022), Bayesian active learning with fully Bayesian Gaussian processes, in Advances in Neural Information Processing Systems 35 (Koyejo, S. et al., eds), Curran Associates, pp. 12141–12153.
Riley, Z. B., Perez, R. A., Bartram, G. W., Spottswood, S. M., Smarslok, B. P. and Beberniss, T. J. (2019), Aerothermoelastic experimental design for the AEDC/VKF Tunnel C: Challenges associated with measuring the response of flexible panels in high-temperature, high-speed wind tunnels, J. Sound Vib. 441, 96105.10.1016/j.jsv.2018.10.022CrossRefGoogle Scholar
Robbins, H. and Monro, S. (1951), A stochastic approximation method, Ann. Math. Statist. 22, 400407.10.1214/aoms/1177729586CrossRefGoogle Scholar
Robertazzi, T. and Schwartz, S. (1989), An accelerated sequential algorithm for producing D-optimal designs, SIAM J. Sci. Statist. Comput. 10, 341358.10.1137/0910022CrossRefGoogle Scholar
Rockafellar, R. T. and Royset, J. O. (2015), Measures of residual risk with connections to regression, risk tracking, surrogate models, and ambiguity, SIAM J. Optim. 25, 11791208.10.1137/151003271CrossRefGoogle Scholar
Rockafellar, R. T. and Uryasev, S. (2002), Conditional value-at-risk for general loss distributions, J. Bank. Finance 26, 14431471.10.1016/S0378-4266(02)00271-6CrossRefGoogle Scholar
Rockafellar, R. T. and Uryasev, S. (2013), The fundamental risk quadrangle in risk management, optimization and statistical estimation, Surv. Oper. Res. Manag. Sci. 18, 3353.Google Scholar
Rosenblatt, M. (1952), Remarks on a multivariate transformation, Ann. Math. Statist. 23, 470472.10.1214/aoms/1177729394CrossRefGoogle Scholar
Royset, J. O. (2022), Risk-adaptive approaches to learning and decision making: A survey. Available at arXiv:2212.00856.Google Scholar
Rudolf, D. and Sprungk, B. (2018), On a generalization of the preconditioned Crank–Nicolson Metropolis algorithm, Found. Comput. Math. 18, 309343.10.1007/s10208-016-9340-xCrossRefGoogle Scholar
Ruppert, D. (1988), Efficient estimations from a slowly convergent Robbins–Monro process. Technical report, Cornell University. Available at http://ecommons.cornell.edu/bitstream/handle/1813/8664/TR000781.pdf?sequence=1http://ecommons.cornell.edu/bitstream/handle/1813/8664/TR000781.pdf?sequence=1.Google Scholar
Ruthotto, L., Chung, J. and Chung, M. (2018), Optimal experimental design for inverse problems with state constraints, SIAM J. Sci. Comput. 40, B1080–B1100.
Ryan, E. G., Drovandi, C. C., McGree, J. M. and Pettitt, A. N. (2016), A review of modern computational algorithms for Bayesian optimal design, Int. Statist. Rev. 84, 128–154.
Ryan, K. J. (2003), Estimating expected information gains for experimental designs with application to the random fatigue-limit model, J. Comput. Graph. Statist. 12, 585–603.
Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P. (1989), Design and analysis of computer experiments, Statist. Sci. 4, 118–128.
Santambrogio, F. (2015), Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling, Vol. 87 of Progress in Nonlinear Differential Equations and their Applications, Springer.
Santner, T. J., Williams, B. J. and Notz, W. I. (2018), The Design and Analysis of Computer Experiments, second edition, Springer.
Sargsyan, K., Huan, X. and Najm, H. N. (2019), Embedded model error representation for Bayesian model calibration, Int. J. Uncertain. Quantif. 9, 365–394.
Sargsyan, K., Najm, H. N. and Ghanem, R. G. (2015), On the statistical calibration of physical models, Int. J. Chem. Kinet. 47, 246–276.
Schein, A. I. and Ungar, L. H. (2007), Active learning for logistic regression: An evaluation, Mach. Learn. 68, 235–265.
Schillings, C. and Schwab, C. (2016), Scaling limits in computational Bayesian inversion, ESAIM Math. Model. Numer. Anal. 50, 1825–1856.
Schillings, C., Sprungk, B. and Wacker, P. (2020), On the convergence of the Laplace approximation and noise-level-robustness of Laplace-based Monte Carlo methods for Bayesian inverse problems, Numer. Math. 145, 915–971.
Schrijver, A. (2003), Combinatorial Optimization: Polyhedra and Efficiency, Springer.
Sebastiani, P. and Wynn, H. P. (2000), Maximum entropy sampling and optimal Bayesian experimental design, J. R. Statist. Soc. Ser. B. Statist. Methodol. 62, 145–157.
Seo, S., Wallat, M., Graepel, T. and Obermayer, K. (2000), Gaussian process regression: Active data selection and test point rejection, in Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), Springer, pp. 27–34.
Settles, B. (2009), Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison.
Shah, K. R. and Sinha, B. K. (1989), Theory of Optimal Designs, Vol. 54 of Lecture Notes in Statistics, Springer.
Shahriari, B., Swersky, K., Wang, Z., Adams, R. P. and de Freitas, N. (2016), Taking the human out of the loop: A review of Bayesian optimization, Proc. IEEE 104, 148–175.
Shapiro, A. (1991), Asymptotic analysis of stochastic programs, Ann. Oper. Res. 30, 169–186.
Shapiro, A., Dentcheva, D. and Ruszczynski, A. (2021), Lectures on Stochastic Programming: Modeling and Theory, third edition, SIAM.
Shashaani, S., Hashemi, F. S. and Pasupathy, R. (2018), ASTRO-DF: A class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization, SIAM J. Optim. 28, 3145–3176.
Shen, W. (2023), Reinforcement learning based sequential and robust Bayesian optimal experimental design. PhD thesis, University of Michigan.
Shen, W. and Huan, X. (2021), Bayesian sequential optimal experimental design for nonlinear models using policy gradient reinforcement learning. Available at arXiv:2110.15335.
Shen, W. and Huan, X. (2023), Bayesian sequential optimal experimental design for nonlinear models using policy gradient reinforcement learning, Comput. Methods Appl. Mech. Engrg 416, art. 116304.
Shen, W., Dong, J. and Huan, X. (2023), Variational sequential optimal experimental design using reinforcement learning. Available at arXiv:2306.10430.
Shewry, M. C. and Wynn, H. P. (1987), Maximum entropy sampling, J. Appl. Statist. 14, 165–170.
Siade, A. J., Hall, J. and Karelse, R. N. (2017), A practical, robust methodology for acquiring new observation data using computationally expensive groundwater models, Water Resour. Res. 53, 9860–9882.
Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D. and Riedmiller, M. (2014), Deterministic policy gradient algorithms, in Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Vol. 32 of Proceedings of Machine Learning Research, PMLR, pp. 387–395.
Singh, M. and Xie, W. (2020), Approximation algorithms for D-optimal design, Math. Oper. Res. 45, 1512–1534.
Solonen, A., Haario, H. and Laine, M. (2012), Simulation-based optimal design using a response variance criterion, J. Comput. Graph. Statist. 21, 234–252.
Song, J. and Ermon, S. (2020), Understanding the limitations of variational mutual information estimators, in Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Available at https://openreview.net/forum?id=B1x62TNtDS.
Song, J., Chen, Y. and Yue, Y. (2019), A general framework for multi-fidelity Bayesian optimization with Gaussian processes, in Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Vol. 89 of Proceedings of Machine Learning Research, PMLR, pp. 3158–3167.
Spall, J. C. (1998a), Implementation of the simultaneous perturbation algorithm for stochastic optimization, IEEE Trans. Aerosp. Electron. Systems 34, 817–823.
Spall, J. C. (1998b), An overview of the simultaneous perturbation method for efficient optimization, Johns Hopkins APL Tech. Digest 19, 482–492.
Spantini, A., Bigoni, D. and Marzouk, Y. (2018), Inference via low-dimensional couplings, J. Mach. Learn. Res. 19, 2639–2709.
Spantini, A., Cui, T., Willcox, K., Tenorio, L. and Marzouk, Y. (2017), Goal-oriented optimal approximations of Bayesian linear inverse problems, SIAM J. Sci. Comput. 39, S167–S196.
Spantini, A., Solonen, A., Cui, T., Martin, J., Tenorio, L. and Marzouk, Y. (2015), Optimal low-rank approximations of Bayesian linear inverse problems, SIAM J. Sci. Comput. 37, A2451–A2487.
Spielman, D. A. and Srivastava, N. (2008), Graph sparsification by effective resistances, in Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC 2008), ACM, pp. 563–568.
Spöck, G. (2012), Spatial sampling design based on spectral approximations to the random field, Environ. Model. Softw. 33, 48–60.
Spöck, G. and Pilz, J. (2010), Spatial sampling design and covariance-robust minimax prediction based on convex design ideas, Stoch. Env. Res. Risk Assessment 24, 463–482.
Spokoiny, V. (2023), Dimension free nonasymptotic bounds on the accuracy of high-dimensional Laplace approximation, SIAM/ASA J. Uncertain. Quantif. 11, 1044–1068.
Sprungk, B. (2020), On the local Lipschitz stability of Bayesian inverse problems, Inverse Problems 36, art. 055015.
Sriver, T. A., Chrissis, J. W. and Abramson, M. A. (2009), Pattern search ranking and selection algorithms for mixed variable simulation-based optimization, European J. Oper. Res. 198, 878–890.
Steinberg, D. M. and Hunter, W. G. (1984), Experimental design: Review and comment, Technometrics 26, 71–97.
Stone, M. (1959), Application of a measure of information to the design and comparison of regression experiments, Ann. Math. Statist. 30, 55–70.
Strutz, D. and Curtis, A. (2024), Variational Bayesian experimental design for geophysical applications: Seismic source location, amplitude versus offset inversion, and estimating ${\mathrm{CO}}_2$ saturations in a subsurface reservoir, Geophys. J. Int. 236, 1309–1331.
Stuart, A. and Teckentrup, A. (2018), Posterior consistency for Gaussian process approximations of Bayesian posterior distributions, Math. Comp. 87, 721–753.
Stuart, A. M. (2010), Inverse problems: A Bayesian perspective, Acta Numer. 19, 451–559.
Sun, N.-Z. and Yeh, W. W. (2007), Development of objective-oriented groundwater models, 2: Robust experimental design, Water Resour. Res. 43, 1–14.
Sutton, R. S. and Barto, A. G. (2018), Reinforcement Learning: An Introduction, second edition, MIT Press.
Sutton, R. S., McAllester, D., Singh, S. P. and Mansour, Y. (1999), Policy gradient methods for reinforcement learning with function approximation, in Advances in Neural Information Processing Systems 12 (Solla, S. et al., eds), MIT Press, pp. 1057–1063.
Sviridenko, M. (2004), A note on maximizing a submodular set function subject to a knapsack constraint, Oper. Res. Lett. 32, 41–43.
Sviridenko, M., Vondrák, J. and Ward, J. (2017), Optimal approximation for submodular and supermodular optimization with bounded curvature, Math. Oper. Res. 42, 1197–1218.
Tang, B. (1993), Orthogonal array-based Latin hypercubes, J. Amer. Statist. Assoc. 88, 1392–1397.
Tec, M., Duan, Y. and Müller, P. (2023), A comparative tutorial of Bayesian sequential design and reinforcement learning, Amer. Statist. 77, 223–233.
Terejanu, G., Upadhyay, R. R. and Miki, K. (2012), Bayesian experimental design for the active nitridation of graphite by atomic nitrogen, Exp. Therm. Fluid Sci. 36, 178–193.
Torczon, V. (1997), On the convergence of pattern search algorithms, SIAM J. Optim. 7, 1–25.
Tzoumas, V., Carlone, L., Pappas, G. J. and Jadbabaie, A. (2021), LQG control and sensing co-design, IEEE Trans. Automat. Control 66, 1468–1483.
van den Oord, A., Li, Y. and Vinyals, O. (2018), Representation learning with contrastive predictive coding. Available at arXiv:1807.03748.
Vazirani, V. V. (2001), Approximation Algorithms, Springer.
Villa, U., Petra, N. and Ghattas, O. (2021), hIPPYlib: An extensible software framework for large-scale inverse problems governed by PDEs I: Deterministic inversion and linearized Bayesian inference, ACM Trans. Math. Software 47, 1–34.
Villani, C. (2009), Optimal Transport: Old and New, Springer.
von Toussaint, U. (2011), Bayesian inference in physics, Rev. Modern Phys. 83, 943–999.
Vondrák, J. (2008), Optimal approximation for the submodular welfare problem in the value oracle model, in Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (STOC 2008), ACM, pp. 67–74.
Vondrák, J. (2010), Submodularity and curvature: The optimal algorithm, RIMS Kokyuroku Bessatsu B23, 253–266.
Vondrák, J., Chekuri, C. and Zenklusen, R. (2011), Submodular function maximization via the multilinear relaxation and contention resolution schemes, in Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing (STOC 2011), ACM, pp. 783–792.
Wald, A. (1943), On the efficient design of statistical investigations, Ann. Math. Statist. 14, 134–140.
Walker, S. G. (2016), Bayesian information in an experiment and the Fisher information distance, Statist. Probab. Lett. 112, 5–9.
Wang, J., Clark, S. C., Liu, E. and Frazier, P. I. (2020), Parallel Bayesian global optimization of expensive functions, Oper. Res. 68, 1850–1865.
Wang, S. and Marzouk, Y. (2022), On minimax density estimation via measure transport. Available at arXiv:2207.10231.
Wang, X., Jin, Y., Schmitt, S. and Olhofer, M. (2023), Recent advances in Bayesian optimization, ACM Comput. Surv. 55, 1–36.
Wathen, J. K. and Christen, J. A. (2006), Implementation of backward induction for sequentially adaptive clinical trials, J. Comput. Graph. Statist. 15, 398–413.
Weaver, B. P. and Meeker, W. Q. (2021), Bayesian methods for planning accelerated repeated measures degradation tests, Technometrics 63, 90–99.
Weaver, B. P., Williams, B. J., Anderson-Cook, C. M. and Higdon, D. M. (2016), Computational enhancements to Bayesian design of experiments using Gaussian processes, Bayesian Anal. 11, 191–213.
Whittle, P. (1973), Some general points in the theory of optimal experimental design, J. R. Statist. Soc. Ser. B. Statist. Methodol. 35, 123–130.
Wolsey, L. A. and Nemhauser, G. L. (1999), Integer and Combinatorial Optimization, Wiley.
Wu, J. and Frazier, P. (2019), Practical two-step lookahead Bayesian optimization, in Advances in Neural Information Processing Systems 32 (Wallach, H. et al., eds), Curran Associates, pp. 9813–9823.
Wu, K., Chen, P. and Ghattas, O. (2023a), An offline–online decomposition method for efficient linear Bayesian goal-oriented optimal experimental design: Application to optimal sensor placement, SIAM J. Sci. Comput. 45, B57–B77.
Wu, K., O’Leary-Roseberry, T., Chen, P. and Ghattas, O. (2023b), Large-scale Bayesian optimal experimental design with derivative-informed projected neural network, J. Sci. Comput. 95, art. 30.
Wynn, H. P. (1972), Results in the theory and construction of D-optimum experimental designs, J. R. Statist. Soc. Ser. B. Statist. Methodol. 34, 133–147.
Wynn, H. P. (1984), Jack Kiefer’s contributions to experimental design, Ann. Statist. 12, 416–423.
Xu, Z. and Liao, Q. (2020), Gaussian process based expected information gain computation for Bayesian optimal design, Entropy 22, art. 258.
Yates, F. (1933), The principles of orthogonality and confounding in replicated experiments, J. Agric. Sci. 23, 108–145.
Yates, F. (1937), The design and analysis of factorial experiments. Technical Communication no. 35, Imperial Bureau of Soil Science.
Yates, F. (1940), Lattice squares, J. Agric. Sci. 30, 672–687.
Zahm, O., Cui, T., Law, K., Spantini, A. and Marzouk, Y. (2022), Certified dimension reduction in nonlinear Bayesian inverse problems, Math. Comp. 91, 1789–1835.
Zhan, D. and Xing, H. (2020), Expected improvement for expensive optimization: A review, J. Global Optim. 78, 507–544.
Zhang, J., Bi, S. and Zhang, G. (2021), A scalable gradient-free method for Bayesian experimental design with implicit models, in Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, Vol. 130 of Proceedings of Machine Learning Research, PMLR, pp. 3745–3753.
Zhang, X., Blanchet, J., Marzouk, Y., Nguyen, V. A. and Wang, S. (2022), Distributionally robust Gaussian process regression and Bayesian inverse problems. Available at arXiv:2205.13111.
Zheng, S., Hayden, D., Pacheco, J. and Fisher, J. W. (2020), Sequential Bayesian experimental design with variable cost structure, in Advances in Neural Information Processing Systems 33 (Larochelle, H. et al., eds), Curran Associates, pp. 4127–4137.
Zhong, S., Shen, W., Catanach, T. and Huan, X. (2024), Goal-oriented Bayesian optimal experimental design for nonlinear models using Markov chain Monte Carlo. Available at arXiv:2403.18072.
Zhu, Z. and Stein, M. L. (2005), Spatial sampling design for parameter estimation of the covariance function, J. Statist. Plann. Infer. 134, 583–603.
Zhu, Z. and Stein, M. L. (2006), Spatial sampling design for prediction with estimated parameters, J. Agric. Biol. Environ. Statist. 11, 24–44.
Zong, C. (1999), Sphere Packings, Springer.