
References

Published online by Cambridge University Press:  13 June 2025

Anna Dawid (Uniwersytet Warszawski, Poland)
Julian Arnold (Universität Basel, Switzerland)
Borja Requena (ICFO - The Institute of Photonic Sciences)
Alexander Gresch (Heinrich-Heine-Universität Düsseldorf)
Marcin Płodzień (ICFO - The Institute of Photonic Sciences)
Kaelan Donatella (Université de Paris VII (Denis Diderot))
Kim A. Nicoli (University of Bonn)
Paolo Stornati (ICFO - The Institute of Photonic Sciences)
Rouven Koch (Aalto University, Finland)
Miriam Büttner (Albert-Ludwigs-Universität Freiburg, Germany)
Robert Okuła (Gdańsk University of Technology)
Gorka Muñoz-Gil (Universität Innsbruck, Austria)
Rodrigo A. Vargas-Hernández (McMaster University, Ontario)
Alba Cervera-Lierta (Centro Nacional de Supercomputación)
Juan Carrasquilla (Swiss Federal Institute of Technology in Zurich)
Vedran Dunjko (Universiteit Leiden)
Marylou Gabrié (Institut Polytechnique de Paris)
Evert van Nieuwenburg (Universiteit Leiden)
Filippo Vicentini (Institut Polytechnique de Paris)
Lei Wang (Chinese Academy of Sciences, Beijing)
Sebastian J. Wetzel (University of Waterloo, Ontario)
Giuseppe Carleo (École Polytechnique Fédérale de Lausanne)
Eliška Greplová (Technische Universiteit Delft, The Netherlands)
Roman Krems (University of British Columbia, Vancouver)
Florian Marquardt (Max-Planck-Institut für die Wissenschaft des Lichts)
Michał Tomza (Uniwersytet Warszawski)
Maciej Lewenstein (ICFO - Institute of Photonic Sciences)
Alexandre Dauphin (Instituto de Ciencias Fotónicas)
Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025


References

Blaiszik, B., blaiszik/ml_publication_charts: AI/ML Publication Statistics for 2022, doi:10.5281/zenodo.7713954 (2023).
Summer School: Machine Learning in Science and Technology, GitHub repository with selected tutorials from the school (2021), doi:10.5281/zenodo.13959917.
Dawid, A., Arnold, J., Requena, B., et al., GitHub repository with figures prepared for these Lecture Notes (2022), doi:10.5281/zenodo.13959927.
Dunjko, V. and Briegel, H. J., Machine learning & artificial intelligence in the quantum domain: A review of recent progress, Rep. Prog. Phys. 81(7), 074001 (2018), doi:10.1088/1361-6633/aab406.
Carleo, G., Cirac, I., Cranmer, K., et al., Machine learning and the physical sciences, Rev. Mod. Phys. 91, 045002 (2019), doi:10.1103/RevModPhys.91.045002.
Carrasquilla, J., Machine learning for quantum matter, Adv. Phys. X 5(1), 1797528 (2020), doi:10.1080/23746149.2020.1797528.
Williams, C., A brief introduction to artificial intelligence, In Proc. OCEANS ’83, pp. 94–99, doi:10.1109/OCEANS.1983.1152096 (1983).
Dearden, R. and Boutilier, C., Abstraction and approximate decision-theoretic planning, Artif. Intell. 89(1), 219 (1997), doi:10.1016/S0004-3702(96)00023-9.
Zucker, J.-D., A grounded theory of abstraction in artificial intelligence, Phil. Trans. R. Soc. Lond. B 358(1435), 1293 (2003), doi:10.1098/rstb.2003.1308.
Saitta, L. and Zucker, J.-D., Abstraction in Artificial Intelligence and Complex Systems, Springer, New York, NY, doi:10.1007/978-1-4614-7052-6 (2013).
Mitchell, M., Abstraction and analogy-making in artificial intelligence, Ann. N.Y. Acad. Sci. 1505(1), 79 (2021), doi:10.1111/nyas.14619.
Moravec, H., Mind Children: The Future of Robot and Human Intelligence, Harvard University Press, Cambridge, MA, p. 15, doi:10.2307/1575314 (1988).
Goodfellow, I., Bengio, Y. and Courville, A., Deep Learning, The MIT Press (2016).
Marcus, G., Deep learning is hitting a wall, Nautilus, Accessed: 03-11-2022 (2022).
Sejnowski, T. J., The Deep Learning Revolution: Machine Intelligence Meets Human Intelligence, The MIT Press, doi:10.7551/mitpress/11474.001.0001 (2018).
LeCun, Y., Bengio, Y. and Hinton, G., Deep learning, Nature 521(7553), 436 (2015), doi:10.1038/nature14539.
Schmidhuber, J., Deep learning in neural networks: An overview, Neural Netw. 61, 85 (2015), doi:10.1016/j.neunet.2014.09.003.
Hinton, G. E., Osindero, S. and Teh, Y.-W., A fast learning algorithm for deep belief nets, Neural Comput. 18(7), 1527 (2006), doi:10.1162/neco.2006.18.7.1527.
Volkov, V. and Demmel, J. W., Benchmarking GPUs to tune dense linear algebra, In SC ’08: Proc. 2008 ACM/IEEE Conf. Supercomput., pp. 1–11, doi:10.1109/SC.2008.5214359 (2008).
Raina, R., Madhavan, A. and Ng, A. Y., Large-scale deep unsupervised learning using graphics processors, In Proc. 26th Annu. Int. Conf. Mach. Learn., ICML ’09, pp. 873–880. Association for Computing Machinery, New York, NY, USA, doi:10.1145/1553374.1553486 (2009).
Marr, B., How much data do we create every day? The mind-blowing stats everyone should read, Forbes, Accessed: 05-21-2018 (2018).
SeedScientific, Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025, SeedScientific, Accessed: 01-28-2022 (2021).
Statista Research Department, Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025, Statista, Accessed: 03-18-2022 (2022).
Dawid, A. and LeCun, Y., Introduction to latent variable energy-based models: A path towards autonomous machine intelligence, J. Stat. Mech. 2024, 104011 (2024), doi:10.1088/1742-5468/ad292b.
Dixon, M. F., Halperin, I. and Bilokon, P., Machine Learning in Finance: From Theory to Practice (Vol. 1170), Springer International Publishing, New York, NY, doi:10.1007/978-3-030-41068-1 (2020).
Eisenstein, J., Introduction to Natural Language Processing, The MIT Press, Cambridge, MA (2019).
Polu, S., Han, J. M., Zheng, K., Baksys, M., Babuschkin, I. and Sutskever, I., Formal mathematics statement curriculum learning, In ICLR 2023 – Int. Conf. Learn. Represent. (2023), arXiv:2202.01344.
Mnih, V., Kavukcuoglu, K., Silver, D., et al., Human-level control through deep reinforcement learning, Nature 518(7540), 529 (2015), doi:10.1038/nature14236.
Vinyals, O., Babuschkin, I., Czarnecki, W. M., et al., Grandmaster level in StarCraft II using multi-agent reinforcement learning, Nature 575(7782), 350 (2019), doi:10.1038/s41586-019-1724-z.
Silver, D., Huang, A., Maddison, C. J., et al., Mastering the game of Go with deep neural networks and tree search, Nature 529(7587), 484 (2016), doi:10.1038/nature16961.
LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P., Gradient-based learning applied to document recognition, Proc. IEEE 86(11), 2278 (1998), doi:10.1109/5.726791.
Fisher, R. A., The use of multiple measurements in taxonomic problems, Ann. Eug. 7, 179 (1936), doi:10.1111/j.1469-1809.1936.tb02137.x.
Krizhevsky, A., Learning multiple layers of features from tiny images, Tech. rep., University of Toronto, CiteSeer 10.1.1.222.9220 (2009).
Russakovsky, O., Deng, J., Su, H., et al., ImageNet large scale visual recognition challenge, Int. J. Comput. Vis. 115(3), 211 (2015), doi:10.1007/s11263-015-0816-y.
Gissin, D., Active learning review, GitHub.io, Accessed: 04-08-2022 (2020).
Ren, P., Xiao, Y., Chang, X., et al., A survey of deep active learning, ACM Comput. Surv. 54(9) (2021), doi:10.1145/3472291.
van Engelen, J. E. and Hoos, H. H., A survey on semi-supervised learning, Mach. Learn. 109(2), 373 (2020), doi:10.1007/s10994-019-05855-6.
Krenn, M., Landgraf, J., Foesel, T. and Marquardt, F., Artificial intelligence and machine learning for quantum technologies, Phys. Rev. A 107, 010101 (2023), doi:10.1103/PhysRevA.107.010101.
Chollet, F., On the measure of intelligence (2019), arXiv:1911.01547.
Krenn, M., Pollice, R., Guo, S. Y., et al., On scientific understanding with artificial intelligence, Nat. Rev. Phys. 4, 761 (2022), doi:10.1038/s42254-022-00518-3.
Bagheri, R., Weight initialization in deep neural networks, Towards Data Science, Accessed: 02-16-2022 (2020).
Akiba, T., Sano, S., Yanase, T., Ohta, T. and Koyama, M., Optuna: A next-generation hyperparameter optimization framework, In Proc. 25th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., KDD ’19, pp. 2623–2631. Association for Computing Machinery, New York, NY, USA, doi:10.1145/3292500.3330701 (2019).
Blum, A. L. and Rivest, R. L., Training a 3-node neural network is NP-complete, Neural Netw. 5(1), 117 (1992), doi:10.1016/S0893-6080(05)80010-3.
Li, H., Xu, Z., Taylor, G., Studer, C. and Goldstein, T., Visualizing the loss landscape of neural nets, In NeurIPS 2018 – Adv. Neural Inf. Process. Syst. (2018), arXiv:1712.09913.
Bottou, L., Large-scale machine learning with stochastic gradient descent, In Lechevallier, Y. and Saporta, G., eds., Proc. COMPSTAT’2010, pp. 177–186. Physica-Verlag HD, Heidelberg, doi:10.1007/978-3-7908-2604-3_16 (2010).
Feng, Y. and Tu, Y., The inverse variance–flatness relation in stochastic gradient descent is critical for finding flat minima, Proc. Natl. Acad. Sci. U.S.A. 118(9) (2021), doi:10.1073/pnas.2015617118.
Lee, J. D., Simchowitz, M., Jordan, M. I. and Recht, B., Gradient descent only converges to minimizers, In Feldman, V., Rakhlin, A. and Shamir, O., eds., 29th Annu. Conf. Learn. Theory, vol. 49 of Proc. Mach. Learn. Res., pp. 1246–1257. PMLR, Columbia University, New York, New York, USA (2016), arXiv:1602.04915.
Choromanska, A., Henaff, M., Mathieu, M., Ben Arous, G. and LeCun, Y., The loss surfaces of multilayer networks, In AISTATS 2015 – Int. Conf. Artif. Intell. Stat., vol. 38, pp. 192–204. PMLR (2015), arXiv:1412.0233.
Dauphin, Y. N., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S. and Bengio, Y., Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, In NIPS 2014 – Adv. Neural Inf. Process. Syst. (2014), arXiv:1406.2572.
Sagun, L., Bottou, L. and LeCun, Y., Eigenvalues of the Hessian in deep learning: Singularity and beyond (2016), arXiv:1611.07476.
Alain, G., Le Roux, N. and Manzagol, P. A., Negative eigenvalues of the Hessian in deep neural networks, In ICLR 2018 – Int. Conf. Learn. Represent. (2018), arXiv:1902.02366.
Sutskever, I., Martens, J., Dahl, G. and Hinton, G., On the importance of initialization and momentum in deep learning, In ICML 2013 – 30th Int. Conf. Mach. Learn., vol. 28, pp. 1139–1147 (2013).
Liu, Y., Gao, Y. and Yin, W., An improved analysis of stochastic gradient descent with momentum, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2007.07989.
Duchi, J., Hazan, E. and Singer, Y., Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res. 12, 2121 (2011), doi:10.5555/1953048.2021068.
Kingma, D. P. and Ba, J., Adam: A method for stochastic optimization, In ICLR 2015 – Int. Conf. Learn. Represent. (2015), arXiv:1412.6980.
Zhang, Z., Improved Adam optimizer for deep neural networks, In 2018 IEEE/ACM 26th Int. Symp. Qual. Serv. IWQoS 2018, pp. 1–2, doi:10.1109/IWQoS.2018.8624183 (2018).
Zhu, C., Byrd, R. H., Lu, P. and Nocedal, J., Algorithm 778: L-BFGS-B, ACM Trans. Math. Softw. 23(4), 550 (1997), doi:10.1145/279232.279236.
Rios, L. M. and Sahinidis, N. V., Derivative-free optimization: A review of algorithms and comparison of software implementations, J. Glob. Optim. 56(3), 1247 (2012), doi:10.1007/s10898-012-9951-y.
Liu, W., Wang, X., Owens, J. and Li, Y., Energy-based out-of-distribution detection, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2010.03759.
Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O., Understanding deep learning requires rethinking generalization, In ICLR 2017 – Int. Conf. Learn. Represent. (2017), arXiv:1611.03530.
Wolpert, D. H., What is important about the No Free Lunch theorems?, pp. 373–388, Springer International Publishing, Cham, doi:10.1007/978-3-030-66515-9_13 (2021).
Domingos, P., A unified bias-variance decomposition for zero-one and squared loss, In NCAI 2000 – 17th Nat. Conf. Artif. Intell., pp. 564–569 (2000).
Belkin, M., Hsu, D., Ma, S. and Mandal, S., Reconciling modern machine-learning practice and the classical bias–variance trade-off, Proc. Natl. Acad. Sci. U.S.A. 116(32), 15849 (2019), doi:10.1073/pnas.1903070116.
Kawaguchi, K., Kaelbling, L. P. and Bengio, Y., Generalization in deep learning, In Mathematical Aspects of Deep Learning. Cambridge University Press, doi:10.1017/9781009025096.003 (2022).
Frankle, J. and Carbin, M., The lottery ticket hypothesis: Finding sparse, trainable neural networks, In ICLR 2019 – Int. Conf. Learn. Represent. (2019), arXiv:1803.03635.
Devroye, L., Györfi, L. and Lugosi, G., The Bayes Error, pp. 9–20, Springer, New York, NY, doi:10.1007/978-1-4612-0711-5_2 (1996).
Rao, C. R., Generalized Inverse of a Matrix and Its Applications, pp. 601–620, University of California Press, Berkeley, doi:10.1525/9780520325883-032 (1972).
Tibshirani, R., Regression shrinkage and selection via the lasso, J. R. Stat. Soc. Ser. B Methodol. 58(1), 267 (1996), doi:10.1111/j.2517-6161.1996.tb02080.x.
Zhou, Z., Li, X. and Zare, R. N., Optimizing chemical reactions with deep reinforcement learning, ACS Cent. Sci. 3(12), 1337 (2017), doi:10.1021/acscentsci.7b00492.
Chervonenkis, A., Early History of Support Vector Machines, pp. 13–20, Springer, Berlin, Heidelberg, doi:10.1007/978-3-642-41136-6_3 (2013).
Boser, B. E., Guyon, I. M. and Vapnik, V. N., A training algorithm for optimal margin classifiers, In Proc. Fifth Annu. Workshop Comput. Learn. Theory, COLT ’92, pp. 144–152. Association for Computing Machinery, doi:10.1145/130385.130401 (1992).
Platt, J., Sequential minimal optimization: A fast algorithm for training support vector machines, Tech. rep. MSR-TR-98-14, Microsoft (1998).
Minsky, M. and Papert, S., Perceptrons: An Introduction to Computational Geometry, MIT Press, doi:10.7551/mitpress/11301.001.0001 (1969).
Rosenblatt, F., The perceptron: A probabilistic model for information storage and organization in the brain, Psychol. Rev. 65(6), 386 (1958), doi:10.1037/h0042519.
Kolmogorov, A. N., On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition, Dokl. Akad. Nauk 114, 953–956 (1957).
Cybenko, G., Approximation by superpositions of a sigmoidal function, Math. Control Signals Syst. 2(4), 303 (1989), doi:10.1007/BF02551274.
Hornik, K., Approximation capabilities of multilayer feedforward networks, Neural Netw. 4(2), 251 (1991), doi:10.1016/0893-6080(91)90009-T.
Rumelhart, D. E., Hinton, G. E. and Williams, R. J., Learning representations by back-propagating errors, Nature 323(6088), 533 (1986), doi:10.1038/323533a0.
Kingma, D. P. and Welling, M., Auto-encoding variational Bayes, In ICLR 2014 – Int. Conf. Learn. Represent. (2014), arXiv:1312.6114.
Rezende, D. J., Mohamed, S. and Wierstra, D., Stochastic backpropagation and approximate inference in deep generative models, In ICML 2014 – Int. Conf. Mach. Learn., vol. 32, pp. 1278–1286 (2014), arXiv:1401.4082.
Ng, A., Sparse autoencoder, CS294A Lecture notes, Stanford University (2011).
Makhzani, A. and Frey, B., K-sparse autoencoders, In ICLR 2014 – Int. Conf. Learn. Represent. (2014), arXiv:1312.5663.
Vincent, P., Larochelle, H., Bengio, Y. and Manzagol, P.-A., Extracting and composing robust features with denoising autoencoders, In ICML 2008 – 25th Int. Conf. Mach. Learn., pp. 1096–1103, doi:10.1145/1390156.1390294 (2008).
Burda, Y., Grosse, R. and Salakhutdinov, R., Importance weighted autoencoders, In ICLR 2016 – Int. Conf. Learn. Represent. (2016), arXiv:1509.00519.
Uria, B., Côté, M.-A., Gregor, K., Murray, I. and Larochelle, H., Neural autoregressive distribution estimation, J. Mach. Learn. Res. 17(1), 7184 (2016), doi:10.5555/2946645.3053487.
Hochreiter, S. and Schmidhuber, J., Long short-term memory, Neural Comput. 9(8), 1735 (1997), doi:10.1162/neco.1997.9.8.1735.
Cho, K., van Merriënboer, B., Bahdanau, D. and Bengio, Y., On the properties of neural machine translation: Encoder–decoder approaches, In SSST-8 – 8th Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103–111, doi:10.3115/v1/W14-4012 (2014).
Wu, D., Wang, L. and Zhang, P., Solving statistical mechanics using variational autoregressive networks, Phys. Rev. Lett. 122(8), 080602 (2019), doi:10.1103/PhysRevLett.122.080602.
Nicoli, K. A., Nakajima, S., Strodthoff, N., Samek, W., Müller, K.-R. and Kessel, P., Asymptotically unbiased estimation of physical observables with neural samplers, Phys. Rev. E 101(2), 023304 (2020), doi:10.1103/PhysRevE.101.023304.
Liu, J.-G., Mao, L., Zhang, P. and Wang, L., Solving quantum statistical mechanics with variational autoregressive networks and quantum circuits, Mach. Learn.: Sci. Technol. 2(2), 025011 (2021), doi:10.1088/2632-2153/aba19d.
Carrasquilla, J., Torlai, G., Melko, R. G. and Aolita, L., Reconstructing quantum states with generative models, Nat. Mach. Intell. 1(3), 155–161 (2019), doi:10.1038/s42256-019-0028-1.
Sharir, O., Levine, Y., Wies, N., Carleo, G. and Shashua, A., Deep autoregressive models for the efficient variational simulation of many-body quantum systems, Phys. Rev. Lett. 124(2), 020503 (2020), doi:10.1103/PhysRevLett.124.020503.
Bishop, C. M., Pattern Recognition and Machine Learning, Springer, Berlin, Heidelberg (2006).
Mehta, P., Bukov, M., Wang, C.-H., et al., A high-bias, low-variance introduction to machine learning for physicists, Phys. Rep. 810, 1 (2019), doi:10.1016/j.physrep.2019.03.001.
Zhang, A., Lipton, Z. C., Li, M. and Smola, A. J., Dive into deep learning (2021), arXiv:2106.11342, https://d2l.ai/.
Neupert, T., Fischer, M. H., Greplova, E., Choo, K. and Denner, M., Introduction to machine learning for the sciences (2021), arXiv:2102.04883.
Carrasquilla, J. and Torlai, G., How to use neural networks to investigate quantum many-body physics, PRX Quantum 2, 040201 (2021), doi:10.1103/PRXQuantum.2.040201.
Sachdev, S., Quantum Phase Transitions, Cambridge University Press, doi:10.1017/cbo9780511973765 (2011).
Goldenfeld, N., Lectures on Phase Transitions and the Renormalization Group, CRC Press, doi:10.1201/9780429493492 (2018).
Onsager, L., Crystal statistics. I. A two-dimensional model with an order-disorder transition, Phys. Rev. 65, 117 (1944), doi:10.1103/PhysRev.65.117.
Wegner, F. J., Duality in generalized Ising models and phase transitions without local order parameters, J. Math. Phys. 12(10), 2259 (1971), doi:10.1063/1.1665530.
Landau, L. D., On the theory of phase transitions. I., Phys. Z. Sowjet. 11, 26 (1937), Reprinted in Collected Papers of L. D. Landau.
Landau, L. D., On the theory of phase transitions. II., Phys. Z. Sowjet. 11, 545 (1937), Reprinted in Collected Papers of L. D. Landau.
Wen, X.-G., Topological orders in rigid states, Int. J. Mod. Phys. B 4(02), 239 (1990), doi:10.1142/S0217979290000139.
Bernevig, B. and Hughes, T., Topological Insulators and Topological Superconductors, Princeton University Press, Princeton, doi:10.1515/9781400846733 (2013).
Käming, N., Dawid, A., Kottmann, K., et al., Unsupervised machine learning of topological phase transitions from experimental data, Mach. Learn.: Sci. Technol. 2, 035037 (2021), doi:10.1088/2632-2153/abffe7.
Sun, N., Yi, J., Zhang, P., Shen, H. and Zhai, H., Deep learning topological invariants of band insulators, Phys. Rev. B 98, 085402 (2018), doi:10.1103/PhysRevB.98.085402.
Zhang, P., Shen, H. and Zhai, H., Machine learning topological invariants with neural networks, Phys. Rev. Lett. 120, 066401 (2018), doi:10.1103/PhysRevLett.120.066401.
Caio, M. D., Caccin, M., Baireuther, P., Hyart, T. and Fruchart, M., Machine learning assisted measurement of local topological invariants (2019), arXiv:1901.03346.
Holanda, N. L. and Griffith, M. A. R., Machine learning topological phases in real space, Phys. Rev. B 102, 054107 (2020), doi:10.1103/PhysRevB.102.054107.
Baireuther, P., Płodzień, M., Ojanen, T., Tworzydło, J. and Hyart, T., Identifying Chern numbers of superconductors from local measurements, SciPost Phys. Core 6, 087 (2023), doi:10.21468/SciPostPhysCore.6.4.087.
Huembeli, P., Dauphin, A. and Wittek, P., Identifying quantum phase transitions with adversarial neural networks, Phys. Rev. B 97, 134109 (2018), doi:10.1103/PhysRevB.97.134109.
Fefferman, C., Mitter, S. and Narayanan, H., Testing the manifold hypothesis, J. Am. Math. Soc. 29(4), 983 (2016), doi:10.1090/jams/852.
Wang, L., Discovering phase transitions with unsupervised learning, Phys. Rev. B 94, 195105 (2016), doi:10.1103/PhysRevB.94.195105.
Wetzel, S. J., Unsupervised learning of phase transitions: From principal component analysis to variational autoencoders, Phys. Rev. E 96, 022140 (2017), doi:10.1103/PhysRevE.96.022140.
Hu, W., Singh, R. R. and Scalettar, R. T., Discovering phases, phase transitions, and crossovers through unsupervised machine learning: A critical examination, Phys. Rev. E 95(6), 062122 (2017), doi:10.1103/PhysRevE.95.062122.
Schölkopf, B., Smola, A. and Müller, K.-R., Nonlinear component analysis as a kernel eigenvalue problem, Neural Comput. 10(5), 1299 (1998), doi:10.1162/089976698300017467.
Van der Maaten, L. and Hinton, G., Visualizing data using t-SNE, J. Mach. Learn. Res. 9(11) (2008).
McInnes, L., Healy, J. and Melville, J., UMAP: Uniform manifold approximation and projection for dimension reduction (2018), arXiv:1802.03426.
Hinton, G. E. and Roweis, S. T., Stochastic neighbor embedding, In NIPS 2002 – Adv. Neural Inf. Process. Syst. (2002).
Greplova, E., Valenti, A., Boschung, G., Schäfer, F., Lörch, N. and Huber, S. D., Unsupervised identification of topological phase transitions using predictive models, New J. Phys. 22(4), 045003 (2020), doi:10.1088/1367-2630/ab7771.
Arnold, J., Schäfer, F., Žonda, M. and Lode, A. U. J., Interpretable and unsupervised phase classification, Phys. Rev. Res. 3, 033052 (2021), doi:10.1103/PhysRevResearch.3.033052.
Carrasquilla, J. and Melko, R. G., Machine learning phases of matter, Nat. Phys. 13(5), 431 (2017), doi:10.1038/nphys4035.
Kottmann, K., Metz, F., Fraxanet, J. and Baldelli, N., Variational quantum anomaly detection: Unsupervised mapping of phase diagrams on a physical quantum computer, Phys. Rev. Res. 3, 043184 (2021), doi:10.1103/PhysRevResearch.3.043184.
Szołdra, T., Sierant, P., Lewenstein, M. and Zakrzewski, J., Unsupervised detection of decoupled subspaces: Many-body scars and beyond, Phys. Rev. B 105, 224205 (2022), doi:10.1103/PhysRevB.105.224205.
Kottmann, K., Huembeli, P., Lewenstein, M. and Acín, A., Unsupervised phase discovery with deep anomaly detection, Phys. Rev. Lett. 125, 170603 (2020), doi:10.1103/PhysRevLett.125.170603.
Szołdra, T., Sierant, P., Kottmann, K., Lewenstein, M. and Zakrzewski, J., Detecting ergodic bubbles at the crossover to many-body localization using neural networks, Phys. Rev. B 104, L140202 (2021), doi:10.1103/PhysRevB.104.L140202.
van Nieuwenburg, E. P. L., Liu, Y.-H. and Huber, S. D., Learning phase transitions by confusion, Nat. Phys. 13(5), 435 (2017), doi:10.1038/nphys4037.
Liu, Y.-H. and van Nieuwenburg, E. P. L., Discriminative cooperative networks for detecting phase transitions, Phys. Rev. Lett. 120, 176401 (2018), doi:10.1103/PhysRevLett.120.176401.
Lee, S. S. and Kim, B. J., Confusion scheme in machine learning detects double phase transitions and quasi-long-range order, Phys. Rev. E 99, 043308 (2019), doi:10.1103/PhysRevE.99.043308.
Richter-Laskowska, M., Kurpas, M. and Maśka, M. M., Learning by confusion approach to identification of discontinuous phase transitions, Phys. Rev. E 108, 024113 (2023), doi:10.1103/PhysRevE.108.024113.
Schäfer, F. and Lörch, N., Vector field divergence of predictive model output as indication of phase transitions, Phys. Rev. E 99, 062107 (2019), doi:10.1103/PhysRevE.99.062107.
Ronhovde, P., Chakrabarty, S., Hu, D., et al., Detecting hidden spatial and spatio-temporal structures in glasses and complex physical systems by multiresolution network clustering, Eur. Phys. J. E 34(9), 1 (2011), doi:10.1140/epje/i2011-11105-9.
Ronhovde, P., Chakrabarty, S., Hu, D., et al., Detection of hidden structures for arbitrary scales in complex physical systems, Sci. Rep. 2(1), 1 (2012), doi:10.1038/srep00329.
Vargas-Hernández, R. A., Sous, J., Berciu, M. and Krems, R. V., Extrapolating quantum observables with machine learning: Inferring multiple phase transitions from properties of a single phase, Phys. Rev. Lett. 121, 255702 (2018), doi:10.1103/PhysRevLett.121.255702.
Shirinyan, A. A., Kozin, V. K., Hellsvik, J., Pereiro, M., Eriksson, O. and Yudin, D., Self-organizing maps as a method for detecting phase transitions and phase identification, Phys. Rev. B 99, 041108 (2019), doi:10.1103/PhysRevB.99.041108.
Mazaheri, T., Sun, B., Scher-Zagier, J., et al., Stochastic replica voting machine prediction of stable cubic and double perovskite materials and binary alloys, Phys. Rev. Mater. 3, 063802 (2019), doi:10.1103/PhysRevMaterials.3.063802.
Balabanov, O. and Granath, M., Unsupervised learning using topological data augmentation, Phys. Rev. Res. 2, 013354 (2020), doi:10.1103/PhysRevResearch.2.013354.
Gu, S.-J., Fidelity approach to quantum phase transitions, Int. J. Mod. Phys. B 24(23), 4371 (2010), doi:10.1142/S0217979210056335.
Rem, B. S., Käming, N., Tarnowski, M., et al., Identifying quantum phase transitions using artificial neural networks on experimental data, Nat. Phys. 15, 917 (2019), doi:10.1038/s41567-019-0554-0.
Bohrdt, A., Kim, S., Lukin, A., et al., Analyzing nonequilibrium quantum states through snapshots with artificial neural networks, Phys. Rev. Lett. 127, 150504 (2021), doi:10.1103/PhysRevLett.127.150504.
Lipton, Z. C., The mythos of model interpretability, Commun. ACM 61(10), 35 (2018), doi:10.1145/3233231.
Bohrdt, A., Chiu, C. S., Ji, G., et al., Classifying snapshots of the doped Hubbard model with machine learning, Nat. Phys. 15(9), 921 (2019), doi:10.1038/s41567-019-0565-x.
Zhang, Y., Ginsparg, P. and Kim, E.-A., Interpreting machine learning of topological quantum phase transitions, Phys. Rev. Res. 2, 023283 (2020), doi:10.1103/PhysRevResearch.2.023283.
Cranmer, M., Sanchez-Gonzalez, A., Battaglia, P., et al., Discovering symbolic models from deep learning with inductive biases, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2006.11287.Google Scholar
Ponte, P. and Melko, R. G., Kernel methods for interpretable machine learning of order parameters, Phys. Rev. B 96, 205146 (2017), doi:10.1103/PhysRevB.96.205146.CrossRefGoogle Scholar
Greitemann, J., Liu, K. and Pollet, L., Probing hidden spin order with interpretable machine learning, Phys. Rev. B 99, 060404 (2019), doi:10.1103/PhysRevB.99.060404.CrossRefGoogle Scholar
Liu, K., Greitemann, J. and Pollet, L., Learning multiple order parameters with interpretable machines, Phys. Rev. B 99, 104410 (2019), doi:10.1103/PhysRevB.99.104410.Google Scholar
Iten, R., Metger, T., Wilming, H., Del Rio, L. and Renner, R., Discovering physical concepts with neural networks, Phys. Rev. Lett. 124(1), 010508 (2020), doi:10.1103/PhysRevLett.124.010508.CrossRefGoogle ScholarPubMed
Wetzel, S. J. and Scherzer, M., Machine learning of explicit order parameters: From the Ising model to SU(2) lattice gauge theory, Phys. Rev. B 96(18), 184410 (2017), doi:10.1103/PhysRevB.96.184410.CrossRefGoogle Scholar
Wetzel, S. J., Melko, R. G., Scott, J., Panju, M. and Ganesh, V., Discovering symmetry invariants and conserved quantities by interpreting Siamese neural networks, Phys. Rev. Res. 2, 033499 (2020), doi:10.1103/PhysRevResearch.2.033499.Google Scholar
Miles, C., Bohrdt, A., Wu, R., et al., Correlator convolutional neural networks: An interpretable architecture for image-like quantum matter data, Nat. Commun. 12(1), 1 (2021), doi:10.1038/s41467-021-23952-w.CrossRefGoogle ScholarPubMed
Radha, S. K. and Jao, C., Generalized quantum similarity learning (2022), arXiv:2201.02310.Google Scholar
Patel, Z., Merali, E. and Wetzel, S. J., Unsupervised learning of Rydberg atom array phase diagram with Siamese neural networks, New J. Phys. 24(11), 113021 (2022), doi:10.1088/1367-2630/ac9c7a.CrossRefGoogle Scholar
Han, X.-Q., Xu, S.-S., Feng, Z., He, R.-Q. and Lu, Z.-Y., A simple framework for contrastive learning phases of matter Chin. Phys. Lett. 40, 027501 (2023), doi:10.1088/0256-307X/40/2/027501.CrossRefGoogle Scholar
Liu, Z. and Tegmark, M., Machine learning conservation laws from trajectories, Phys. Rev. Lett. 126, 180604 (2021), doi:10.1103/PhysRevLett.126.180604.
Liu, Z., Madhavan, V. and Tegmark, M., Machine learning conservation laws from differential equations, Phys. Rev. E 106, 045307 (2022), doi:10.1103/PhysRevE.106.045307.
Ha, S. and Jeong, H., Discovering invariants via machine learning, Phys. Rev. Res. 3, L042035 (2021), doi:10.1103/PhysRevResearch.3.L042035.
Keskar, N. S., Nocedal, J., Tang, P. T. P., Mudigere, D. and Smelyanskiy, M., On large-batch training for deep learning: Generalization gap and sharp minima, In ICLR 2017 – Int. Conf. Learn. Represent. (2017), arXiv:1609.04836.
Wu, L., Zhu, Z. and Weinan, E., Towards understanding generalization of deep learning: Perspective of loss landscapes (2017), arXiv:1706.10239.
Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D. and Wilson, A. G., Averaging weights leads to wider optima and better generalization, In UAI 2018 – 34th Conf. Uncertain. Artif. Intell., vol. 2, pp. 876–885 (2018), arXiv:1803.05407.
He, H., Huang, G. and Yuan, Y., Asymmetric valleys: Beyond sharp and flat local minima, In NeurIPS 2019 – Adv. Neural Inf. Process. Syst. (2019), arXiv:1902.00744.
Dinh, L., Pascanu, R., Bengio, S. and Bengio, Y., Sharp minima can generalize for deep nets, In ICML 2017 – 34th Int. Conf. Mach. Learn., vol. 3, pp. 1705–1714 (2017), arXiv:1703.04933.
Dawid, A., Huembeli, P., Tomza, M., Lewenstein, M. and Dauphin, A., Hessian-based toolbox for reliable and interpretable machine learning in physics, Mach. Learn.: Sci. Technol. 3, 015002 (2022), doi:10.1088/2632-2153/ac338d.
Koh, P. W. and Liang, P., Understanding black-box predictions via influence functions, In ICML 2017 – 34th Int. Conf. Mach. Learn., vol. 70, pp. 1885–1894. PMLR (2017), arXiv:1703.04730.
Schulam, P. and Saria, S., Can you trust this prediction? Auditing pointwise reliability after learning, In AISTATS 2019 – Int. Conf. Artif. Intell. Stat., vol. 89, pp. 1022–1031. PMLR (2020), arXiv:1901.00403.
Madras, D., Atwood, J. and D’Amour, A., Detecting extrapolation with local ensembles, In ICLR 2020 – Int. Conf. Learn. Represent. (2020), arXiv:1910.09573.
Dawid, A., Huembeli, P., Tomza, M., Lewenstein, M. and Dauphin, A., Phase detection with neural networks: Interpreting the black box, New J. Phys. 22(11), 115001 (2020), doi:10.1088/1367-2630/abc463.
Arnold, J. and Schäfer, F., Replacing neural networks by optimal analytical predictors for the detection of phase transitions, Phys. Rev. X 12, 031044 (2022), doi:10.1103/PhysRevX.12.031044.
Arnold, J., Schäfer, F., Edelman, A. and Bruder, C., Mapping out phase diagrams with generative classifiers, Phys. Rev. Lett. 132, 207301 (2024), doi:10.1103/PhysRevLett.132.207301.
Molnar, C., Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, GitHub.io (2019).
Müller, K.-R., Mika, S., Tsuda, K. and Schölkopf, B., An introduction to kernel-based learning algorithms, In Handbook of Neural Network Signal Processing, pp. 94–133. CRC Press, Boca Raton, doi:10.1201/9781315220413 (2018).
Schölkopf, B. and Smola, A. J., Learning with Kernels, The MIT Press, Cambridge, MA, doi:10.7551/mitpress/4175.001.0001 (2018).
Hofmann, T., Schölkopf, B. and Smola, A. J., Kernel methods in machine learning, Ann. Statist. 36(3), 1171 (2008), doi:10.1214/009053607000000677.
Bachman, G. and Narici, L., Functional Analysis, Dover Publications, Mineola, NY (2000).
Mercer, J., XVI. Functions of positive and negative type, and their connection with the theory of integral equations, Philos. Trans. R. Soc. Lond. A 209(441–458), 415 (1909), doi:10.1098/rsta.1909.0016.
Aronszajn, N., Theory of reproducing kernels, Trans. Am. Math. Soc. 68(3), 337 (1950), doi:10.2307/1990404.
Schölkopf, B., Herbrich, R. and Smola, A. J., A generalized representer theorem, In Computational Learning Theory, pp. 416–426. Springer, doi:10.1007/3-540-44581-1_27 (2001).
Schölkopf, B., Smola, A. and Müller, K.-R., Kernel principal component analysis, In ICANN 1997 – Int. Conf. Neural Netw., pp. 583–588. Springer, doi:10.1007/BFb0020217 (1997).
Saunders, C., Gammerman, A. and Vovk, V., Ridge regression learning algorithm in dual variables, In Proc. 15th Int. Conf. Mach. Learn., ICML ’98, pp. 515–521. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, doi:10.5555/645527.657464 (1998).
Smola, A. J. and Schölkopf, B., On a kernel-based method for pattern recognition, regression, approximation, and operator inversion, Algorithmica 22(1), 211 (1998), doi:10.1007/PL00013831.
Garnett, R., Bayesian Optimization, Cambridge University Press, in preparation (2022).
Neal, R. M., Bayesian Learning for Neural Networks, vol. 118 of Lecture Notes in Statistics, Springer, doi:10.1007/978-1-4612-0745-0 (2012).
Cressie, N., The origins of kriging, Math. Geol. 22(3), 239 (1990), doi:10.1007/BF00889887.
Frazier, P. I., A tutorial on Bayesian optimization (2018), arXiv:1807.02811.
Schwarz, G., Estimating the dimension of a model, Ann. Stat. 6(2), 461–464 (1978), doi:10.1214/aos/1176344136.
Stoica, P. and Selen, Y., Model-order selection: A review of information criterion rules, IEEE Signal Process. Mag. 21(4), 36 (2004), doi:10.1109/MSP.2004.1311138.
Akaike, H., A new look at the statistical model identification, IEEE Trans. Automat. Contr. 19(6), 716 (1974), doi:10.1109/TAC.1974.1100705.
Duvenaud, D., Lloyd, J., Grosse, R., Tenenbaum, J. and Zoubin, G., Structure discovery in nonparametric regression through compositional kernel search, In ICML 2013 – Int. Conf. Mach. Learn., vol. 28, pp. 1166–1174. PMLR (2013), arXiv:1302.4922.
Duvenaud, D., Nickisch, H. and Rasmussen, C. E., Additive Gaussian processes, In NIPS 2011 – Adv. Neural Inf. Process. Syst. (2011), arXiv:1112.4394.
Dai, J. and Krems, R. V., Interpolation and extrapolation of global potential energy surfaces for polyatomic systems by Gaussian processes with composite kernels, J. Chem. Theory Comput. 16(3), 1386 (2020), doi:10.1021/acs.jctc.9b00700.
Vargas-Hernández, R. A. and Gardner, J. R., Gaussian processes with spectral delta kernel for higher accurate potential energy surfaces for large molecules (2021), arXiv:2109.14074.
Su, N. Q., Chen, J., Sun, Z., Zhang, D. H. and Xu, X., H + H2 quantum dynamics using potential energy surfaces based on the XYG3 type of doubly hybrid density functionals: Validation of the density functionals, J. Chem. Phys. 142, 084107 (2015), doi:10.1063/1.4913196.
Vargas-Hernández, R. A., Guan, Y., Zhang, D. H. and Krems, R. V., Bayesian optimization for the inverse scattering problem in quantum reaction dynamics, New J. Phys. 21, 022001 (2019), doi:10.1088/1367-2630/ab0099.
Deng, Z., Tutunnikov, I., Averbukh, I. S., Thachuk, M. and Krems, R. V., Bayesian optimization for inverse problems in time-dependent quantum dynamics, J. Chem. Phys. 153(16), 164111 (2020), doi:10.1063/5.0015896.
Cantin, J. T., Alexandrowicz, G. and Krems, R. V., Transfer-matrix theory of surface spin-echo experiments with molecules, Phys. Rev. A 101(6), 062703 (2020), doi:10.1103/PhysRevA.101.062703.
Sugisawa, N., Sugisawa, H., Otake, Y., Krems, R. V., Nakamura, H. and Fuse, S., Rapid and mild one-flow synthetic approach to unsymmetrical sulfamides guided by Bayesian optimization, Chem. Methods 1(11), 484 (2021), doi:10.1002/cmtd.202100053.
Jasinski, A., Montaner, J., Forrey, R. C., et al., Machine learning corrected quantum dynamics calculations, Phys. Rev. Res. 2(3), 032051 (2020), doi:10.1103/PhysRevResearch.2.032051.
Vargas-Hernández, R. A., Bayesian optimization for calibrating and selecting hybrid-density functional models, J. Phys. Chem. A 124(20), 4053 (2020), doi:10.1021/acs.jpca.0c01375.
Proppe, J., Gugler, S. and Reiher, M., Gaussian process-based refinement of dispersion corrections, J. Chem. Theory Comput. 15(11), 6046 (2019), doi:10.1021/acs.jctc.9b00627.
Tamura, R. and Hukushima, K., Bayesian optimization for computationally extensive probability distributions, PLoS One 13(3), 1 (2018), doi:10.1371/journal.pone.0193785.
Carr, S., Garnett, R. and Lo, C., BASC: Applying Bayesian optimization to the search for global minima on potential energy surfaces, In ICML 2016 – Int. Conf. Mach. Learn., vol. 48, pp. 898–907. PMLR (2016).
Chan, L., Hutchison, G. R. and Morris, G. M., Bayesian optimization for conformer generation, J. Cheminformatics 11(1), 32 (2019), doi:10.1186/s13321-019-0354-7.
Vargas-Hernández, R. A., Chuang, C. and Brumer, P., Multi-objective optimization for retinal photoisomerization models with respect to experimental observables, J. Chem. Phys. 155(23), 234109 (2021), doi:10.1063/5.0060259.
Duris, J., Kennedy, D., Hanuka, A., et al., Bayesian optimization of a free-electron laser, Phys. Rev. Lett. 124, 124801 (2020), doi:10.1103/PhysRevLett.124.124801.
Jalas, S., Kirchen, M., Messner, P., et al., Bayesian optimization of a laser-plasma accelerator, Phys. Rev. Lett. 126, 104801 (2021), doi:10.1103/PhysRevLett.126.104801.
Shalloo, R. J., Dann, S. J. D., Gruse, J.-N., et al., Automation and control of laser wakefield accelerators using Bayesian optimization, Nat. Commun. 11(1), 6355 (2020), doi:10.1038/s41467-020-20245-6.
Ueno, T., Rhone, T. D., Hou, Z., Mizoguchi, T. and Tsuda, K., COMBO: An efficient Bayesian optimization library for materials science, Mater. Discov. 4, 18 (2016), doi:10.1016/j.md.2016.04.001.
Jalem, R., Kanamori, K., Takeuchi, I., Nakayama, M., Yamasaki, H. and Saito, T., Bayesian-driven first-principles calculations for accelerating exploration of fast ion conductors for rechargeable battery application, Sci. Rep. 8(1), 5845 (2018), doi:10.1038/s41598-018-23852-y.
Ju, S., Shiga, T., Feng, L., Hou, Z., Tsuda, K. and Shiomi, J., Designing nanostructures for phonon transport via Bayesian optimization, Phys. Rev. X 7, 021024 (2017), doi:10.1103/PhysRevX.7.021024.
Kuhn, J., Spitz, J., Sonnweber-Ribic, P., Schneider, M. and Böhlke, T., Identifying material parameters in crystal plasticity by Bayesian optimization, Optim. Eng. (2021), doi:10.1007/s11081-021-09663-7.
Griffiths, R.-R. and Hernández-Lobato, J. M., Constrained Bayesian optimization for automatic chemical design using variational autoencoders, Chem. Sci. 11, 577 (2020), doi:10.1039/C9SC04026A.
Deshwal, A., Simon, C. M. and Doppa, J. R., Bayesian optimization of nanoporous materials, Mol. Syst. Des. Eng. 6, 1066 (2021), doi:10.1039/D1ME00093D.
Häse, F., Roch, L. M., Kreisbeck, C. and Aspuru-Guzik, A., Phoenics: A Bayesian optimizer for chemistry, ACS Cent. Sci. 4(9), 1134 (2018), doi:10.1021/acscentsci.8b00307.
Häse, F., Aldeghi, M., Hickman, R. J., Roch, L. M. and Aspuru-Guzik, A., Gryffin: An algorithm for Bayesian optimization of categorical variables informed by expert knowledge, Appl. Phys. Rev. 8(3), 031406 (2021), doi:10.1063/5.0048164.
Biswas, A., Morozovska, A. N., Ziatdinov, M., Eliseev, E. A. and Kalinin, S. V., Multi-objective Bayesian optimization of ferroelectric materials with interfacial control for memory and energy storage applications, J. Appl. Phys. 130(20), 204102 (2021), doi:10.1063/5.0068903.
Wang, Y., Chen, T.-Y. and Vlachos, D. G., NEXTorch: A design and Bayesian optimization toolkit for chemical sciences and engineering, J. Chem. Inf. Model. 61(11), 5312 (2021), doi:10.1021/acs.jcim.1c00637.
Aldeghi, M., Häse, F., Hickman, R. J., Tamblyn, I. and Aspuru-Guzik, A., Golem: An algorithm for robust experiment and process optimization, Chem. Sci. 12, 14792 (2021), doi:10.1039/D1SC01545A.
Vendeiro, Z., Ramette, J., Rudelis, A., et al., Machine-learning-accelerated Bose-Einstein condensation, Phys. Rev. Res. 4, 043216 (2022), doi:10.1103/PhysRevResearch.4.043216.
Sugisawa, H., Ida, T. and Krems, R. V., Gaussian process model of 51-dimensional potential energy surface for protonated imidazole dimer, J. Chem. Phys. 153(11), 114101 (2020), doi:10.1063/5.0023492.
Puzzarini, C., Bloino, J., Tasinato, N. and Barone, V., Accuracy and interpretability: The devil and the holy grail. New routes across old boundaries in computational spectroscopy, Chem. Rev. 119(13), 8131 (2019), doi:10.1021/acs.chemrev.9b00007.
Herrera, F., Madison, K. W., Krems, R. V. and Berciu, M., Investigating polaron transitions with polar molecules, Phys. Rev. Lett. 110(22), 223002 (2013), doi:10.1103/PhysRevLett.110.223002.
Deglmann, P., Schäfer, A. and Lennartz, C., Application of quantum calculations in the chemical industry: An overview, Int. J. Quantum Chem. 115(3), 107 (2015).
Cao, Y., Romero, J., Olson, J. P., et al., Quantum chemistry in the age of quantum computing, Chem. Rev. 119(19), 10856 (2019).
McCaskey, A. J., Parks, Z. P., Jakowski, J., et al., Quantum chemistry as a benchmark for near-term quantum computers, npj Quantum Inf. 5(1), 99 (2019).
Ciavarella, A. N. and Chernyshev, I. A., Preparation of the SU(3) lattice Yang-Mills vacuum with variational quantum methods, Phys. Rev. D 105(7), 074504 (2022).
Bañuls, M. C., Blatt, R., Catani, J., et al., Simulating lattice gauge theories within quantum technologies, Eur. Phys. J. D 74, 1 (2020).
Iannelli, G. and Jansen, K., Noisy Bayesian optimization for variational quantum eigensolvers (2021), arXiv:2112.00426.
Mueller, J., Lavrijsen, W., Iancu, C. and de Jong, W. A., Accelerating noisy VQE optimization with Gaussian processes, In 2022 IEEE Int. Conf. Quantum Comput. Eng. (QCE), pp. 215–225 (2022).
Nicoli, K. A., Anders, C. J., Funcke, L., et al., Physics-informed Bayesian optimization of variational quantum circuits, In NeurIPS 2023 – Adv. Neural Inf. Process. Syst. (2023).
Nakanishi, K. M., Fujii, K. and Todo, S., Sequential minimal optimization for quantum-classical hybrid algorithms, Phys. Rev. Res. 2, 043158 (2020), doi:10.1103/PhysRevResearch.2.043158.
Platt, J., Sequential minimal optimization: A fast algorithm for training support vector machines, Microsoft Research Technical Report (1998).
Asnaashari, K. and Krems, R. V., Gradient domain machine learning with composite kernels: Improving the accuracy of PES and force fields for large molecules, Mach. Learn.: Sci. Technol. 3(1), 015005 (2021), doi:10.1088/2632-2153/ac3845.
Wilson, A. G., Hu, Z., Salakhutdinov, R. and Xing, E. P., Deep kernel learning, In AISTATS 2016 – Int. Conf. Artif. Intell. Stat. (2016), arXiv:1511.02222.
Sun, S., Zhang, G., Wang, C., Zeng, W., Li, J. and Grosse, R., Differentiable compositional kernel learning for Gaussian processes, In ICML 2018 – Int. Conf. Mach. Learn. (2018), arXiv:1806.04326.
Gardner, J., Pleiss, G., Weinberger, K. Q., Bindel, D. and Wilson, A. G., GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration, In NeurIPS 2018 – Adv. Neural Inf. Process. Syst. (2018), arXiv:1809.11165.
Charlier, B., Feydy, J., Glaunès, J. A., Collin, F.-D. and Durif, G., Kernel operations on the GPU, with Autodiff, without memory overflows, J. Mach. Learn. Res. 22(74), 1 (2021), arXiv:2004.11127.
Matthews, A. G. d. G., van der Wilk, M., Nickson, T., et al., GPflow: A Gaussian process library using TensorFlow, J. Mach. Learn. Res. 18(40), 1 (2017), arXiv:1610.08733.
Blondel, M., Berthet, Q., Cuturi, M., et al., Efficient and modular implicit differentiation (2021), arXiv:2105.15183.
Huang, H.-Y., Kueng, R. and Preskill, J., Predicting many properties of a quantum system from very few measurements, Nat. Phys. 16(10), 1050 (2020), doi:10.1038/s41567-020-0932-7.
Rasmussen, C. E. and Williams, C. K. I., Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning. MIT Press, doi:10.7551/mitpress/3206.001.0001 (2005).
Krems, R. V., Bayesian machine learning for quantum molecular dynamics, Phys. Chem. Chem. Phys. 21(25), 13392 (2019), doi:10.1039/c9cp01883b.
Vargas-Hernández, R. A. and Krems, R. V., Physical Extrapolation of Quantum Observables by Generalization with Gaussian Processes, pp. 171–194, Springer International Publishing, Cham, doi:10.1007/978-3-030-40245-7_9 (2020).
Huang, H.-Y., Kueng, R., Torlai, G., Albert, V. V. and Preskill, J., Provably efficient machine learning for quantum many-body problems, Science 377(6613) (2022), doi:10.1126/science.abk3333.
Dirac, P. A. M. and Fowler, R. H., Quantum mechanics of many-electron systems, Proc. R. Soc. A: Math. Phys. Eng. Sci. 123(792), 714 (1929), doi:10.1098/rspa.1929.0094.
Carleo, G. and Troyer, M., Solving the quantum many-body problem with artificial neural networks, Science 355(6325), 602 (2017), doi:10.1126/science.aag2302.
Choo, K., Mezzacapo, A. and Carleo, G., Fermionic neural-network states for ab-initio electronic structure, Nat. Commun. 11(1), 2368 (2020), doi:10.1038/s41467-020-15724-9.
Saito, H., Solving the Bose–Hubbard model with machine learning, J. Phys. Soc. Jpn. 86(9), 093001 (2017), doi:10.7566/jpsj.86.093001.
White, S. R., Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992), doi:10.1103/PhysRevLett.69.2863.
Schollwöck, U., The density-matrix renormalization group in the age of matrix product states, Ann. Phys. (N.Y.) 326(1), 96 (2011), doi:10.1016/j.aop.2010.09.012.
Orús, R., A practical introduction to tensor networks: Matrix product states and projected entangled pair states, Ann. Phys. (N.Y.) 349, 117 (2014), doi:10.1016/j.aop.2014.06.013.
Van den Nest, M., Simulating quantum computers with probabilistic methods (2010), arXiv:0911.1624.
Hastings, W. K., Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57(1), 97 (1970), doi:10.2307/2334940.
Jastrow, R., Many-body problem with strong forces, Phys. Rev. 98, 1479 (1955), doi:10.1103/PhysRev.98.1479.
Manousakis, E., The spin-½ Heisenberg antiferromagnet on a square lattice and its application to the cuprous oxides, Rev. Mod. Phys. 63, 1 (1991), doi:10.1103/RevModPhys.63.1.
Brown, T., Mann, B., Ryder, N., et al., Language models are few-shot learners, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2005.14165.
Caron, M., Touvron, H., Misra, I., et al., Emerging properties in self-supervised vision transformers, In Proc. IEEE Int. Conf. Comput. Vis., pp. 9650–9660, doi:10.1109/ICCV48922.2021.00951 (2021).
Nichol, A., Dhariwal, P., Ramesh, A., et al., GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models (2021), arXiv:2112.10741.
Barra, A., Bernacchia, A., Santucci, E. and Contucci, P., On the equivalence of Hopfield networks and Boltzmann machines, Neural Netw. 34, 1 (2012), doi:10.1016/j.neunet.2012.06.003.
Montufar, G., Restricted Boltzmann machines: Introduction and review (2018), arXiv:1806.07066.
Deng, D.-L., Li, X. and Das Sarma, S., Quantum entanglement in neural network states, Phys. Rev. X 7(2), 021021 (2017), doi:10.1103/PhysRevX.7.021021.
Chen, J., Cheng, S., Xie, H., Wang, L. and Xiang, T., Equivalence of restricted Boltzmann machines and tensor network states, Phys. Rev. B 97(8), 085104 (2018), doi:10.1103/PhysRevB.97.085104.
Gao, X. and Duan, L.-M., Efficient representation of quantum many-body states with deep neural networks, Nat. Commun. 8(1) (2017), doi:10.1038/s41467-017-00705-2.
Luo, D., Carleo, G., Clark, B. K. and Stokes, J., Gauge equivariant neural networks for quantum lattice gauge theories, Phys. Rev. Lett. 127, 276402 (2021), doi:10.1103/PhysRevLett.127.276402.
Bansal, A., Chen, X., Russell, B., Gupta, A. and Ramanan, D., PixelNet: Representation of the pixels, by the pixels, and for the pixels (2017), arXiv:1702.06506.
Hibat-Allah, M., Ganahl, M., Hayward, L. E., Melko, R. G. and Carrasquilla, J., Recurrent neural network wave functions, Phys. Rev. Res. 2(2), 023358 (2020), doi:10.1103/PhysRevResearch.2.023358.
Schmitt, M. and Heyl, M., Quantum many-body dynamics in two dimensions with artificial neural networks, Phys. Rev. Lett. 125, 100503 (2020), doi:10.1103/PhysRevLett.125.100503.
Roth, C. and MacDonald, A. H., Group convolutional neural networks improve quantum state accuracy (2021), arXiv:2104.05085.
Glasser, I., Pancotti, N., August, M., Rodriguez, I. D. and Cirac, J. I., Neural-network quantum states, string-bond states, and chiral topological states, Phys. Rev. X 8, 011006 (2018), doi:10.1103/PhysRevX.8.011006.
Sharir, O., Shashua, A. and Carleo, G., Neural tensor contractions and the expressive power of deep neural quantum states, Phys. Rev. B 106, 205136 (2022), doi:10.1103/PhysRevB.106.205136.
Levine, Y., Sharir, O., Cohen, N. and Shashua, A., Quantum entanglement in deep learning architectures, Phys. Rev. Lett. 122, 065301 (2019), doi:10.1103/PhysRevLett.122.065301.
Calabrese, P. and Cardy, J., Entanglement entropy and quantum field theory, J. Stat. Mech. 2004, P06002 (2004), doi:10.1088/1742-5468/2004/06/p06002.
Eisert, J., Cramer, M. and Plenio, M. B., Colloquium: Area laws for the entanglement entropy, Rev. Mod. Phys. 82, 277 (2010), doi:10.1103/RevModPhys.82.277.
Hibat-Allah, M., Inack, E. M., Wiersema, R., Melko, R. G. and Carrasquilla, J., Variational neural annealing, Nat. Mach. Intell. 3, 952 (2021), doi:10.1038/s42256-021-00401-3.
Choo, K., Carleo, G., Regnault, N. and Neupert, T., Symmetries and many-body excitations with neural-network quantum states, Phys. Rev. Lett. 121, 167204 (2018), doi:10.1103/PhysRevLett.121.167204.
Valenti, A., Greplova, E., Lindner, N. H. and Huber, S. D., Correlation-enhanced neural networks as interpretable variational quantum states, Phys. Rev. Res. 4, L012010 (2022), doi:10.1103/PhysRevResearch.4.L012010.
Carleo, G., Nomura, Y. and Imada, M., Constructing exact representations of quantum many-body systems with deep neural networks, Nat. Commun. 9, 5322 (2018), doi:10.1038/s41467-018-07520-3.
Kaubruegger, R., Pastori, L. and Budich, J. C., Chiral topological phases from artificial neural networks, Phys. Rev. B 97, 195136 (2018), doi:10.1103/PhysRevB.97.195136.
Zheng, Y., He, H., Regnault, N. and Bernevig, B. A., Restricted Boltzmann machines and matrix product states of one-dimensional translationally invariant stabilizer codes, Phys. Rev. B 99, 155129 (2019), doi:10.1103/PhysRevB.99.155129.
Lu, S., Gao, X. and Duan, L.-M., Efficient representation of topologically ordered states with restricted Boltzmann machines, Phys. Rev. B 99, 155136 (2019), doi:10.1103/PhysRevB.99.155136.
Huang, Y. and Moore, J. E., Neural network representation of tensor network and chiral states, Phys. Rev. Lett. 127, 170601 (2021), doi:10.1103/PhysRevLett.127.170601.
Park, C.-Y. and Kastoryano, M. J., Geometry of learning neural quantum states, Phys. Rev. Res. 2, 023232 (2020), doi:10.1103/PhysRevResearch.2.023232.
Lin, S.-H. and Pollmann, F., Scaling of neural-network quantum states for time evolution, Phys. Status Solidi B 259(5), 2100172 (2022), doi:10.1002/pssb.202100172.
Vicentini, F., Hofmann, D., Szabó, A., et al., NetKet 3: Machine learning toolbox for many-body quantum systems (2021), arXiv:2112.10526.
Yuan, X., Endo, S., Zhao, Q., Li, Y. and Benjamin, S. C., Theory of variational quantum simulation, Quantum 3, 191 (2019), doi:10.22331/q-2019-10-07-191.
Carleo, G., Becca, F., Schiro, M. and Fabrizio, M., Localization and glassy dynamics of many-body quantum systems, Sci. Rep. 2, 243 (2012), doi:10.1038/srep00243.
Gutiérrez, I. L. and Mendl, C. B., Real time evolution with neural-network quantum states, Quantum 6, 627 (2022), doi:10.22331/q-2022-01-20-627.
Hofmann, D., Fabiani, G., Mentink, J., Carleo, G. and Sentef, M., Role of stochastic noise and generalization error in the time propagation of neural-network quantum states, SciPost Phys. 12(5) (2022), doi:10.21468/scipostphys.12.5.165.
Sorella, S., Green function Monte Carlo with stochastic reconfiguration, Phys. Rev. Lett. 80, 4558 (1998), doi:10.1103/PhysRevLett.80.4558.
Becca, F. and Sorella, S., Quantum Monte Carlo Approaches for Correlated Systems, Cambridge University Press, doi:10.1017/9781316417041 (2017).
Hangleiter, D., Roth, I., Nagaj, D. and Eisert, J., Easing the Monte Carlo sign problem, Sci. Adv. 6, eabb8341 (2020), doi:10.1126/sciadv.abb8341.
Luo, D. and Clark, B. K., Backflow transformations via neural networks for quantum many-body wave functions, Phys. Rev. Lett. 122, 226401 (2019), doi:10.1103/PhysRevLett.122.226401.
Hermann, J., Schätzle, Z. and Noé, F., Deep-neural-network solution of the electronic Schrödinger equation, Nat. Chem. 12, 891 (2020), doi:10.1038/s41557-020-0544-y.
Pfau, D., Spencer, J. S., Matthews, A. G. and Foulkes, W. M. C., Ab initio solution of the many-electron Schrödinger equation with deep neural networks, Phys. Rev. Res. 2, 033429 (2020), doi:10.1103/PhysRevResearch.2.033429.
Hermann, J., Spencer, J., Choo, K., et al., Ab-initio quantum chemistry with neural-network wavefunctions, Nat. Rev. Chem. 7, 692–709 (2023), doi:10.1038/s41570-023-00516-8.
Bravyi, S. B. and Kitaev, A. Y., Fermionic quantum computation, Ann. Phys. 298, 210 (2002), doi:10.1006/aphy.2002.6254.
Jordan, P. and Wigner, E., Über das Paulische Äquivalenzverbot, Zeitschrift für Physik 47, 631 (1928), doi:10.1007/BF01331938.
Zohar, E. and Cirac, J. I., Eliminating fermionic matter fields in lattice gauge theories, Phys. Rev. B 98, 075119 (2018), doi:10.1103/PhysRevB.98.075119.
Borla, U., Verresen, R., Grusdt, F. and Moroz, S., Confined phases of one-dimensional spinless fermions coupled to Z2 gauge theory, Phys. Rev. Lett. 124, 120503 (2020), doi:10.1103/PhysRevLett.124.120503.
Nys, J. and Carleo, G., Variational solutions to fermion-to-qubit mappings in two spatial dimensions, Quantum 6, 833 (2022), doi:10.22331/q-2022-10-13-833.
Barrett, T. D., Malyshev, A. and Lvovsky, A. I., Autoregressive neural-network wavefunctions for ab initio quantum chemistry, Nat. Mach. Intell. 4(4), 351 (2022), doi:10.1038/s42256-022-00461-z.
Jonsson, B., Bauer, B. and Carleo, G., Neural-network states for the classical simulation of quantum computing (2018), arXiv:1808.05232.
Medvidović, M. and Carleo, G., Classical variational simulation of the quantum approximate optimization algorithm, npj Quantum Inf. 7, 101 (2021), doi:10.1038/s41534-021-00440-z.
Farhi, E., Goldstone, J. and Gutmann, S., A quantum approximate optimization algorithm (2014), arXiv:1411.4028.
Harrigan, M. P., Sung, K. J., Neeley, M., et al., Quantum approximate optimization of non-planar graph problems on a planar superconducting processor, Nat. Phys. 17, 332 (2021), doi:10.1038/s41567-020-01105-y.
Carrasquilla, J., Luo, D., Pérez, F., et al., Probabilistic simulation of quantum circuits using a deep-learning architecture, Phys. Rev. A 104, 032610 (2021), doi:10.1103/PhysRevA.104.032610.
Vaswani, A., Shazeer, N., Parmar, N., et al., Attention is all you need, In Adv. Neural Inf. Process. Syst. (2017), arXiv:1706.03762.
Breuer, H.-P. and Petruccione, F., The Theory of Open Quantum Systems, Oxford University Press, doi:10.1093/acprof:oso/9780199213900.001.0001 (2007).
Yoshioka, N. and Hamazaki, R., Constructing neural stationary states for open quantum many-body systems, Phys. Rev. B 99, 214306 (2019), doi:10.1103/PhysRevB.99.214306.
Nagy, A. and Savona, V., Variational quantum Monte Carlo method with a neural-network ansatz for open quantum systems, Phys. Rev. Lett. 122, 250501 (2019), doi:10.1103/PhysRevLett.122.250501.
Vicentini, F., Biella, A., Regnault, N. and Ciuti, C., Variational neural-network ansatz for steady states in open quantum systems, Phys. Rev. Lett. 122, 250503 (2019), doi:10.1103/PhysRevLett.122.250503.
Hartmann, M. J. and Carleo, G., Neural-network approach to dissipative quantum many-body dynamics, Phys. Rev. Lett. 122, 250502 (2019), doi:10.1103/PhysRevLett.122.250502.
Luo, D., Chen, Z., Carrasquilla, J. and Clark, B. K., Autoregressive neural network for simulating open quantum systems via a probabilistic formulation, Phys. Rev. Lett. 128, 090501 (2022), doi:10.1103/PhysRevLett.128.090501.
Reh, M., Schmitt, M. and Gärttner, M., Time-dependent variational principle for open quantum systems with artificial neural networks, Phys. Rev. Lett. 127, 230501 (2021), doi:10.1103/PhysRevLett.127.230501.
Minganti, F., Biella, A., Bartolo, N. and Ciuti, C., Spectral theory of Liouvillians for dissipative phase transitions, Phys. Rev. A 98, 042118 (2018), doi:10.1103/PhysRevA.98.042118.
Gühne, O. and Tóth, G., Entanglement detection, Physics Reports 474(1), 1 (2009), doi:10.1016/j.physrep.2009.02.004.
da Silva, M. P., Landon-Cardinal, O. and Poulin, D., Practical characterization of quantum devices without tomography, Phys. Rev. Lett. 107, 210404 (2011), doi:10.1103/PhysRevLett.107.210404.CrossRefGoogle ScholarPubMed
Tavakoli, A., Semi-device-independent certification of independent quantum state and measurement devices, Phys. Rev. Lett. 125, 150503 (2020), doi:10.1103/PhysRevLett.125.150503.CrossRefGoogle ScholarPubMed
Kliesch, M. and Roth, I., Theory of quantum system certification, PRX Quantum 2, 010201 (2021), doi:10.1103/PRXQuantum.2.010201.Google Scholar
Friis, N., Vitagliano, G., Malik, M. and Huber, M., Entanglement certification from theory to experiment, Nat. Rev. Phys. 1(1), 72 (2019), doi:10.1038/s42254-018-0003-5.Google Scholar
Eisert, J., Hangleiter, D., Walk, N., et al., Quantum certification and benchmarking, Nat. Rev. Phys. 2(7), 382 (2020), doi:10.1038/s42254-020-0186-4.Google Scholar
Sotnikov, O. M., Iakovlev, I. A., Iliasov, A. A., Katsnelson, M. I., Bagrov, A. A. and Mazurenko, V. V., Certification of quantum states with hidden structure of their bitstrings, npj Quantum Inf. 8(1), 41 (2022), doi:10.1038/s41534-022-00559-7.Google Scholar
Chen, S., Li, J., Huang, B. and Liu, A., Tight bounds for quantum state certification with incoherent measurements, In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pp. 1205–1213. IEEE Computer Society, Los Alamitos, CA, USA, doi:10.1109/FOCS54457.2022.00118 (2022).CrossRefGoogle Scholar
Gočanin, A., Šupić, I. and Dakić, B., Sample-efficient device-independent quantum state verification and certification, PRX Quantum 3, 010317 (2022), doi:10.1103/PRXQuantum.3.010317.Google Scholar
Boghiu, E.-C., Hirsch, F., Lin, P.-S., Quintino, M. T. and Bowles, J., Device-independent and semi-device-independent entanglement certification in broadcast Bell scenarios, SciPost Phys. Core 6, 028 (2023), doi:10.21468/SciPostPhysCore.6.2.028.CrossRefGoogle Scholar
Hangleiter, D., Kliesch, M., Schwarz, M. and Eisert, J., Direct certification of a class of quantum simulations, Quantum Sci. Technol. 2(1), 015004 (2017), doi:10.1088/2058-9565/2/1/015004.
Frérot, I., Fadel, M. and Lewenstein, M., Probing quantum correlations in many-body systems: A review of scalable methods, Rep. Prog. Phys. 86(11), 114001 (2023), doi:10.1088/1361-6633/acf8d7.
Leonhardt, U., Quantum-state tomography and discrete Wigner function, Phys. Rev. Lett. 74, 4101 (1995), doi:10.1103/PhysRevLett.74.4101.
White, A. G., James, D. F. V., Eberhard, P. H. and Kwiat, P. G., Nonmaximally entangled states: Production, characterization, and utilization, Phys. Rev. Lett. 83, 3103 (1999), doi:10.1103/PhysRevLett.83.3103.
Roos, C. F., Lancaster, G. P. T., Riebe, M., et al., Bell states of atoms with ultralong lifetimes and their tomographic state analysis, Phys. Rev. Lett. 92, 220402 (2004), doi:10.1103/PhysRevLett.92.220402.
Häffner, H., Hänsel, W., Roos, C. F., et al., Scalable multiparticle entanglement of trapped ions, Nature 438(7068), 643 (2005), doi:10.1038/nature04279.
Gross, D., Liu, Y.-K., Flammia, S. T., Becker, S. and Eisert, J., Quantum state tomography via compressed sensing, Phys. Rev. Lett. 105, 150401 (2010), doi:10.1103/PhysRevLett.105.150401.
Gross, D., Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inf. Theory 57(3), 1548 (2011), doi:10.1109/TIT.2011.2104999.
Tóth, G., Wieczorek, W., Gross, D., Krischek, R., Schwemmer, C. and Weinfurter, H., Permutationally invariant quantum tomography, Phys. Rev. Lett. 105, 250403 (2010), doi:10.1103/PhysRevLett.105.250403.
Moroder, T., Hyllus, P., Tóth, G., et al., Permutationally invariant state reconstruction, New J. Phys. 14(10), 105001 (2012), doi:10.1088/1367-2630/14/10/105001.
Cramer, M., Plenio, M. B., Flammia, S. T., et al., Efficient quantum state tomography, Nat. Commun. 1(1) (2010), doi:10.1038/ncomms1147.
Baumgratz, T., Gross, D., Cramer, M. and Plenio, M. B., Scalable reconstruction of density matrices, Phys. Rev. Lett. 111, 020401 (2013), doi:10.1103/PhysRevLett.111.020401.
Lanyon, B. P., Maier, C., Holzäpfel, M., et al., Efficient tomography of a quantum many-body system, Nat. Phys. 13(12), 1158 (2017), doi:10.1038/nphys4244.
Palmieri, A., Kovlakov, E., Bianchi, F., et al., Experimental neural network enhanced quantum tomography, npj Quantum Inf. 6, 20 (2020), doi:10.1038/s41534-020-0248-6.
Pan and J. Zhang, Deep learning-based quantum state tomography with imperfect measurement, Int. J. Theor. Phys. 61(9) (2022), doi:10.1007/s10773-022-05209-4.
Koutný, D., Motka, L., Hradil, Z., Řeháček, J. and Sánchez-Soto, L. L., Neural-network quantum state tomography, Phys. Rev. A 106, 012409 (2022), doi:10.1103/PhysRevA.106.012409.
Ahmed, S., Sánchez Muñoz, C., Nori, F. and Kockum, A. F., Quantum state tomography with conditional generative adversarial networks, Phys. Rev. Lett. 127, 140502 (2021), doi:10.1103/PhysRevLett.127.140502.
Ma, H., Sun, Z., Dong, D., Chen, C. and Rabitz, H., Attention-based transformer networks for quantum state tomography (2023), arXiv:2305.05433.
Palmieri, A. M., Müller-Rigat, G., Srivastava, A. K., Lewenstein, M., Rajchel-Mieldzioć, G. and Płodzień, M., Enhancing quantum state tomography via resource-efficient attention-based neural networks, Phys. Rev. Res. 6, 033248 (2024), doi:10.1103/PhysRevResearch.6.033248.
Torlai, G., Mazzola, G., Carrasquilla, J., Troyer, M., Melko, R. and Carleo, G., Neural-network quantum state tomography, Nat. Phys. 14, 447 (2018), doi:10.1038/s41567-018-0048-5.
Szabó, A. and Castelnovo, C., Neural network wave functions and the sign problem, Phys. Rev. Res. 2, 033075 (2020), doi:10.1103/PhysRevResearch.2.033075.
Schmale, T., Reh, M. and Gärttner, M., Efficient quantum state tomography with convolutional neural networks, npj Quantum Inf. 8(1), 115 (2022), doi:10.1038/s41534-022-00621-4.
Torlai, G., Timar, B., van Nieuwenburg, E. P. L., et al., Integrating neural networks with a quantum simulator for state reconstruction, Phys. Rev. Lett. 123, 230504 (2019), doi:10.1103/PhysRevLett.123.230504.
Lohani, S., Kirby, B. T., Brodsky, M., Danaci, O. and Glasser, R. T., Machine learning assisted quantum state estimation, Mach. Learn.: Sci. Technol. 1(3), 035007 (2020), doi:10.1088/2632-2153/ab9a21.
Lohani, S., Searles, T. A., Kirby, B. T. and Glasser, R. T., On the experimental feasibility of quantum state reconstruction via machine learning, IEEE Trans. Quantum Eng. 2, 1 (2021), doi:10.1109/TQE.2021.3106958.
Lohani, S., Lukens, J. M., Jones, D. E., Searles, T. A., Glasser, R. T. and Kirby, B. T., Improving application performance with biased distributions of quantum states, Phys. Rev. Res. 3, 043145 (2021), doi:10.1103/PhysRevResearch.3.043145.
Lohani, S., Lukens, J. M., Glasser, R. T., Searles, T. A. and Kirby, B. T., Data-centric machine learning in quantum information science, Mach. Learn.: Sci. Technol. 3(4), 04LT01 (2022), doi:10.1088/2632-2153/ac9036.
Danaci, O., Lohani, S., Kirby, B. T. and Glasser, R. T., Machine learning pipeline for quantum state estimation with incomplete measurements, Mach. Learn.: Sci. Technol. 2(3), 035014 (2021), doi:10.1088/2632-2153/abe5f5.
Aaronson, S., Shadow tomography of quantum states, In Proc. 50th Annu. ACM SIGACT Symp. Theory Comput., STOC 2018, pp. 325–338. Association for Computing Machinery, New York, NY, USA, doi:10.1145/3188745.3188802 (2018).
Aaronson, S. and Rothblum, G. N., Gentle measurement of quantum states and differential privacy, In Proc. 51st Annu. ACM SIGACT Symp. Theory Comput., STOC 2019, pp. 322–333. Association for Computing Machinery, New York, NY, USA, doi:10.1145/3313276.3316378 (2019).
Altepeter, J. B., James, D. F. and Kwiat, P. G., 4 qubit quantum state tomography, In Paris, M. and Řeháček, J., eds., Quantum State Estimation, pp. 113–145. Springer, Berlin, Heidelberg (2004).
O'Donnell, R. and Wright, J., Efficient quantum tomography, In Proc. 48th Annu. ACM Symp. Theory Comput., STOC '16, pp. 899–912. Association for Computing Machinery, New York, NY, USA, doi:10.1145/2897518.2897544 (2016).
Koh, D. E. and Grewal, S., Classical shadows with noise, Quantum 6, 776 (2022), doi:10.22331/q-2022-08-16-776.
Elben, A., Kueng, R., Huang, H.-Y. R., et al., Mixed-state entanglement from local randomized measurements, Phys. Rev. Lett. 125, 200501 (2020), doi:10.1103/PhysRevLett.125.200501.
Elben, A., Flammia, S. T., Huang, H.-Y., et al., The randomized measurement toolbox, Nat. Rev. Phys. 5(1), 9 (2023), doi:10.1038/s42254-022-00535-2.
Donatella, K., Denis, Z., Le Boité, A. and Ciuti, C., Dynamics with autoregressive neural quantum states: Application to critical quench dynamics, Phys. Rev. A 108(2), 022210 (2023).
Sinibaldi, A., Giuliani, C., Carleo, G. and Vicentini, F., Unbiasing time-dependent variational Monte Carlo by projected quantum evolution, Quantum 7, 1131 (2023), doi:10.22331/q-2023-10-10-1131.
Kitagawa, M. and Ueda, M., Squeezed spin states, Phys. Rev. A 47, 5138 (1993), doi:10.1103/PhysRevA.47.5138.
Wineland, D. J., Bollinger, J. J., Itano, W. M. and Heinzen, D. J., Squeezed atomic states and projection noise in spectroscopy, Phys. Rev. A 50, 67 (1994), doi:10.1103/PhysRevA.50.67.
Płodzień, M., Kościelski, M., Witkowska, E. and Sinatra, A., Producing and storing spin-squeezed states and Greenberger-Horne-Zeilinger states in a one-dimensional optical lattice, Phys. Rev. A 102, 013328 (2020), doi:10.1103/PhysRevA.102.013328.
Płodzień, M., Lewenstein, M., Witkowska, E. and Chwedeńczuk, J., One-axis twisting as a method of generating many-body Bell correlations, Phys. Rev. Lett. 129, 250402 (2022), doi:10.1103/PhysRevLett.129.250402.
Płodzień, M., Wasak, T., Witkowska, E., Lewenstein, M. and Chwedeńczuk, J., Generation of scalable many-body Bell correlations in spin chains with short-range two-body interactions, Phys. Rev. Res. 6, 023050 (2024), doi:10.1103/PhysRevResearch.6.023050.
Hernández Yanes, T., Płodzień, M., Mackoit-Sinkevičienė, M., Žlabys, G., Juzeliūnas, G. and Witkowska, E., One- and two-axis squeezing via laser coupling in an atomic Fermi-Hubbard model, Phys. Rev. Lett. 129, 090403 (2022), doi:10.1103/PhysRevLett.129.090403.
Dziurawiec, M., Yanes, T. H., Płodzień, M., Gajda, M., Lewenstein, M. and Witkowska, E., Accelerating many-body entanglement generation by dipolar interactions in the Bose-Hubbard model, Phys. Rev. A 107(1) (2023), doi:10.1103/PhysRevA.107.013311.
Hernández Yanes, T., Žlabys, G., Płodzień, M., et al., Spin squeezing in open Heisenberg spin chains, Phys. Rev. B 108, 104301 (2023), doi:10.1103/PhysRevB.108.104301.
Adams, C., Carleo, G., Lovato, A. and Rocco, N., Variational Monte Carlo calculations of nuclei with an artificial neural-network correlator ansatz, Phys. Rev. Lett. 127, 022502 (2021), doi:10.1103/PhysRevLett.127.022502.
Bausch, J. and Leditzky, F., Quantum codes from neural networks, New J. Phys. 22(2), 023005 (2020), doi:10.1088/1367-2630/ab6cdd.
Vicentini, F., Machine learning toolbox for quantum many-body physics, Nat. Rev. Phys. 3, 156 (2021), doi:10.1038/s42254-021-00285-7.
Carleo, G., Beijing lecture notes and code, Lecture Notes (2017).
Melnikov, A. A., Nautrup, H. P., Krenn, M., et al., Active learning machine learns to create new quantum experiments, Proc. Natl. Acad. Sci. U.S.A. 115(6), 1221 (2018), doi:10.1073/pnas.1714936115.
Fawzi, A., Balog, M., Huang, A., et al., Discovering faster matrix multiplication algorithms with reinforcement learning, Nature 610(7930), 47 (2022), doi:10.1038/s41586-022-05172-4.
Mankowitz, D. J., Michi, A., Zhernov, A., et al., Faster sorting algorithms discovered using deep reinforcement learning, Nature 618(7964), 257 (2023), doi:10.1038/s41586-023-06004-9.
Sutton, R. S. and Barto, A. G., Reinforcement Learning: An Introduction, A Bradford Book, MIT Press, doi:10.5555/980651.980663 (2018).
Sutton, R. S., Learning to predict by the methods of temporal differences, Mach. Learn. 3, 9 (1988), doi:10.1007/BF00115009.
Rummery, G. A. and Niranjan, M., On-line Q-learning using connectionist systems, CUED/F-INFENG/TR 166, University of Cambridge, Department of Engineering (1994).
van Seijen, H., Mahmood, A. R., Pilarski, P. M., Machado, M. C. and Sutton, R. S., True online temporal-difference learning, J. Mach. Learn. Res. 17, 1 (2016), doi:10.48550/arXiv.1512.04087.
John, G. H., When the best move isn't optimal: Q-learning with exploration, In Proc. 12th Nat. Conf. Artif. Intell. (Vol. 2) (1994).
Watkins, C. J. C. H. and Dayan, P., Q-learning, Mach. Learn. 8, 279 (1992), doi:10.1007/BF00992698.
Smith, J. E. and Winkler, R. L., The optimizer's curse: Skepticism and postdecision surprise in decision analysis, Manag. Sci. 52, 311 (2006), doi:10.1287/mnsc.1050.0451.
Thrun, S. and Schwartz, A., Issues in using function approximation for reinforcement learning, In Proc. 4th Connectionist Models Summer School (1993).
Van Hasselt, H., Double Q-learning, In NIPS 2010 – Adv. Neural Inf. Process. Syst. (2010).
Lin, L.-J., Reinforcement Learning for Robots Using Neural Networks, Ph.D. thesis, Carnegie Mellon University, USA, UMI Order No. GAX93-22750 (1992).
Van Hasselt, H., Guez, A. and Silver, D., Deep reinforcement learning with double Q-learning, In Proc. AAAI Conf. Artif. Intell. (2016), arXiv:1509.06461.
Marbach, P. and Tsitsiklis, J. N., Simulation-based optimization of Markov reward processes, IEEE Trans. Automat. Contr. 46, 191 (2001), doi:10.1109/9.905687.
Sutton, R. S., McAllester, D., Singh, S. and Mansour, Y., Policy gradient methods for reinforcement learning with function approximation, In NIPS 1999 – Adv. Neural Inf. Process. Syst. (1999).
Williams, R. J., Simple statistical gradient-following algorithms for connectionist reinforcement learning, Mach. Learn. 8, 229 (1992), doi:10.1007/BF00992696.
Rennie, S. J., Marcheret, E., Mroueh, Y., Ross, J. and Goel, V., Self-critical sequence training for image captioning, In Proc. IEEE Conf. Comput. Vision and Pattern Recognition, doi:10.1109/CVPR.2017.131 (2017).
Schulman, J., Moritz, P., Levine, S., Jordan, M. and Abbeel, P., High-dimensional continuous control using generalized advantage estimation, In ICLR 2016 – Int. Conf. Learn. Represent. (2016), arXiv:1506.02438.
Barto, A. G., Sutton, R. S. and Anderson, C. W., Neuronlike adaptive elements that can solve difficult learning control problems, IEEE Trans. Syst. Man Cybern. Syst. 5, 834 (1983), doi:10.1109/TSMC.1983.6313077.
Konda, V. and Tsitsiklis, J., Actor-critic algorithms, In Adv. Neural Inf. Process. Syst. (1999).
Degris, T., White, M. and Sutton, R. S., Off-policy actor-critic, In ICML 2012 – Conf. Mach. Learn., pp. 179–186. Omnipress (2012), arXiv:1205.4839.
Mnih, V., Badia, A. P., Mirza, M., et al., Asynchronous methods for deep reinforcement learning, In ICML 2016 – 33rd Int. Conf. Mach. Learn., vol. 48, pp. 1928–1937 (2016), arXiv:1602.01783.
Amari, S.-i., Natural gradient works efficiently in learning, Neural Comput. 10(2), 251 (1998), doi:10.1162/089976698300017746.
Kakade, S. M., A natural policy gradient, In NIPS 2001 – Adv. Neural Inf. Process. Syst. (2001).
Peters, J. and Schaal, S., Natural actor-critic, Neurocomputing 71(7), 1180 (2008), doi:10.1016/j.neucom.2007.11.026.
Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M. and Lee, M., Natural actor-critic algorithms, Automatica 45(11), 2471 (2009), doi:10.1016/j.automatica.2009.07.008.
Schulman, J., Levine, S., Abbeel, P., Jordan, M. and Moritz, P., Trust region policy optimization, In ICML 2015 – Int. Conf. Mach. Learn. (2015), arXiv:1502.05477.
Wu, Y., Mansimov, E., Grosse, R. B., Liao, S. and Ba, J., Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation, In NIPS 2017 – Adv. Neural Inf. Process. Syst. (2017), arXiv:1708.05144.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A. and Klimov, O., Proximal policy optimization algorithms (2017), arXiv:1707.06347.
Briegel, H. J. and De las Cuevas, G., Projective simulation for artificial intelligence, Sci. Rep. 2(1), 1 (2012), doi:10.1038/srep00400.
Mautner, J., Makmal, A., Manzano, D., Tiersch, M. and Briegel, H. J., Projective simulation for classical learning agents: A comprehensive investigation, New Gener. Comput. 33(1), 69 (2015), doi:10.1007/s00354-015-0102-0.
Melnikov, A. A., Makmal, A. and Briegel, H. J., Benchmarking projective simulation in navigation problems, IEEE Access 6, 64639 (2018), doi:10.1109/ACCESS.2018.2876494.
Jerbi, S., Trenkwalder, L. M., Nautrup, H. P., Briegel, H. J. and Dunjko, V., Quantum enhancements for deep reinforcement learning in large spaces, PRX Quantum 2(1), 010328 (2021), doi:10.1103/PRXQuantum.2.010328.
Boyajian, W. L., Clausen, J., Trenkwalder, L. M., Dunjko, V. and Briegel, H. J., On the convergence of projective-simulation-based reinforcement learning in Markov decision processes, Quantum Mach. Intell. 2(2), 1 (2020), doi:10.1007/s42484-020-00023-9.
Melnikov, A. A., Makmal, A., Dunjko, V. and Briegel, H. J., Projective simulation with generalization, Sci. Rep. 7(1), 1 (2017), doi:10.1038/s41598-017-14740-y.
Eva, B., Ried, K., Müller, T., et al., How a minimal learning agent can infer the existence of unobserved variables in a complex environment, Minds Mach. 33, 185 (2023), doi:10.1007/s11023-022-09619-5.
Campbell, M., Hoane, A. J., Jr. and Hsu, F.-h., Deep Blue, Artif. Intell. 134(1–2), 57 (2002), doi:10.1016/S0004-3702(01)00129-1.
Yee, A. and Alvarado, M., Pattern recognition and Monte-Carlo tree search for Go gaming better automation, In IBERAMIA 2012 – Adv. Artif. Intell., doi:10.1007/978-3-642-34654-5_2 (2012).
Coulom, R., Efficient selectivity and backup operators in Monte-Carlo tree search, In Int. Conf. Comput. Games, pp. 72–83. Springer, doi:10.1007/978-3-540-75538-8_7 (2006).
Silver, D., Schrittwieser, J., Simonyan, K., et al., Mastering the game of Go without human knowledge, Nature 550(7676), 354 (2017), doi:10.1038/nature24270.
Silver, D., Hubert, T., Schrittwieser, J., et al., A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play, Science 362(6419), 1140 (2018), doi:10.1126/science.aar6404.
OpenAI, Berner, C., Brockman, G., et al., Dota 2 with large scale deep reinforcement learning (2019), arXiv:1912.06680.
Guss, W. H., Codel, C., Hofmann, K., et al., The MineRL 2019 competition on sample efficient reinforcement learning using human priors (2019), arXiv:1904.10079.
Schrittwieser, J., Antonoglou, I., Hubert, T., et al., Mastering Atari, Go, chess and shogi by planning with a learned model, Nature 588(7839), 604 (2020), doi:10.1038/s41586-020-03051-4.
Marquardt, F., Machine learning and quantum devices, SciPost Phys. Lect. Notes 29 (2021), doi:10.21468/SciPostPhysLectNotes.29.
Porotti, R., Essig, A., Huard, B. and Marquardt, F., Deep reinforcement learning for quantum state preparation with weak nonlinear measurements, Quantum 6, 747 (2022), doi:10.22331/q-2022-06-28-747.
Fösel, T., Tighineanu, P., Weiss, T. and Marquardt, F., Reinforcement learning with neural networks for quantum feedback, Phys. Rev. X 8, 031084 (2018), doi:10.1103/PhysRevX.8.031084.
Borah, S., Sarma, B., Kewming, M., Milburn, G. J. and Twamley, J., Measurement-based feedback quantum control with deep reinforcement learning for a double-well nonlinear potential, Phys. Rev. Lett. 127, 190403 (2021), doi:10.1103/PhysRevLett.127.190403.
Nguyen, V., Orbell, S. B., Lennon, D. T., et al., Deep reinforcement learning for efficient measurement of quantum devices, npj Quantum Inf. 7(1), 100 (2021), doi:10.1038/s41534-021-00434-x.
Preskill, J., Quantum computing in the NISQ era and beyond, Quantum 2, 79 (2018), doi:10.22331/q-2018-08-06-79.
Fösel, T., Yuezhen Niu, M., Marquardt, F. and Li, L., Quantum circuit optimization with deep reinforcement learning (2021), arXiv:2103.07585.
Wootters, W. K. and Zurek, W. H., A single quantum cannot be cloned, Nature 299(5886), 802 (1982), doi:10.1038/299802a0.
Shor, P. W., Scheme for reducing decoherence in quantum computer memory, Phys. Rev. A 52, R2493 (1995), doi:10.1103/PhysRevA.52.R2493.
Steane, A. M., Error correcting codes in quantum theory, Phys. Rev. Lett. 77, 793 (1996), doi:10.1103/PhysRevLett.77.793.
Gottesman, D., An introduction to quantum error correction and fault-tolerant quantum computation (2009), arXiv:0904.2557.
Sweke, R., Kesselring, M. S., van Nieuwenburg, E. P. L. and Eisert, J., Reinforcement learning decoders for fault-tolerant quantum computation, Mach. Learn.: Sci. Technol. 2(2), 025005 (2021), doi:10.1088/2632-2153/abc609.
Andreasson, P., Johansson, J., Liljestrand, S. and Granath, M., Quantum error correction for the toric code using deep reinforcement learning, Quantum 3, 183 (2019), doi:10.22331/q-2019-09-02-183.
Fitzek, D., Eliasson, M., Kockum, A. F. and Granath, M., Deep Q-learning decoder for depolarizing noise on the toric code, Phys. Rev. Res. 2, 023230 (2020), doi:10.1103/PhysRevResearch.2.023230.
Théveniaut, H. and van Nieuwenburg, E., A NEAT quantum error decoder, SciPost Phys. 11, 005 (2021), doi:10.21468/SciPostPhys.11.1.005.
Erhard, M., Krenn, M. and Zeilinger, A., Advances in high-dimensional quantum entanglement, Nat. Rev. Phys. 2(7), 365 (2020), doi:10.1038/s42254-020-0193-5.
Krenn, M., Malik, M., Fickler, R., Lapkiewicz, R. and Zeilinger, A., Automated search for new quantum experiments, Phys. Rev. Lett. 116, 090405 (2016), doi:10.1103/PhysRevLett.116.090405.
Krenn, M., Kottmann, J. S., Tischler, N. and Aspuru-Guzik, A., Conceptual understanding through efficient automated design of quantum optical experiments, Phys. Rev. X 11, 031044 (2021), doi:10.1103/PhysRevX.11.031044.
Krenn, M., Erhard, M. and Zeilinger, A., Computer-inspired quantum experiments, Nat. Rev. Phys. 2, 649 (2020), doi:10.1038/s42254-020-0230-4.
Peres, A., Separability criterion for density matrices, Phys. Rev. Lett. 77(8), 1413 (1996), doi:10.1103/PhysRevLett.77.1413.
Requena, B., Muñoz Gil, G., Lewenstein, M., Dunjko, V. and Tura, J., Certificates of quantum many-body properties assisted by machine learning, Phys. Rev. Res. 5, 013097 (2023), doi:10.1103/PhysRevResearch.5.013097.
Bukov, M., Day, A. G. R., Sels, D., Weinberg, P., Polkovnikov, A. and Mehta, P., Reinforcement learning in different phases of quantum control, Phys. Rev. X 8, 031086 (2018), doi:10.1103/PhysRevX.8.031086.
Niu, M. Y., Boixo, S., Smelyanskiy, V. N. and Neven, H., Universal quantum control through deep reinforcement learning, npj Quantum Inf. 5(33), 1 (2019), doi:10.1038/s41534-019-0141-3.
McKiernan, K. A., Davis, E., Alam, M. S. and Rigetti, C., Automated quantum programming via reinforcement learning for combinatorial optimization (2019), arXiv:1908.08054.
Zhang, Y.-H., Zheng, P.-L., Zhang, Y. and Deng, D.-L., Topological quantum compiling with reinforcement learning, Phys. Rev. Lett. 125, 170501 (2020), doi:10.1103/PhysRevLett.125.170501.
Baum, Y., Amico, M., Howell, S., et al., Experimental deep reinforcement learning for error-robust gate-set design on a superconducting quantum computer, PRX Quantum 2, 040324 (2021), doi:10.1103/PRXQuantum.2.040324.
Cao, C., An, Z., Hou, S.-Y., Zhou, D. L. and Zeng, B., Quantum imaginary time evolution steered by reinforcement learning, Commun. Phys. 5(57), 1 (2022), doi:10.1038/s42005-022-00837-y.
Metz, F. and Bukov, M., Self-correcting quantum many-body control using reinforcement learning with tensor networks, Nat. Mach. Intell. 5(7), 780 (2023), doi:10.1038/s42256-023-00687-5.
Qiu, Y., Zhuang, M., Huang, J. and Lee, C., Efficient and robust entanglement generation with deep reinforcement learning for quantum metrology, New J. Phys. 24(8), 083011 (2022), doi:10.1088/1367-2630/ac8285.
Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D. and Riedmiller, M., Deterministic policy gradient algorithms, In ICML 2014 – Int. Conf. Mach. Learn., doi:10.5555/3044805.3044850 (2014).
Levine, S., Reinforcement learning and control as probabilistic inference: Tutorial and review (2018), arXiv:1805.00909.
Haarnoja, T., Zhou, A., Abbeel, P. and Levine, S., Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, In ICML 2018 – Int. Conf. Mach. Learn. (2018), arXiv:1801.01290.
Abdolmaleki, A., Springenberg, J. T., Tassa, Y., et al., Maximum a posteriori policy optimisation (2018), arXiv:1806.06920.
Degrave, J., Felici, F., Buchli, J., et al., Magnetic control of tokamak plasmas through deep reinforcement learning, Nature 602(7897), 414 (2022), doi:10.1038/s41586-021-04301-9.
Sivak, V. V., Eickbusch, A., Liu, H., Royer, B., Tsioutsios, I. and Devoret, M. H., Model-free quantum control with reinforcement learning, Phys. Rev. X 12, 011059 (2022), doi:10.1103/PhysRevX.12.011059.
Sivak, V. V., Eickbusch, A., Royer, B., et al., Real-time quantum error correction beyond break-even, Nature 616(7955), 50 (2023), doi:10.1038/s41586-023-05782-6.
Karpathy, A., Software 2.0, Medium, Accessed: 04-08-2022 (2017).
Schuman, C. D., Potok, T. E., Patton, R. M., et al., A survey of neuromorphic computing and neural networks in hardware (2017), arXiv:1808.05232.
Roy, K., Jaiswal, A. and Panda, P., Towards spike-based machine intelligence with neuromorphic computing, Nature 575(7784), 607 (2019), doi:10.1038/s41586-019-1677-2.
Innes, M., Edelman, A., Fischer, K., et al., A differentiable programming system to bridge machine learning and scientific computing (2019), arXiv:1907.07587.
Johnson, S. G., Notes on adjoint methods for 18.335, Tech. rep., MIT, Introduction to Numerical Methods (2021).
Chen, R. T., Rubanova, Y., Bettencourt, J. and Duvenaud, D. K., Neural ordinary differential equations, In NeurIPS 2018 – Adv. Neural Inf. Process. Syst. (2018), arXiv:1806.07366.
Liao, H.-J., Liu, J.-G., Wang, L. and Xiang, T., Differentiable programming tensor networks, Phys. Rev. X 9(3), 031041 (2019), doi:10.1103/PhysRevX.9.031041.
Chen, B.-B., Gao, Y., Guo, Y.-B., et al., Automatic differentiation for second renormalization of tensor networks, Phys. Rev. B 101, 220409 (2020), doi:10.1103/PhysRevB.101.220409.
Torlai, G., Carrasquilla, J., Fishman, M. T., Melko, R. G. and Fisher, M. P. A., Wave-function positivization via automatic differentiation, Phys. Rev. Res. 2, 032060 (2020), doi:10.1103/PhysRevResearch.2.032060.
Ingraham, J., Riesselman, A., Sander, C. and Marks, D., Learning protein structure with a differentiable simulator, In ICLR 2018 – Int. Conf. Learn. Represent. (2018).
Schoenholz, S. S. and Cubuk, E. D., JAX, M.D.: A framework for differentiable physics, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:1912.04232.
Tamayo-Mendoza, T., Kreisbeck, C., Lindh, R. and Aspuru-Guzik, A., Automatic differentiation in quantum chemistry with applications to fully variational Hartree–Fock, ACS Cent. Sci. 4(5), 559 (2018), doi:10.1021/acscentsci.7b00586.
Zhao, L. and Neuscamman, E., Excited state mean-field theory without automatic differentiation, J. Chem. Phys. 152(20), 204112 (2020), doi:10.1063/5.0003438.
Li, L., Hoyer, S., Pederson, R., et al., Kohn-Sham equations as regularizer: Building prior knowledge into machine-learned physics, Phys. Rev. Lett. 126(3), 036401 (2021), doi:10.1103/PhysRevLett.126.036401.
Kasim, M. F. and Vinko, S. M., Learning the exchange-correlation functional from nature with fully differentiable density functional theory, Phys. Rev. Lett. 127, 126403 (2021), doi:10.1103/PhysRevLett.127.126403.
Abbott, S., Abbott, B. Z., Turney, J. M. and Schaefer, H. F., Arbitrary-order derivatives of quantum chemical methods via automatic differentiation, J. Phys. Chem. Lett. 12(12), 3232 (2021), doi:10.1021/acs.jpclett.1c00607.
Kasim, M. F., Lehtola, S. and Vinko, S. M., DQC: A Python program package for differentiable quantum chemistry, J. Chem. Phys. 156(8), 084801 (2022), doi:10.1063/5.0076202.
Bergholm, V., Izaac, J., Schuld, M., et al., PennyLane: Automatic differentiation of hybrid quantum-classical computations (2020), arXiv:1811.04968.
Zhang, X. and Chan, G. K.-L., Differentiable quantum chemistry with PySCF for molecules and materials at the mean-field level and beyond, J. Chem. Phys. 157(20), 204801 (2022), doi:10.1063/5.0118200.
Yoshikawa, N. and Sumita, M., Automatic differentiation for the direct minimization approach to the Hartree–Fock method, J. Phys. Chem. A 126(45), 8487 (2022), doi:10.1021/acs.jpca.2c05922.
Vargas-Hernández, R. A., Jorner, K., Pollice, R. and Aspuru-Guzik, A., Inverse molecular design and parameter optimization with Hückel theory using automatic differentiation, J. Chem. Phys. 158(10) (2023), doi:10.1063/5.0137103.
Khaneja, N., Reiss, T., Kehlet, C., Schulte-Herbrüggen, T. and Glaser, S. J., Optimal control of coupled spin dynamics: Design of NMR pulse sequences by gradient ascent algorithms, J. Magn. Reson. 172(2), 296 (2005), doi:10.1016/j.jmr.2004.11.004.
Leung, N., Abdelhafez, M., Koch, J. and Schuster, D., Speedup for quantum optimal control from automatic differentiation based on graphics processing units, Phys. Rev. A 95(4), 042318 (2017), doi:10.1103/PhysRevA.95.042318.
Abdelhafez, M., Schuster, D. I. and Koch, J., Gradient-based optimal control of open quantum systems using quantum trajectories and automatic differentiation, Phys. Rev. A 99, 052327 (2019), doi:10.1103/PhysRevA.99.052327.
Jirari, H., Optimal population inversion of a single dissipative two-level system, Eur. Phys. J. B 92(12), 265 (2019), doi:10.1140/epjb/e2019-100378-x.
Jirari, H., Time-optimal bang-bang control for the driven spin-boson system, Phys. Rev. A 102, 012613 (2020), doi:10.1103/PhysRevA.102.012613.
Schäfer, F., Kloc, M., Bruder, C. and Lörch, N., A differentiable programming method for quantum control, Mach. Learn.: Sci. Technol. 1(3), 035009 (2020), doi:10.1088/2632-2153/ab9802.
Vargas-Hernández, R. A., Chen, R. T. Q., Jung, K. A. and Brumer, P., Inverse design of dissipative quantum steady-states with implicit differentiation (2020), arXiv:2011.12808.Google Scholar
Vargas-Hernández, R. A., Chen, R. T. Q., Jung, K. A. and Brumer, P., Fully differentiable optimization protocols for non-equilibrium steady states, New J. Phys. 23(12), 123006 (2021), doi:10.1088/1367-2630/ac395e.CrossRefGoogle Scholar
Khait, I., Carrasquilla, J. and Segal, D., Optimal control of quantum thermal machines using machine learning, Phys. Rev. Res. 4, L012029 (2022), doi:10.1103/PhysRevResearch.4.L012029.Google Scholar
Coopmans, L., Luo, D., Kells, G., Clark, B. K. and Carrasquilla, J., Protocol discovery for the quantum control of majoranas by differentiable programming and natural evolution strategies, PRX Quantum 2(2), 020332 (2021), doi:10.1103/PRXQuantum.2.020332.Google Scholar
Schäfer, P. Sekatski, M. Koppenhöfer, C. Bruder and M. Kloc, Control of stochastic quantum dynamics by differentiable programming, Mach. Learn.: Sci. Technol. 2(3), 035004 (2021), doi:10.1088/2632-2153/abec22.CrossRefGoogle Scholar
Goerz, M. H., Carrasco, S. C. and Malinovsky, V. S., Quantum optimal control via semi-automatic differentiation, Quantum 6, 871 (2022), doi:10.22331/q-2022-12-07-871.CrossRefGoogle Scholar
Luo, X.-Z., Liu, J.-G., Zhang, P. and Wang, L., Yao.jl: Extensible, efficient framework for quantum algorithm design, Quantum 4, 341 (2020), doi:10.22331/q-2020-10-11-341.
Kyriienko, O., Paine, A. E. and Elfving, V. E., Solving nonlinear differential equations with differentiable quantum circuits, Phys. Rev. A 103(5), 052416 (2021), doi:10.1103/PhysRevA.103.052416.
Huembeli, P. and Dauphin, A., Characterizing the loss landscape of variational quantum circuits, Quantum Sci. Technol. 6(2), 025011 (2021), doi:10.1088/2058-9565/abdbc9.
Baydin, A. G., Pearlmutter, B. A., Radul, A. A. and Siskind, J. M., Automatic differentiation in machine learning: A survey, J. Mach. Learn. Res. 18(1), 5595–5637 (2018), doi:10.5555/3122009.3242010.
Wengert, R. E., A simple automatic derivative evaluation program, Commun. ACM 7(8), 463 (1964), doi:10.1145/355586.364791.
Griewank, A. and Walther, A., Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Society for Industrial and Applied Mathematics, Philadelphia, PA, doi:10.1137/1.9780898717761 (2008).
Linnainmaa, S., The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors, Master's thesis, Univ. Helsinki, Finland (1970).
Griewank, A., Who invented the reverse mode of differentiation?, Documenta Math., Extra Volume ISMP, 389 (2012).
Rackauckas, C., Parallel computing and scientific machine learning, 18.337J/6.338J Lecture notes, MIT Lecture (2020).
Ma, Y., Dixit, V., Innes, M., Guo, X. and Rackauckas, C., A comparison of automatic differentiation and continuous sensitivity analysis for derivatives of differential equation solutions (2021), arXiv:1812.01892.
Wang, L., Implementation of an inverse Schrödinger problem in JAX, Available as Google Colab Notebook: https://colab.research.google.com/drive/1e1NFA-E1Th7nN_9-DzQjAaglH6bwZtVU?usp=sharing (2021).
Xie, H., Liu, J.-G. and Wang, L., Automatic differentiation of dominant eigensolver and its applications in quantum physics, Phys. Rev. B 101, 245139 (2020), doi:10.1103/PhysRevB.101.245139.
Wang, L., Implementation of a quantum optimal control problem in JAX, Accessed: 03-11-2022. Available as Google Colab Notebook (2021).
Wang, Q., Hu, R. and Blonigan, P., Least squares shadowing sensitivity analysis of chaotic limit cycle oscillations, J. Comput. Phys. 267, 210 (2014), doi:10.1016/j.jcp.2014.03.002.
Metz, L., Freeman, C. D., Schoenholz, S. S. and Kachman, T., Gradients are not all you need (2021), arXiv:2111.05803.
Moses, W. S. and Churavy, V., Instead of rewriting foreign code for machine learning, automatically synthesize fast gradients, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2010.01709.
Liu, J.-G. and Zhao, T., Differentiate everything with a reversible embedded domain-specific language (2020), arXiv:2003.04617.
Silverman, B., Density Estimation for Statistics and Data Analysis, Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis, Boca Raton, doi:10.1201/9781315140919 (1998).
Hradil, Z., Quantum-state estimation, Phys. Rev. A 55, R1561 (1997), doi:10.1103/PhysRevA.55.R1561.
Paris, M. and Rehacek, J., Quantum State Estimation, Lecture Notes in Physics. Springer-Verlag, Berlin/Heidelberg, doi:10.1007/b98673 (2004).
Teo, Y. S., Introduction to Quantum-State Estimation, World Scientific, Singapore, doi:10.1142/9617 (2015).
Kullback, S. and Leibler, R. A., On information and sufficiency, Ann. Math. Stat. 22(1), 79 (1951), doi:10.1214/aoms/1177729694.
Hackett, D. C., Hsieh, C.-C., Albergo, M. S., et al., Flow-based sampling for multimodal distributions in lattice field theory (2021), arXiv:2107.00734.
Nicoli, K. A., Anders, C., Funcke, L., et al., Machine learning of thermodynamic observables in the presence of mode collapse (2021), arXiv:2111.11303.
Melko, R. G., Carleo, G., Carrasquilla, J. and Cirac, J. I., Restricted Boltzmann machines in quantum physics, Nat. Phys. 15(9), 887 (2019), doi:10.1038/s41567-019-0545-1.
van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A. and Kavukcuoglu, K., Conditional image generation with PixelCNN decoders, In NIPS 2016 – Adv. Neural Inf. Process. Syst. (2016), arXiv:1606.05328.
Kingma, D. P. and Welling, M., An introduction to variational autoencoders, Found. Trends Mach. Learn. 12(4), 307 (2019), doi:10.1561/2200000056.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al., Generative adversarial nets, In NIPS 2014 – Adv. Neural Inf. Process. Syst. (2014), arXiv:1406.2661.
Creswell, A., White, T., Dumoulin, V., Arulkumaran, K., Sengupta, B. and Bharath, A. A., Generative adversarial networks: An overview, IEEE Signal Process. Mag. 35(1), 53 (2018), doi:10.1109/MSP.2017.2765202.
Tabak, E. G. and Vanden-Eijnden, E., Density estimation by dual ascent of the log-likelihood, Commun. Math. Sci. 8(1), 217 (2010), doi:10.4310/CMS.2010.v8.n1.a11.
Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S. and Lakshminarayanan, B., Normalizing flows for probabilistic modeling and inference, J. Mach. Learn. Res. 22(57), 1 (2021), arXiv:1912.02762.
Kobyzev, I., Prince, S. J. D. and Brubaker, M. A., Normalizing flows: An introduction and review of current methods, IEEE Trans. Pattern Anal. Mach. Intell. 43(11), 3964 (2021), doi:10.1109/TPAMI.2020.2992934.
Ho, J., Jain, A. and Abbeel, P., Denoising diffusion probabilistic models, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2006.11239.
Yang, L., Zhang, Z. and Hong, S., Diffusion models: A comprehensive survey of methods and applications (2022), arXiv:2209.00796.
Noé, F., Olsson, S., Köhler, J. and Wu, H., Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning, Science 365(6457), eaaw1147 (2019), doi:10.1126/science.aaw1147.
Nicoli, K. A., Anders, C. J., Funcke, L., et al., Estimation of thermodynamic observables in lattice field theories with deep generative models, Phys. Rev. Lett. 126(3), 032001 (2021), doi:10.1103/PhysRevLett.126.032001.
Albergo, M. S., Kanwar, G. and Shanahan, P. E., Flow-based generative models for Markov chain Monte Carlo in lattice field theory, Phys. Rev. D 100, 034515 (2019), doi:10.1103/PhysRevD.100.034515.
Gabrié, M., Rotskoff, G. M. and Vanden-Eijnden, E., Adaptive Monte Carlo augmented with normalizing flows, Proc. Natl. Acad. Sci. U.S.A. 119(10), e2109420119 (2022), doi:10.1073/pnas.2109420119.
Hinton, G. E., A practical guide to training restricted Boltzmann machines, In Montavon, G., Orr, G. B. and Müller, K.-R., eds., Neural Networks: Tricks of the Trade: Second Edition, Lecture Notes in Computer Science, pp. 599–619. Springer, Berlin, Heidelberg, doi:10.1007/978-3-642-35289-8_32 (2012).
Gabrié, M., Tramel, E. W. and Krzakala, F., Training restricted Boltzmann machines via the Thouless-Anderson-Palmer free energy, In NIPS 2015 – Adv. Neural Inf. Process. Syst. (2015), arXiv:1506.02914.
Ramachandran, P., Paine, T. L., Khorrami, P., et al., Fast generation for convolutional autoregressive models (2017), arXiv:1704.06001.
van den Oord, A., Kalchbrenner, N. and Kavukcuoglu, K., Pixel recurrent neural networks, In ICML 2016 – Int. Conf. Mach. Learn. (2016), arXiv:1601.06759.
Cristoforetti, M., Jurman, G., Nardelli, A. I. and Furlanello, C., Towards meaningful physics from generative models (2019), arXiv:1705.09524.
Kanwar, G., Albergo, M. S., Boyda, D., et al., Equivariant flow-based sampling for lattice gauge theory, Phys. Rev. Lett. 125(12), 121601 (2020), doi:10.1103/PhysRevLett.125.121601.
Nicoli, K. A., Deep generative models for thermodynamics of spin systems and field theories, Ph.D. thesis, Technische Universität Berlin, Fakultät IV, Maschinelles Lernen, doi:10.14279/depositonce-17052 (2023).
Dinh, L., Krueger, D. and Bengio, Y., NICE: Non-linear independent components estimation, In ICLR 2015 – Int. Conf. Learn. Represent. (2015), arXiv:1410.8516.
Dinh, L., Sohl-Dickstein, J. and Bengio, S., Density estimation using real NVP, In ICLR 2017 – Int. Conf. Learn. Represent. (2017), arXiv:1605.08803.
Kingma, D. P. and Dhariwal, P., Glow: Generative flow with invertible 1x1 convolutions, In NeurIPS 2018 – Adv. Neural Inf. Process. Syst. (2018), arXiv:1807.03039.
Durkan, C., Bekasov, A., Murray, I. and Papamakarios, G., Neural spline flows, In NeurIPS 2019 – Adv. Neural Inf. Process. Syst. (2019), arXiv:1906.04032.
Grenioux, L., Durmus, A., Moulines, É. and Gabrié, M., On sampling with approximate transport maps, In ICML 2023 – 40th Int. Conf. Mach. Learn., vol. 202, pp. 11698–11733. PMLR (2023), arXiv:2302.04763.
Müller, T., McWilliams, B., Rousselle, F., Gross, M. and Novák, J., Neural importance sampling, ACM Trans. Graph. 38(5), 1 (2019), doi:10.1145/3341156.
Bacchio, S., Kessel, P., Schaefer, S. and Vaitl, L., Learning trivializing gradient flows for lattice gauge theories, Phys. Rev. D 107, L051504 (2023), doi:10.1103/PhysRevD.107.L051504.
Del Debbio, L., Rossney, J. M. and Wilson, M., Machine learning trivializing maps: A first step towards understanding how flow-based samplers scale up (2021), arXiv:2112.15532.
Del Debbio, L., Marsh Rossney, J. and Wilson, M., Efficient modeling of trivializing maps for lattice φ4 theory using normalizing flows: A first look at scalability, Phys. Rev. D 104, 094507 (2021), doi:10.1103/PhysRevD.104.094507.
Abbott, R., Albergo, M. S., Botev, A., et al., Aspects of scaling and scalability for flow-based sampling of lattice QCD, Eur. Phys. J. A 59, 257 (2023), doi:10.1140/epja/s10050-023-01154-w.
Kanwar, G., Albergo, M. S., Boyda, D., et al., Equivariant flow-based sampling for lattice gauge theory, Phys. Rev. Lett. 125, 121601 (2020), doi:10.1103/PhysRevLett.125.121601.
Boyda, D., Kanwar, G., Racanière, S., et al., Sampling using gauge equivariant flows, Phys. Rev. D 103, 074504 (2021), doi:10.1103/PhysRevD.103.074504.
Köhler, J., Klein, L. and Noé, F., Equivariant flows: Exact likelihood generative learning for symmetric densities, In ICML 2020 – Int. Conf. Mach. Learn. (2020), arXiv:2006.02425.
Satorras, V. G., Hoogeboom, E., Fuchs, F. B., Posner, I. and Welling, M., E(n) equivariant normalizing flows, In NeurIPS 2021 – Adv. Neural Inf. Process. Syst. (2021), arXiv:2105.09016.
Jerfel, G., Wang, S., Wong-Fannjiang, C., Heller, K. A., Ma, Y. and Jordan, M. I., Variational refinement for importance sampling using the forward Kullback-Leibler divergence, In PMLR 2021 – Proc. Mach. Learn. Res. (2021), arXiv:2106.15980.
Nicoli, K. A., Anders, C. J., Hartung, T., Jansen, K., Kessel, P. and Nakajima, S., Detecting and mitigating mode-collapse for flow-based sampling of lattice field theories, Phys. Rev. D 108, 114501 (2023), doi:10.1103/PhysRevD.108.114501.
Vaitl, L., Nicoli, K. A., Nakajima, S. and Kessel, P., Gradients should stay on path: Better estimators of the reverse- and forward KL divergence for normalizing flows, Mach. Learn.: Sci. Technol. 3(4), 045006 (2022), doi:10.1088/2632-2153/ac9455.
Vaitl, L., Nicoli, K. A., Nakajima, S. and Kessel, P., Path-gradient estimators for continuous normalizing flows, In PMLR 2022 – Proc. Mach. Learn. Res. (2022), arXiv:2206.09016.
Arbel, M., Matthews, A. and Doucet, A., Annealed flow transport Monte Carlo, In PMLR 2021 – Proc. Mach. Learn. Res. (2021), arXiv:2102.07501.
Midgley, L. I., Stimper, V., Simm, G. N., Schölkopf, B. and Hernández-Lobato, J. M., Flow annealed importance sampling bootstrap (2022), arXiv:2208.01893.
Matthews, A., Arbel, M., Rezende, D. J. and Doucet, A., Continual repeated annealed flow transport Monte Carlo, In PMLR 2022 – Proc. Mach. Learn. Res. (2022), arXiv:2201.13117.
Wang, L., Generative models for physicists, Tech. rep., Institute of Physics, Chinese Academy of Sciences, GitHub.io (2018).
Albergo, M. S., Kanwar, G., Racanière, S., et al., Flow-based sampling for fermionic lattice field theories, Phys. Rev. D 104, 114507 (2021), doi:10.1103/PhysRevD.104.114507.
Lustig, E., Yair, O., Talmon, R. and Segev, M., Identifying topological phase transitions in experiments using manifold learning, Phys. Rev. Lett. 125(12), 127401 (2020), doi:10.1103/PhysRevLett.125.127401.
Greplova, E., Gold, C., Kratochwil, B., et al., Fully automated identification of two-dimensional material samples, Phys. Rev. Appl. 13(6), 064017 (2020), doi:10.1103/PhysRevApplied.13.064017.
Durrer, R., Kratochwil, B., Koski, J. V., et al., Automated tuning of double quantum dots into specific charge states using neural networks, Phys. Rev. Appl. 13(5), 054019 (2020), doi:10.1103/PhysRevApplied.13.054019.
Greplova, E. and the Huber group, GitHub repository to “Fully automated search for 2D material samples” (2019).
Mostosi, P., Schindelin, H., Kollmannsberger, P. and Thorn, A., Haruspex: A neural network for the automatic identification of oligonucleotides and protein secondary structure in cryo-electron microscopy maps, Angew. Chem. Int. Ed. 59(35), 14788 (2020), doi:10.1002/anie.202000421.
Nolte, K., Gao, Y., Stäb, S., Kollmannsberger, P. and Thorn, A., Detecting ice artefacts in processed macromolecular diffraction data with machine learning, Acta Crystallogr. D 78(2), 187 (2022), doi:10.1107/S205979832101202X.
Lode, A. U. J., Lin, R., Büttner, M., et al., Optimized observable readout from single-shot images of ultracold atoms via machine learning, Phys. Rev. A 104(4), L041301 (2021), doi:10.1103/PhysRevA.104.L041301.
Lin, R., Georges, C., Klinder, J., et al., Mott transition in a cavity-boson system: A quantitative comparison between theory and experiment, SciPost Phys. 11(2), 030 (2021), doi:10.21468/SciPostPhys.11.2.030.
Lin, R., Molignini, P., Papariello, L., et al., MCTDH-X: The multiconfigurational time-dependent Hartree method for indistinguishable particles software, Quantum Sci. Technol. 5(2), 024004 (2020), doi:10.1088/2058-9565/ab788b.
Zhang, J., Pagano, G., Hess, P. W., et al., Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator, Nature 551(7682), 601 (2017), doi:10.1038/nature24654.
Bernien, H., Schwartz, S., Keesling, A., et al., Probing many-body dynamics on a 51-atom quantum simulator, Nature 551(7682), 579 (2017), doi:10.1038/nature24622.
Chiaro, B., Neill, C., Bohrdt, A., et al., Direct measurement of nonlocal interactions in the many-body localized phase, Phys. Rev. Res. 4, 013148 (2022), doi:10.1103/PhysRevResearch.4.013148.
Rispoli, M., Lukin, A., Schittko, R., et al., Quantum critical behaviour at the many-body localization transition, Nature 573(7774), 385 (2019), doi:10.1038/s41586-019-1527-2.
Valenti, A., Jin, G., Léonard, J., Huber, S. D. and Greplova, E., Scalable Hamiltonian learning for large-scale out-of-equilibrium quantum dynamics, Phys. Rev. A 105, 023302 (2022), doi:10.1103/PhysRevA.105.023302.
Gresch, A., Bittel, L. and Kliesch, M., Scalable approach to many-body localization via quantum data (2022), arXiv:2202.08853.
Gebhart, V., Santagati, R., Gentile, A. A., et al., Learning quantum systems, Nat. Rev. Phys. 5(3), 141 (2023), doi:10.1038/s42254-022-00552-1.
Krenn, M., Cervera-Lierta, A. and Aspuru-Guzik, A., Design of quantum optical experiments with logic artificial intelligence, Quantum 6, 836 (2022), doi:10.22331/q-2022-10-13-836.
Flam-Shepherd, D., Wu, T., Gu, X., Cervera-Lierta, A., Krenn, M. and Aspuru-Guzik, A., Learning interpretable representations of entanglement in quantum optics experiments using deep generative models, Nat. Mach. Intell. 4, 544–554 (2022), doi:10.1038/s42256-022-00493-5.
King, R. D., Rowland, J., Oliver, S. G., et al., The automation of science, Science 324(5923), 85 (2009), doi:10.1126/science.1165620.
Häse, F., Roch, L. M. and Aspuru-Guzik, A., Next-generation experimentation with self-driving laboratories, Trends Chem. 1(3), 282 (2019), doi:10.1016/j.trechm.2019.02.007.
Gentile, A. A., Flynn, B., Knauer, S., et al., Learning models of quantum systems from experiments, Nat. Phys. 17(7), 837 (2021), doi:10.1038/s41567-021-01201-7.
O’Brien, T. E., Ioffe, L. B., Su, Y., et al., Quantum computation of molecular structure using data from challenging-to-classically-simulate nuclear magnetic resonance experiments, PRX Quantum 3, 030345 (2022), doi:10.1103/PRXQuantum.3.030345.
van Esbroeck, N. M., Lennon, D. T., Moon, H., et al., Quantum device fine-tuning using unsupervised embedding learning, New J. Phys. 22(9), 095003 (2020), doi:10.1088/1367-2630/abb64c.
Severin, B., Lennon, D. T., Camenzind, L. C., et al., Cross-architecture tuning of silicon and SiGe-based quantum devices using machine learning, Sci. Rep. 14, 17281 (2024), doi:10.1038/s41598-024-67787-z.
Zwolak, J. P., McJunkin, T., Kalantre, S. S., et al., Ray-based framework for state identification in quantum dot devices, PRX Quantum 2, 020335 (2021), doi:10.1103/PRXQuantum.2.020335.
Dawid, A., Bigagli, N., Savin, D. W. and Will, S., Automated graph-based detection of quantum control schemes: Application to molecular laser cooling, Phys. Rev. Res. 7, 013135 (2025), doi:10.1103/PhysRevResearch.7.013135.
Wiebe, N., Kapoor, A. and Svore, K. M., Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning, Quantum Inf. Comput. 15(3–4), 316 (2015), doi:10.26421/QIC15.3-4-7.
Raccuglia, P., Elbert, K. C., Adler, P. D. F., et al., Machine-learning-assisted materials discovery using failed experiments, Nature 533(7601), 73 (2016), doi:10.1038/nature17439.
Zdeborová, L., Understanding deep learning is also a job for physicists, Nat. Phys. 16(6), 602 (2020), doi:10.1038/s41567-020-0929-2.
Cover, T. M., Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition, IEEE Trans. Comput. EC-14(3), 326 (1965), doi:10.1109/PGEC.1965.264137.
Gardner, E., Maximum storage capacity in neural networks, EPL 4(4), 481 (1987), doi:10.1209/0295-5075/4/4/016.
Amit, D. J., Gutfreund, H. and Sompolinsky, H., Storing infinite numbers of patterns in a spin-glass model of neural networks, Phys. Rev. Lett. 55(14), 1530 (1985), doi:10.1103/PhysRevLett.55.1530.
Derrida, B., Gardner, E. and Zippelius, A., An exactly solvable asymmetric neural network model, EPL 4(2), 167 (1987), doi:10.1209/0295-5075/4/2/007.
Derrida, B. and Nadal, J. P., Learning and forgetting on asymmetric, diluted neural networks, J. Stat. Phys. 49(5-6), 993 (1987), doi:10.1007/BF01017556.
Peterson, C., A mean field theory learning algorithm for neural networks, Complex Syst. 1, 995 (1987).
Krauth, W. and Mézard, M., Storage capacity of memory networks with binary couplings, J. Phys. 50(20), 3057 (1989), doi:10.1051/jphys:0198900500200305700.
Györgyi, G., First-order transition to perfect generalization in a neural network with binary synapses, Phys. Rev. A 41(12), 7097 (1990), doi:10.1103/PhysRevA.41.7097.
Opper, M. and Haussler, D., Generalization performance of Bayes optimal classification algorithm for learning a perceptron, Phys. Rev. Lett. 66(20), 2677 (1991), doi:10.1103/PhysRevLett.66.2677.
Sherrington, D. and Kirkpatrick, S., Solvable model of a spin-glass, Phys. Rev. Lett. 35(26), 1792 (1975), doi:10.1103/PhysRevLett.35.1792.
Mézard, M., Parisi, G. and Virasoro, M. A., SK model: The replica solution without replicas, EPL 1(2), 77 (1986), doi:10.1209/0295-5075/1/2/006.
Barahona, F., On the computational complexity of Ising spin glass models, J. Phys. A: Math. Gen. 15, 3241 (1982), doi:10.1088/0305-4470/15/10/028.
Fan, C., Shen, M., Nussinov, Z., et al., Searching for spin glass ground states through deep reinforcement learning, Nat. Commun. 14, 725 (2023), doi:10.1038/s41467-023-36363-w.
Edwards, S. F. and Anderson, P. W., Theory of spin glasses, J. Phys. F: Met. Phys. 5, 965 (1975), doi:10.1088/0305-4608/5/5/017.
Thouless, D. J., Anderson, P. W. and Palmer, R. G., Solution of ‘Solvable model of a spin glass’, Philos. Mag. 35(3), 593 (1977), doi:10.1080/14786437708235992.
Talagrand, M., The Parisi formula, Ann. Math. 163(1), 221 (2006), doi:10.4007/annals.2006.163.221.
Panchenko, D., Introduction to the SK model, Curr. Dev. Math. 2014(1), 231 (2014), doi:10.4310/cdm.2014.v2014.n1.a4.
Castellani, T. and Cavagna, A., Spin-glass theory for pedestrians, J. Stat. Mech. p. P05012 (2005), doi:10.1088/1742-5468/2005/05/P05012.
Gabrié, M., Mean-field inference methods for neural networks, J. Phys. A: Math. Theor. 53(22), 223002 (2020), doi:10.1088/1751-8121/ab7f65.
Barbier, J., Krzakala, F., Macris, N., Miolane, L. and Zdeborová, L., Optimal errors and phase transitions in high-dimensional generalized linear models, Proc. Natl. Acad. Sci. U.S.A. 116(12), 5451 (2019), doi:10.1073/pnas.1802705116.
Rangan, S., Generalized approximate message passing for estimation with random linear mixing, In 2011 IEEE Int. Symp. Inf. Theory Proc., IEEE, doi:10.1109/isit.2011.6033942 (2011).
Zdeborová, L. and Krzakala, F., Statistical physics of inference: Thresholds and algorithms, Adv. Phys. 65(5), 453 (2016), doi:10.1080/00018732.2016.1211393.
Monasson, R. and Zecchina, R., Learning and generalization theories of large committee machines, Mod. Phys. Lett. B 9(30), 1887 (1995), doi:10.1142/s0217984995001868.
Aubin, B., Maillard, A., Barbier, J., Krzakala, F., Macris, N. and Zdeborová, L., The committee machine: Computational to statistical gaps in learning a two-layers neural network, J. Stat. Mech. Theor. Exp. 2019(12), 124023 (2019), doi:10.1088/1742-5468/ab43d2.
Rahimi, A. and Recht, B., Random features for large-scale kernel machines, In NIPS 2007 – Adv. Neural Inf. Process. Syst. (2007).
Rahimi, A. and Recht, B., Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning, In NIPS 2008 – Adv. Neural Inf. Process. Syst. (2008).
Schwarze, H. and Hertz, J., Generalization in fully connected committee machines, EPL 21(7), 786 (1993), doi:10.1209/0295-5075/21/7/012.
Schwarze, H., Learning a rule in a multilayer neural network, J. Phys. A: Math. Gen. 26(21), 5781 (1993), doi:10.1088/0305-4470/26/21/017.
Gerace, F., Loureiro, B., Krzakala, F., Mézard, M. and Zdeborová, L., Generalisation error in learning with random features and the hidden manifold model, J. Stat. Mech. 2021(12), 124013 (2021), doi:10.1088/1742-5468/ac3ae6.
D’Ascoli, S., Gabrié, M., Sagun, L. and Biroli, G., More data or more parameters? Investigating the effect of data structure on generalization, In NeurIPS 2021 – Adv. Neural Inf. Process. Syst. (2021), arXiv:2103.05524.
Goldt, S., Advani, M. S., Saxe, A. M., Krzakala, F. and Zdeborová, L., Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup, J. Stat. Mech. Theor. Exp. 2020(12), 1 (2020), doi:10.1088/1742-5468/abc61e.
Biehl, M. and Riegler, P., On-line learning with a perceptron, EPL 28(7), 525 (1994), doi:10.1209/0295-5075/28/7/012.
Biehl, M. and Schwarze, H., Learning by on-line gradient descent, J. Phys. A: Math. Gen. 28(3), 643 (1995), doi:10.1088/0305-4470/28/3/018.
Goldt, S., Loureiro, B., Reeves, G., Krzakala, F., Mézard, M. and Zdeborová, L., The Gaussian equivalence of generative models for learning with shallow neural networks (2020), arXiv:2006.14709.
Mignacco, F., Krzakala, F., Urbani, P. and Zdeborová, L., Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2006.06098.
Lewenstein, M., Quantum perceptrons, J. Mod. Opt. 41(12), 2491 (1994), doi:10.1080/09500349414552331.
Gratsea, A., Kasper, V. and Lewenstein, M., Storage properties of a quantum perceptron, Phys. Rev. E 110(2), 024127 (2021), doi:10.1103/PhysRevE.110.024127.
Lewenstein, M., Gratsea, A., Riera-Campeny, A., Aloy, A., Kasper, V. and Sanpera, A., Storage capacity and learning capability of quantum neural networks, Quantum Sci. Technol. 6(4), 045002 (2021), doi:10.1088/2058-9565/ac070f.
Gratsea, A. and Huembeli, P., Exploring quantum perceptron and quantum neural network structures with a teacher-student scheme, Quantum Mach. Intell. 4(1), 2 (2022), doi:10.1007/s42484-021-00058-6.
Feng, Y. and Tu, Y., Phases of learning dynamics in artificial neural networks in the absence or presence of mislabeled data, Mach. Learn.: Sci. Technol. 2(4), 043001 (2021), doi:10.1088/2632-2153/abf5b9.
Decelle, A., Furtlehner, C. and Seoane, B., Equilibrium and non-equilibrium regimes in the learning of restricted Boltzmann machines, In NeurIPS 2021 – Adv. Neural Inf. Process. Syst. (2021), arXiv:2105.13889.
Zhong, H.-S., Wang, H., Deng, Y.-H., et al., Quantum computational advantage using photons, Science 370(6523), 1460 (2020), doi:10.1126/science.abe8770.
Arute, F., Arya, K., Babbush, R., et al., Quantum supremacy using a programmable superconducting processor, Nature 574(7779), 505 (2019), doi:10.1038/s41586-019-1666-5.
Bruzewicz, C. D., Chiaverini, J., McConnell, R. and Sage, J. M., Trapped-ion quantum computing: Progress and challenges, Appl. Phys. Rev. 6(2), 021314 (2019), doi:10.1063/1.5088164.
Saffman, M., Quantum computing with atomic qubits and Rydberg interactions: Progress and challenges, J. Phys. B: At. Mol. Opt. Phys. 49(20), 202001 (2016), doi:10.1088/0953-4075/49/20/202001.
Henriet, L., Beguin, L., Signoles, A., et al., Quantum computing with neutral atoms, Quantum 4, 327 (2020), doi:10.22331/q-2020-09-21-327.
Madsen, L. S., Laudenbach, F., Askarani, M. F., et al., Quantum computational advantage with a programmable photonic processor, Nature 606(7912), 75 (2022), doi:10.1038/s41586-022-04725-x.
Liu, Y., Arunachalam, S. and Temme, K., A rigorous and robust quantum speed-up in supervised machine learning, Nat. Phys. 17(9), 1013 (2021), doi:10.1038/s41567-021-01287-z.
Herrmann, J., Llima, S. M., Remm, A., et al., Realizing quantum convolutional neural networks on a superconducting quantum processor to recognize quantum phases, Nat. Commun. 13(1), 4144 (2022), doi:10.1038/s41467-022-31679-5.
Huang, H.-Y., Broughton, M., Cotler, J., et al., Quantum advantage in learning from experiments, Science 376(6598), 1182 (2022), doi:10.1126/science.abn7293.
Gong, M., Huang, H.-L., Wang, S., et al., Quantum neuronal sensing of quantum many-body states on a 61-qubit programmable superconducting processor, Sci. Bull. 68(9), 906 (2023), doi:10.1016/j.scib.2023.04.003.
Hales, L. and Hallgren, S., An improved quantum Fourier transform algorithm and applications, In FOCS 2000 – 41st Annu. IEEE Symp. Found. Comput. Sci., pp. 515–525, doi:10.1109/SFCS.2000.892139 (2000).
Shor, P., Algorithms for quantum computation: Discrete logarithms and factoring, In FOCS 1994 – 35th Annu. IEEE Symp. Found. Comput. Sci., pp. 124–134, doi:10.1109/SFCS.1994.365700 (1994).
Kitaev, A. Y., Quantum measurements and the Abelian stabilizer problem (1995), arXiv:quant-ph/9511026.
Harrow, A. W., Hassidim, A. and Lloyd, S., Quantum algorithm for linear systems of equations, Phys. Rev. Lett. 103, 150502 (2009), doi:10.1103/PhysRevLett.103.150502.
Rebentrost, P., Mohseni, M. and Lloyd, S., Quantum support vector machine for big data classification, Phys. Rev. Lett. 113, 130503 (2014), doi:10.1103/PhysRevLett.113.130503.
Li, Z., Liu, X., Xu, N. and Du, J., Experimental realization of a quantum support vector machine, Phys. Rev. Lett. 114, 140504 (2015), doi:10.1103/PhysRevLett.114.140504.
Wiebe, N., Braun, D. and Lloyd, S., Quantum algorithm for data fitting, Phys. Rev. Lett. 109, 050505 (2012), doi:10.1103/PhysRevLett.109.050505.
Gilyén, A., Lloyd, S. and Tang, E., Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension (2018), arXiv:1811.04909.
Ekert, A. and Jozsa, R., Quantum computation and Shor's factoring algorithm, Rev. Mod. Phys. 68, 733 (1996), doi:10.1103/RevModPhys.68.733.
Dawson, C. M. and Nielsen, M. A., The Solovay-Kitaev algorithm (2005), arXiv:quant-ph/0505030.
Atas, Y. Y., Zhang, J., Lewis, R., Jahanpour, A., Haase, J. F. and Muschik, C. A., SU(2) hadrons on a quantum computer via a variational approach, Nat. Commun. 12(1) (2021), doi:10.1038/s41467-021-26825-4.
Temme, K., Bravyi, S. and Gambetta, J. M., Error mitigation for short-depth quantum circuits, Phys. Rev. Lett. 119, 180509 (2017), doi:10.1103/PhysRevLett.119.180509.
Endo, S., Benjamin, S. C. and Li, Y., Practical quantum error mitigation for near-future applications, Phys. Rev. X 8(3) (2018), doi:10.1103/PhysRevX.8.031027.
Funcke, L., Hartung, T., Jansen, K., Kühn, S., Stornati, P. and Wang, X., Measurement error mitigation in quantum computers through classical bit-flip correction, Phys. Rev. A 105, 062404 (2022), doi:10.1103/PhysRevA.105.062404.
Córcoles, A. D., Magesan, E., Srinivasan, S. J., et al., Demonstration of a quantum error detection code using a square lattice of four superconducting qubits, Nat. Commun. 6(1), 6979 (2015), doi:10.1038/ncomms7979.
Havlíček, V., Córcoles, A. D., Temme, K., et al., Supervised learning with quantum-enhanced feature spaces, Nature 567(7747), 209 (2019), doi:10.1038/s41586-019-0980-2.
Schuld, M. and Killoran, N., Quantum machine learning in feature Hilbert spaces, Phys. Rev. Lett. 122, 040504 (2019), doi:10.1103/PhysRevLett.122.040504.
Wilson, C. M., Otterbach, J. S., Tezak, N., et al., Quantum kitchen sinks: An algorithm for machine learning on near-term quantum computers (2018), arXiv:1806.08321.
Dai, J. and Krems, R. V., Quantum Gaussian process model of potential energy surface for a polyatomic molecule, J. Chem. Phys. 156(18), 184802 (2022), doi:10.1063/5.0088821.
Quantum feature maps and kernels, chapter of the Qiskit textbook "Introduction to Quantum Computing," accessed: 03-02-2022.
Tang, E., Quantum principal component analysis only achieves an exponential speedup because of its state preparation assumptions, Phys. Rev. Lett. 127(6), 060503 (2021), doi:10.1103/PhysRevLett.127.060503.
Kübler, J. M., Buchholz, S. and Schölkopf, B., The inductive bias of quantum kernels, In NeurIPS 2021 – Adv. Neural Inf. Process. Syst., pp. 12661–12673 (2021), arXiv:2106.03747.
Haug, T., Self, C. N. and Kim, M., Quantum machine learning of large datasets using randomized measurements, Mach. Learn.: Sci. Technol. 4(1), 015005 (2023), doi:10.1088/2632-2153/acb0b4.
Liu, J., Tacchino, F., Glick, J. R., Jiang, L. and Mezzacapo, A., Representation learning via quantum neural tangent kernels, PRX Quantum 3, 030323 (2022), doi:10.1103/PRXQuantum.3.030323.
Torabian, E. and Krems, R. V., Compositional optimization of quantum circuits for quantum kernels of support vector machines, Phys. Rev. Res. 5, 013211 (2023), doi:10.1103/PhysRevResearch.5.013211.
Peruzzo, A., McClean, J., Shadbolt, P., et al., A variational eigenvalue solver on a photonic quantum processor, Nat. Commun. 5(1) (2014), doi:10.1038/ncomms5213.
Funcke, L., Hartung, T., Jansen, K., Kühn, S. and Stornati, P., Dimensional expressivity analysis of parametric quantum circuits, Quantum 5, 422 (2021), doi:10.22331/q-2021-03-29-422.
Gresch, A. and Kliesch, M., Guaranteed efficient energy estimation of quantum many-body Hamiltonians using ShadowGrouping, Nat. Commun. 16, 689 (2025), doi:10.1038/s41467-024-54859-x.
Li, J., Yang, X., Peng, X. and Sun, C.-P., Hybrid quantum-classical approach to quantum optimal control, Phys. Rev. Lett. 118, 150503 (2017), doi:10.1103/PhysRevLett.118.150503.
Pérez-Salinas, A., Cervera-Lierta, A., Gil-Fuster, E. and Latorre, J. I., Data re-uploading for a universal quantum classifier, Quantum 4, 226 (2020), doi:10.22331/q-2020-02-06-226.
Cong, I., Choi, S. and Lukin, M. D., Quantum convolutional neural networks, Nat. Phys. 15(12), 1273 (2019), doi:10.1038/s41567-019-0648-8.
Schuld, M., Supervised quantum machine learning models are kernel methods (2021), arXiv:2101.11020.
Jerbi, S., Fiderer, L. J., Poulsen Nautrup, H., Kübler, J. M., Briegel, H. J. and Dunjko, V., Quantum machine learning beyond kernel methods, Nat. Commun. 14(1), 517 (2023), doi:10.1038/s41467-023-36159-y.
Bausch, J., Recurrent quantum neural networks, In NeurIPS 2020 – Adv. Neural Inf. Process. Syst. (2020), arXiv:2006.14619.
Jerbi, S., Gyurik, C., Marshall, S., Briegel, H. and Dunjko, V., Parametrized quantum policies for reinforcement learning, In NeurIPS 2021 – Adv. Neural Inf. Process. Syst., pp. 28362–28375 (2021), arXiv:2103.05577.
Skolik, A., Jerbi, S. and Dunjko, V., Quantum agents in the Gym: A variational quantum algorithm for deep Q-learning, Quantum 6, 720 (2022), doi:10.22331/q-2022-05-24-720.
Brockman, G., Cheung, V., Pettersson, L., et al., OpenAI Gym (2016), arXiv:1606.01540.
Romero, J., Olson, J. P. and Aspuru-Guzik, A., Quantum autoencoders for efficient compression of quantum data, Quantum Sci. Technol. 2(4), 045001 (2017), doi:10.1088/2058-9565/aa8072.
Bondarenko, D. and Feldmann, P., Quantum autoencoders to denoise quantum data, Phys. Rev. Lett. 124(13) (2020), doi:10.1103/PhysRevLett.124.130502.
Locher, D. F., Cardarelli, L. and Müller, M., Quantum error correction with quantum autoencoders, Quantum 7, 942 (2023), doi:10.22331/q-2023-03-09-942.
Verdon, G., Marks, J., Nanda, S., Leichenauer, S. and Hidary, J., Quantum Hamiltonian-based models and the variational quantum thermalizer algorithm (2019), arXiv:1910.02071.
Lloyd, S. and Weedbrook, C., Quantum generative adversarial learning, Phys. Rev. Lett. 121, 040502 (2018), doi:10.1103/PhysRevLett.121.040502.
Dallaire-Demers, P.-L. and Killoran, N., Quantum generative adversarial networks, Phys. Rev. A 98, 012324 (2018), doi:10.1103/PhysRevA.98.012324.
Liu, J.-G. and Wang, L., Differentiable learning of quantum circuit Born machines, Phys. Rev. A 98, 062324 (2018), doi:10.1103/PhysRevA.98.062324.
Coyle, B., Mills, D., Danos, V. and Kashefi, E., The Born supremacy: Quantum advantage and training of an Ising Born machine, npj Quantum Inf. 6(1) (2020), doi:10.1038/s41534-020-00288-9.
Benedetti, M., Garcia-Pintos, D., Perdomo, O., Leyton-Ortega, V., Nam, Y. and Perdomo-Ortiz, A., A generative modeling approach for benchmarking and training shallow quantum circuits, npj Quantum Inf. 5(1) (2019), doi:10.1038/s41534-019-0157-8.
Viola, L. and Lloyd, S., Dynamical suppression of decoherence in two-state quantum systems, Phys. Rev. A 58, 2733 (1998), doi:10.1103/PhysRevA.58.2733.
Kleißler, F., Lazariev, A. and Arroyo-Camejo, S., Universal, high-fidelity quantum gates based on superadiabatic, geometric phases on a solid-state spin-qubit at room temperature, npj Quantum Inf. 4(1) (2018), doi:10.1038/s41534-018-0098-7.
Taherkhani, M., Willatzen, M., Denning, E. V., Protsenko, I. E. and Gregersen, N., High-fidelity optical quantum gates based on type-II double quantum dots in a nanowire, Phys. Rev. B 99, 165305 (2019), doi:10.1103/PhysRevB.99.165305.
Zahedinejad, E., Ghosh, J. and Sanders, B. C., High-fidelity single-shot Toffoli gate via quantum control, Phys. Rev. Lett. 114, 200502 (2015), doi:10.1103/PhysRevLett.114.200502.
Yu, D., Wang, H., Ma, D., Zhao, X. and Qian, J., Adiabatic and high-fidelity quantum gates with hybrid Rydberg-Rydberg interactions, Opt. Express 27(16), 23080 (2019), doi:10.1364/OE.27.023080.
Haddadfarshi, F. and Mintert, F., High fidelity quantum gates of trapped ions in the presence of motional heating, New J. Phys. 18(12), 123007 (2016), doi:10.1088/1367-2630/18/12/123007.
Li, S., Xue, J., Chen, T. and Xue, Z., High-fidelity geometric quantum gates with short paths on superconducting circuits, Adv. Quantum Technol. 4(5), 2000140 (2021), doi:10.1002/qute.202000140.
McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R. and Neven, H., Barren plateaus in quantum neural network training landscapes, Nat. Commun. 9(1), 1 (2018), doi:10.1038/s41467-018-07090-4.
Cerezo, M., Sone, A., Volkoff, T., Cincio, L. and Coles, P. J., Cost function dependent barren plateaus in shallow parametrized quantum circuits, Nat. Commun. 12(1), 1 (2021), doi:10.1038/s41467-021-21728-w.
Bittel, L. and Kliesch, M., Training variational quantum algorithms is NP-hard, Phys. Rev. Lett. 127, 120502 (2021), doi:10.1103/PhysRevLett.127.120502.
Sim, S., Johnson, P. D. and Aspuru-Guzik, A., Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms, Adv. Quantum Technol. 2(12), 1900070 (2019), doi:10.1002/qute.201900070.
Bittel, L., Gharibian, S. and Kliesch, M., The optimal depth of variational quantum algorithms is QCMA-hard to approximate, In 38th Comput. Complexity Conf. (CCC 2023), vol. 264, pp. 34:1–34:24, doi:10.4230/LIPIcs.CCC.2023.34 (2023), arXiv:2211.12519.
Kitaev, A. Y., Quantum computations: Algorithms and error correction, Russ. Math. Surv. 52(6), 1191 (1997), doi:10.1070/rm1997v052n06abeh002155.
Rudolph, M. S., Sim, S., Raza, A., et al., ORQVIZ: Visualizing high-dimensional landscapes in variational quantum algorithms (2021), arXiv:2111.04695.
Eisert, J., Entangling power and quantum circuit complexity, Phys. Rev. Lett. 127, 020501 (2021), doi:10.1103/PhysRevLett.127.020501.
Huang, H.-Y., Kueng, R. and Preskill, J., Information-theoretic bounds on quantum advantage in machine learning, Phys. Rev. Lett. 126, 190505 (2021), doi:10.1103/PhysRevLett.126.190505.
Bharti, K., Cervera-Lierta, A., Kyaw, T. H., et al., Noisy intermediate-scale quantum algorithms, Rev. Mod. Phys. 94(1), 015004 (2022), doi:10.1103/RevModPhys.94.015004.
Muñoz-Gil, G., Volpe, G., Garcia-March, M. A., et al., Objective comparison of methods to decode anomalous diffusion, Nat. Commun. 12(1), 6253 (2021), doi:10.1038/s41467-021-26320-w.
Muñoz-Gil, G., Garcia-March, M. A., Manzo, C., Martín-Guerrero, J. D. and Lewenstein, M., Single trajectory characterization via machine learning, New J. Phys. 22(1), 013010 (2020), doi:10.1088/1367-2630/ab6065.
Muñoz-Gil, G., Romero-Aristizabal, C., Mateos, N., et al., Particle flow modulates growth dynamics and nanoscale-arrested growth of transcription factor condensates in living cells, bioRxiv (2022), doi:10.1101/2022.01.11.475940.
Moss, B. and Griffiths, R.-R., Gaussian process molecule property prediction with FlowMO (2020), arXiv:2010.01118.
Glielmo, A., Rath, Y., Csányi, G., De Vita, A. and Booth, G. H., Gaussian process states: A data-driven representation of quantum many-body physics, Phys. Rev. X 10, 041026 (2020), doi:10.1103/PhysRevX.10.041026.
Choo, K., Neupert, T. and Carleo, G., Two-dimensional frustrated model studied with neural network quantum states, Phys. Rev. B 100(12) (2019), doi:10.1103/PhysRevB.100.125124.
Secor, M., Soudackov, A. V. and Hammes-Schiffer, S., Artificial neural networks as propagators in quantum dynamics, J. Phys. Chem. Lett. 12(43), 10654 (2021), doi:10.1021/acs.jpclett.1c03117.
Havlíček, V., Amplitude ratios and neural network quantum states, Quantum 7, 938 (2023), doi:10.22331/q-2023-03-02-938.
Yao, J., Lin, L. and Bukov, M., Reinforcement learning for many-body ground-state preparation inspired by counterdiabatic driving, Phys. Rev. X 11, 031070 (2021), doi:10.1103/PhysRevX.11.031070.
Poulsen Nautrup, H., Delfosse, N., Dunjko, V., Briegel, H. J. and Friis, N., Optimizing quantum error correction codes with reinforcement learning, Quantum 3, 215 (2019), doi:10.22331/q-2019-12-16-215.
Peng, P., Huang, X., Yin, C., Joseph, L., Ramanathan, C. and Cappellaro, P., Deep reinforcement learning for quantum Hamiltonian engineering, Phys. Rev. Appl. 18, 024033 (2022), doi:10.1103/PhysRevApplied.18.024033.
Jumper, J., Evans, R., Pritzel, A., et al., Highly accurate protein structure prediction with AlphaFold, Nature 596(7873), 583 (2021), doi:10.1038/s41586-021-03819-2.
Varadi, M., Anyango, S., Deshpande, M., et al., AlphaFold protein structure database: Massively expanding the structural coverage of protein-sequence space with high-accuracy models, Nucleic Acids Res. 50(D1), D439 (2021), doi:10.1093/nar/gkab1061.
Davies, A., Veličković, P., Buesing, L., et al., Advancing mathematics by guiding human intuition with AI, Nature 600(7887), 70 (2021), doi:10.1038/s41586-021-04086-x.
Kriváchy, T., Cai, Y., Cavalcanti, D., Tavakoli, A., Gisin, N. and Brunner, N., A neural network oracle for quantum nonlocality problems in networks, npj Quantum Inf. 6, 70 (2020), doi:10.1038/s41534-020-00305-x.
Pozas-Kerstjens, A., Gisin, N. and Renou, M.-O., Proofs of network quantum nonlocality in continuous families of distributions, Phys. Rev. Lett. 130, 090201 (2023), doi:10.1103/PhysRevLett.130.090201.
Pozas-Kerstjens, A., Muñoz-Gil, G., Piñol, E., et al., Efficient training of energy-based models via spin-glass control, Mach. Learn.: Sci. Technol. 2(2), 025026 (2021), doi:10.1088/2632-2153/abe807.
Wright, L. G., Onodera, T., Stein, M. M., et al., Deep physical neural networks trained with backpropagation, Nature 601(7894), 549 (2022), doi:10.1038/s41586-021-04223-6.
Wagner, K. and Psaltis, D., Optical neural networks: An introduction by the feature editors, Appl. Opt. 32(8), 1261 (1993), doi:10.1364/AO.32.001261.
Zuo, Y., Li, B., Zhao, Y., et al., All-optical neural network with nonlinear activation functions, Optica 6(9), 1132 (2019), doi:10.1364/OPTICA.6.001132.
Sui, X., Wu, Q., Liu, J., Chen, Q. and Gu, G., A review of optical neural networks, IEEE Access 8, 70773 (2020), doi:10.1109/ACCESS.2020.2987333.
Zhang, H., Gu, M., Jiang, X. D., et al., An optical neural chip for implementing complex-valued neural network, Nat. Commun. 12(1), 457 (2021), doi:10.1038/s41467-020-20719-7.
Zhang, H., Thompson, J., Gu, M., et al., Efficient on-chip training of optical neural networks using genetic algorithm, ACS Photonics 8(6), 1662 (2021), doi:10.1021/acsphotonics.1c00035.
Xu, X., Tan, M., Corcoran, B., et al., 11 TOPS photonic convolutional accelerator for optical neural networks, Nature 589(7840), 44 (2021), doi:10.1038/s41586-020-03063-0.
Liu, J., Wu, Q., Sui, X., et al., Research progress in optical neural networks: Theory, applications and developments, PhotoniX 2(1), 5 (2021), doi:10.1186/s43074-021-00026-0.
Wang, T., Ma, S.-Y., Wright, L. G., Onodera, T., Richard, B. C. and McMahon, P. L., An optical neural network using less than 1 photon per multiplication, Nat. Commun. 13(1), 123 (2022), doi:10.1038/s41467-021-27774-8.
Xu, H., Ghosh, S., Matuszewski, M. and Liew, T. C., Universal self-correcting computing with disordered exciton-polariton neural networks, Phys. Rev. Appl. 13, 064074 (2020), doi:10.1103/PhysRevApplied.13.064074.
Ballarini, D., Gianfrate, A., Panico, R., et al., Polaritonic neuromorphic computing outperforms linear classifiers, Nano Lett. 20(5), 3506 (2020), doi:10.1021/acs.nanolett.0c00435.
Matuszewski, M., Opala, A., Mirek, R., et al., Energy-efficient neural network inference with microcavity exciton polaritons, Phys. Rev. Appl. 16, 024045 (2021), doi:10.1103/PhysRevApplied.16.024045.
Mirek, R., Opala, A., Comaron, P., et al., Neuromorphic binarized polariton networks, Nano Lett. 21(9), 3715 (2021), doi:10.1021/acs.nanolett.0c04696.
Zvyagintseva, D., Sigurdsson, H., Kozin, V. K., et al., Machine learning of phase transitions in nonlinear polariton lattices, Commun. Phys. 5(1), 8 (2022), doi:10.1038/s42005-021-00755-5.
Hopfield, J. J., Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79(8), 2554 (1982), doi:10.1073/pnas.79.8.2554.
Rotondo, P., Marcuzzi, M., Garrahan, J. P., Lesanovsky, I. and Müller, M., Open quantum generalisation of Hopfield neural networks, J. Phys. A: Math. Theor. 51(11), 115301 (2018), doi:10.1088/1751-8121/aaabcb.
Petersen, K. B. and Pedersen, M. S., The Matrix Cookbook, Technical University of Denmark (2012), accessed: 03-04-2022.
Morvan, A., Villalonga, B., Mi, X., et al., Phase transitions in random circuit sampling, Nature 634(8033), 328 (2024), doi:10.1038/s41586-024-07998-6.
Bluvstein, D., Evered, S. J., Geim, A. A., et al., Logical quantum processor based on reconfigurable atom arrays, Nature 626(7997), 58 (2024), doi:10.1038/s41586-023-06927-3.
