Publication


Boosting Quantum Machine Learning Models with a Multilevel Combination Technique: Pople Diagrams Revisited

Type of publication Original article (peer-reviewed)
Authors Zaspel Peter, Huang Bing, Harbrecht Helmut, von Lilienfeld O. Anatole
Project Big Data for Computational Chemistry: Unified machine learning and sparse grid combination technique for quantum based molecular design

Journal Journal of Chemical Theory and Computation
Volume (Issue) 15(3)
Page(s) 1546 - 1559
DOI 10.1021/acs.jctc.8b00832

Open Access

URL https://arxiv.org/abs/1808.02799
Type of Open Access Repository (Green Open Access)

Abstract

Inspired by Pople diagrams popular in quantum chemistry, we introduce a hierarchical scheme, based on the multilevel combination (C) technique, to combine various levels of approximations made when molecular energies are calculated. When combined with quantum machine learning (QML) models, the resulting CQML model is a generalized unified recursive kernel ridge regression that exploits correlations implicitly encoded in training data composed of multiple levels in multiple dimensions. Here, we have investigated up to three dimensions: chemical space, basis set, and electron correlation treatment. Numerical results have been obtained for atomization energies of a set of ∼7000 organic molecules with up to 7 atoms (not counting hydrogens) containing CHONFClS, as well as for ∼6000 constitutional isomers of C7H10O2. CQML learning curves for atomization energies suggest a dramatic reduction in necessary training samples calculated with the most accurate and costly method. In order to generate millisecond estimates of CCSD(T)/cc-pVDZ atomization energies with prediction errors reaching chemical accuracy (∼1 kcal/mol), the CQML model requires only ∼100 training instances at the CCSD(T)/cc-pVDZ level, rather than thousands within conventional QML, while more training molecules are required at lower levels. Our results suggest a possibly favorable trade-off between various hierarchical approximations whose computational cost scales differently with electron number.
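The core idea described in the abstract, which is to train an abundant cheap level of theory and learn only a sparse correction toward the expensive level, can be illustrated with a minimal two-level kernel ridge regression sketch. Everything below (the 1-D descriptor, the sine/cosine "energies", the kernel width, and the sample counts) is a synthetic stand-in chosen for illustration, not the paper's actual CQML model, representation, or data:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Pairwise Gaussian (RBF) kernel between 1-D sample arrays A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, sigma, lam=1e-8):
    """Solve (K + lam*I) alpha = y for the regression weights."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, sigma):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy stand-ins for two rungs of the hierarchy (illustrative only):
X = np.linspace(-2.0, 2.0, 33)                # 1-D "molecular descriptor"
e_low = np.sin(X)                             # cheap, abundant low-level data
e_high = np.sin(X) + 0.05 * np.cos(2.0 * X)   # costly high-level target

sigma = 0.4
idx_high = np.arange(0, 33, 4)                # nested subset: far fewer
                                              # high-level training samples

# Level 1: fit the cheap baseline on all samples.
a_low = krr_fit(X, e_low, sigma)

# Level 2: fit only the residual (high minus baseline) on the sparse subset.
resid = e_high[idx_high] - krr_predict(X, a_low, X[idx_high], sigma)
a_res = krr_fit(X[idx_high], resid, sigma)

def two_level_predict(Xq):
    """Combined prediction: cheap baseline plus learned correction."""
    return (krr_predict(X, a_low, Xq, sigma)
            + krr_predict(X[idx_high], a_res, Xq, sigma))
```

Because the residual between adjacent levels is smoother and smaller than the target itself, it can be learned from far fewer expensive samples, which is the trade-off the abstract's learning curves quantify across chemical space, basis set, and correlation treatment.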