'Regression now' meets 'regression later'
Christian Bender  1, *  
1 : Universitaet des Saarlandes
* : Corresponding author

Probabilistic time-discretization schemes for Hamilton-Jacobi-Bellman equations lead to discrete-time dynamic programming equations, which feature high-order nestings of conditional expectations. Among the most popular techniques for approximating these conditional expectations is least-squares Monte Carlo (LSM). LSM is subject to several sources of error, including a bias due to the choice of basis functions and a simulation error. On the one hand, the 'regression now' approach to LSM imposes little to no restriction on the choice of basis functions, but it may suffer from high variance, in particular in the presence of Malliavin Monte Carlo weights for the space derivatives. On the other hand, the 'regression later' approach has been demonstrated to achieve enormous variance reduction in certain applications, but it imposes heavy conditions on the choice of basis functions and thus cannot be implemented generically.
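To illustrate the distinction, the following minimal sketch (not from the talk; all names and the toy Gaussian dynamics are illustrative assumptions) contrasts one backward step of 'regression now' and 'regression later' for a one-dimensional state: the former regresses next-step values on basis functions of the current state, the latter regresses on basis functions of the next state and then takes their conditional expectations in closed form, which is exactly the structural demand on the basis.

```python
# Minimal sketch of one LSM backward step, assuming X_{t+dt} = X_t + dW (toy example).
import numpy as np

rng = np.random.default_rng(0)
M, dt = 10_000, 0.1
x_now = rng.normal(size=M)                          # simulated states at time t
x_next = x_now + np.sqrt(dt) * rng.normal(size=M)   # simulated states at time t+dt
v_next = np.maximum(x_next, 0.0)                    # value estimates at time t+dt

def basis(x):
    # monomial basis 1, x, x^2 ('regression now' places no structural demand on it)
    return np.column_stack([np.ones_like(x), x, x**2])

# 'Regression now': regress v_next on basis functions of the CURRENT state;
# the fitted function directly estimates E[v_next | X_t].
coef_now, *_ = np.linalg.lstsq(basis(x_now), v_next, rcond=None)
cond_exp_now = basis(x_now) @ coef_now

# 'Regression later': regress v_next on basis functions of the NEXT state,
# then push the conditional expectation through the basis in closed form.
# Here E[1 | X_t] = 1, E[X_{t+dt} | X_t] = X_t, E[X_{t+dt}^2 | X_t] = X_t^2 + dt,
# which is the kind of condition the basis must satisfy.
coef_later, *_ = np.linalg.lstsq(basis(x_next), v_next, rcond=None)
closed_form = np.column_stack([np.ones_like(x_now), x_now, x_now**2 + dt])
cond_exp_later = closed_form @ coef_later

print(cond_exp_now[:3], cond_exp_later[:3])
```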
In this talk, we present a new regression algorithm which combines the advantages of both approaches: sufficient flexibility in choosing the basis and low variance. We provide heuristics suggesting that, for a class of drift control problems, the new algorithm converges at a rate of (1 + 1.5 d/s)^(-1) in the computational cost, where d and s denote the dimension and the smoothness of the problem. The algorithm can hence even beat the convergence rate 1/2 of the standard Monte Carlo approach for approximating a single expectation, provided the dimension-to-smoothness ratio is small. Our heuristics are supported by several numerical examples.
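The stated rate exceeds 1/2 precisely when 1 + 1.5 d/s < 2, i.e. when d/s < 2/3. A quick check of the formula for a few sample dimension/smoothness pairs (the pairs are illustrative, not taken from the talk):

```python
# Heuristic rate (1 + 1.5*d/s)^(-1) versus the plain Monte Carlo rate 1/2.
def rate(d, s):
    return 1.0 / (1.0 + 1.5 * d / s)

for d, s in [(1, 2), (2, 4), (3, 3), (5, 2)]:
    print(f"d={d}, s={s}: rate={rate(d, s):.3f}, beats 1/2: {rate(d, s) > 0.5}")
```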
The talk is based on joint work in progress with Christian Gärtner and Nikolaus Schweizer.

