Continuous-time speed for discrete-time models: A Markov-chain approximation method
We propose a Markov-chain approximation method for discrete-time control problems, showing how to reap the speed gains of continuous-time algorithms in this class of models. Our approach specifies a discrete Markov chain on a grid, taking a first-order approximation of the conditional distributions in their first and second moments around a reference point. Standard dynamic-programming results guarantee convergence. We show how to apply the method to standard consumption-savings problems with and without a portfolio choice, realizing speed gains of up to two orders of magnitude (a factor of 100) over state-of-the-art methods with the same number of grid points, and with no significant loss of precision. We also show how to avoid the curse of dimensionality and keep computation times manageable in high-dimensional problems with independent shocks. Finally, we show how our approach substantially simplifies the computation of dynamic games with a large state space, solving a discrete-time version of the altruistic savings game studied by Barczyk & Kredler (2014).
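To fix ideas, the core moment-matching step can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it shows the textbook trinomial construction (in the spirit of Kushner-Dupuis Markov-chain approximations) that places probability on a grid point's two neighbors so that the chain's conditional mean and second moment match a given local drift and variance. The function name `trinomial_probs` and all parameter values are illustrative assumptions.

```python
def trinomial_probs(mu, sigma2, h, dt):
    """Local transition probabilities on a uniform grid with spacing h.

    Matches the conditional moments of a step of length dt with local
    drift mu and instantaneous variance sigma2:
        E[dx]   = h * (p_up - p_down)  = mu * dt
        E[dx^2] = h^2 * (p_up + p_down) = sigma2 * dt
    (illustrative sketch, not the paper's exact scheme)
    """
    p_up = 0.5 * (sigma2 * dt / h**2 + mu * dt / h)
    p_down = 0.5 * (sigma2 * dt / h**2 - mu * dt / h)
    p_stay = 1.0 - p_up - p_down
    if min(p_up, p_down, p_stay) < 0:
        raise ValueError("Shrink dt or enlarge h so probabilities stay nonnegative.")
    return p_down, p_stay, p_up


# Example: drift 0.1, variance 0.04, grid spacing 0.1, step 0.1.
p_down, p_stay, p_up = trinomial_probs(mu=0.1, sigma2=0.04, h=0.1, dt=0.1)
# The three probabilities sum to one; the implied mean step is mu*dt
# and the implied second moment is sigma2*dt, by construction.
```

Repeating this construction at every grid point (with the local drift and variance evaluated there) yields a sparse transition matrix on which standard value-function iteration applies.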