Reachability in MDPs: Refining Convergence of Value Iteration





Affiliations:
1. LSV - Laboratoire Spécification et Vérification, Cachan
2. ENS Cachan - École normale supérieure de Cachan
3. MEXICO - Modeling and Exploitation of Interaction and Concurrency (LSV, ENS Cachan, Inria Saclay - Île-de-France, CNRS - Centre National de la Recherche Scientifique, UMR 8643)
4. ULB - Université Libre de Bruxelles, Brussels

Abstract: Markov Decision Processes (MDPs) are a widely used model featuring both non-deterministic and probabilistic choices. Minimal and maximal probabilities of reaching a target set of states, with respect to a policy resolving the non-determinism, can be computed by several methods, including value iteration. This algorithm, easy to implement and efficient in terms of space complexity, consists of iteratively computing the probabilities of paths of increasing length. However, it raises three issues: (1) defining a stopping criterion that guarantees a bound on the approximation, (2) analyzing the rate of convergence, and (3) specifying an additional procedure to obtain the exact values once a sufficient number of iterations has been performed. The first two issues are still open, and for the third one only a crude upper bound on the number of iterations has been proposed. Based on a graph analysis and transformation of MDPs, we address these problems. First, we introduce an interval iteration algorithm, for which the stopping criterion is straightforward. Then we exhibit its convergence rate. Finally, we significantly improve the bound on the number of iterations required to obtain the exact values.
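To illustrate the idea of interval iteration described in the abstract, the following is a minimal Python sketch on a hypothetical toy MDP (states, actions, and probabilities are invented for illustration, not taken from the paper). A lower sequence is initialized to 0 and an upper sequence to 1 on non-target states; both are updated by the Bellman operator for maximal reachability, and iteration stops when the gap falls below a tolerance. It assumes the graph preprocessing the paper relies on has already been done, i.e. states that cannot reach the target (here the sink) are identified and fixed to 0 so that the upper sequence converges.

```python
# Interval iteration for maximal reachability probabilities in an MDP.
# Toy, hypothetical example: the MDP below is illustrative only.
# Assumes preprocessing has identified states that cannot reach the
# target (here SINK), which are pinned to 0 in both sequences.

TARGET, SINK = "t", "bot"
# transitions[state][action] = list of (probability, successor)
transitions = {
    "s0": {"a": [(0.5, "s1"), (0.5, "s0")],
           "b": [(0.3, TARGET), (0.7, SINK)]},
    "s1": {"a": [(1.0, TARGET)]},
}

def interval_iteration(eps=1e-6):
    # Lower sequence starts at 0, upper sequence at 1 (non-target states).
    lo = {s: 0.0 for s in transitions} | {TARGET: 1.0, SINK: 0.0}
    hi = {s: 1.0 for s in transitions} | {TARGET: 1.0, SINK: 0.0}
    while max(hi[s] - lo[s] for s in transitions) > eps:
        for v in (lo, hi):
            # Bellman update for max reachability: best action's
            # expected value, computed from the previous iterate.
            new = {s: max(sum(p * v[succ] for p, succ in dist)
                          for dist in acts.values())
                   for s, acts in transitions.items()}
            v.update(new)
    return lo, hi

lo, hi = interval_iteration()
print(lo["s0"], hi["s0"])
```

The gap `hi[s] - lo[s]` directly bounds the approximation error, which is what makes the stopping criterion straightforward compared with plain value iteration, where the distance between consecutive iterates does not bound the distance to the fixed point.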





Authors: Serge Haddad, Benjamin Monmege

Source: https://hal.archives-ouvertes.fr/






