Simulation-based Algorithms for Markov Decision Processes

Bibliographic Details
Classification: Electronic Book
Main Authors: Chang, Hyeong Soo (Author), Fu, Michael C. (Author), Hu, Jiaqiao (Author), Marcus, Steven I. (Author)
Corporate Author: SpringerLink (Online service)
Format: Electronic eBook
Language: English
Published: London : Springer London : Imprint: Springer, 2007.
Edition: 1st ed. 2007.
Series: Communications and Control Engineering,
Subjects:
Online Access: Full Text

MARC

LEADER 00000nam a22000005i 4500
001 978-1-84628-690-2
003 DE-He213
005 20220117093312.0
007 cr nn 008mamaa
008 100301s2007 xxk| s |||| 0|eng d
020 |a 9781846286902  |9 978-1-84628-690-2 
024 7 |a 10.1007/978-1-84628-690-2  |2 doi 
050 4 |a T57.6-.97 
072 7 |a KJT  |2 bicssc 
072 7 |a KJMD  |2 bicssc 
072 7 |a BUS049000  |2 bisacsh 
072 7 |a KJT  |2 thema 
072 7 |a KJMD  |2 thema 
082 0 4 |a 658.403  |2 23 
100 1 |a Chang, Hyeong Soo.  |e author.  |4 aut  |4 http://id.loc.gov/vocabulary/relators/aut 
245 1 0 |a Simulation-based Algorithms for Markov Decision Processes  |h [electronic resource] /  |c by Hyeong Soo Chang, Michael C. Fu, Jiaqiao Hu, Steven I. Marcus. 
250 |a 1st ed. 2007. 
264 1 |a London :  |b Springer London :  |b Imprint: Springer,  |c 2007. 
300 |a XVIII, 189 p. 38 illus.  |b online resource. 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
347 |a text file  |b PDF  |2 rda 
490 1 |a Communications and Control Engineering,  |x 2197-7119 
505 0 |a Markov Decision Processes -- Multi-stage Adaptive Sampling Algorithms -- Population-based Evolutionary Approaches -- Model Reference Adaptive Search -- On-line Control Methods via Simulation. 
520 |a Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well-known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the notorious curse of dimensionality that makes practical solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to specify some of the MDP model parameters explicitly, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based numerical algorithms have been developed recently to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include: • multi-stage adaptive sampling; • evolutionary policy iteration; • evolutionary random policy search; and • model reference adaptive search. Simulation-based Algorithms for Markov Decision Processes brings this state-of-the-art research together for the first time and presents it in a manner that makes it accessible to researchers with varying interests and backgrounds. In addition to providing numerous specific algorithms, the exposition includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas. Furthermore, the authors show how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality. The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation but will be a valuable source of instruction and reference for students of control and operations research. 
650 0 |a Operations research. 
650 0 |a Control engineering. 
650 0 |a System theory. 
650 0 |a Control theory. 
650 0 |a Management science. 
650 0 |a Probabilities. 
650 0 |a Algorithms. 
650 1 4 |a Operations Research and Decision Theory. 
650 2 4 |a Control and Systems Theory. 
650 2 4 |a Systems Theory, Control . 
650 2 4 |a Operations Research, Management Science . 
650 2 4 |a Probability Theory. 
650 2 4 |a Algorithms. 
700 1 |a Fu, Michael C.  |e author.  |4 aut  |4 http://id.loc.gov/vocabulary/relators/aut 
700 1 |a Hu, Jiaqiao.  |e author.  |4 aut  |4 http://id.loc.gov/vocabulary/relators/aut 
700 1 |a Marcus, Steven I.  |e author.  |4 aut  |4 http://id.loc.gov/vocabulary/relators/aut 
710 2 |a SpringerLink (Online service) 
773 0 |t Springer Nature eBook 
776 0 8 |i Printed edition:  |z 9781849966436 
776 0 8 |i Printed edition:  |z 9781848005983 
776 0 8 |i Printed edition:  |z 9781846286896 
830 0 |a Communications and Control Engineering,  |x 2197-7119 
856 4 0 |u https://doi.uam.elogim.com/10.1007/978-1-84628-690-2  |z Texto Completo 
912 |a ZDB-2-ENG 
912 |a ZDB-2-SXE 
950 |a Engineering (SpringerNature-11647) 
950 |a Engineering (R0) (SpringerNature-43712)
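
The abstract (MARC field 520) describes algorithms that compute policies and value functions for MDPs using only simulation samples of random transitions and costs. Purely as an illustrative sketch, and not an algorithm taken from the book, the Python fragment below sets up a toy inventory MDP whose dynamics are available only through a simulator and estimates the Q-value of each action at a state by Monte Carlo rollouts under a heuristic base policy; the inventory model, horizon, cost parameters, and base_policy heuristic are all assumptions made for the example.

# Illustrative only: a toy MDP where transitions and costs are available
# solely through a simulator, and the value of each action at a state is
# estimated by Monte Carlo rollouts under a fixed base policy.  All model
# parameters here are hypothetical and are not taken from the book.
import random

ACTIONS = [0, 1, 2, 3]          # units to order
MAX_INVENTORY = 5
HOLDING_COST = 1.0
SHORTAGE_COST = 4.0
ORDER_COST = 2.0
HORIZON = 10

def simulate_step(state, action, rng):
    """One simulated transition: returns (next_state, cost)."""
    demand = rng.randint(0, 3)                      # random demand sample
    stocked = min(state + action, MAX_INVENTORY)
    next_state = max(stocked - demand, 0)
    cost = (ORDER_COST * action
            + HOLDING_COST * next_state
            + SHORTAGE_COST * max(demand - stocked, 0))
    return next_state, cost

def base_policy(state):
    """Heuristic used inside rollouts: order up to 3 units."""
    return max(3 - state, 0)

def rollout_cost(state, first_action, rng, horizon=HORIZON):
    """Cost of taking first_action now, then following the base policy."""
    state, cost = simulate_step(state, first_action, rng)
    total = cost
    for _ in range(horizon - 1):
        state, cost = simulate_step(state, base_policy(state), rng)
        total += cost
    return total

def estimate_best_action(state, num_rollouts=2000, seed=0):
    """Monte Carlo estimate of each action's Q-value; pick the cheapest."""
    rng = random.Random(seed)
    q_estimates = {}
    for action in ACTIONS:
        samples = [rollout_cost(state, action, rng) for _ in range(num_rollouts)]
        q_estimates[action] = sum(samples) / len(samples)
    best = min(q_estimates, key=q_estimates.get)
    return best, q_estimates

if __name__ == "__main__":
    action, q = estimate_best_action(state=1)
    print("Estimated Q-values:", {a: round(v, 2) for a, v in q.items()})
    print("Chosen action:", action)

The sketch relies only on calls to the simulator, never on an explicit transition matrix, which is the setting the abstract highlights; the adaptive-sampling and population-based methods covered in the book refine this basic idea rather than change its interface to the model.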