Selected topics on continuous-time controlled Markov chains and Markov games

Bibliographic Details
Author / Creator: Prieto-Rumeau, Tomás.
Imprint: London : Imperial College Press [publisher] ; Singapore ; Hackensack, N.J. : World Scientific [distributor], c2012.
Description: xi, 279 p. ; 24 cm.
Language: English
Series: ICP advanced texts in mathematics (Imperial College Press advanced texts in mathematics) ; v. 5
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/8828295
Other authors / contributors: Hernández-Lerma, O. (Onésimo)
ISBN: 9781848168480 (hbk.); 1848168489 (hbk.)
Notes: Includes bibliographical references (p. 265-274) and index.
Table of Contents:
  • Preface
  • 1. Introduction
  • 1.1. Preliminary examples
  • 1.1.1. A controlled population system
  • 1.1.2. A prey-predator game model
  • 1.2. Overview of the book
  • 1.3. Contents
  • 1.4. Notation
  • 2. Controlled Markov Chains
  • 2.1. Introduction
  • 2.2. The control model
  • 2.3. Existence of controlled Markov chains
  • 2.4. Exponential ergodicity
  • 2.5. Proof of Theorem 2.11
  • 2.6. Conclusions
  • 3. Basic Optimality Criteria
  • 3.1. Introduction
  • 3.2. The finite horizon case
  • 3.3. The infinite horizon discounted reward
  • 3.3.1. Definitions
  • 3.3.2. The discounted reward optimality equation
  • 3.3.3. The uniformization technique
  • 3.3.4. A continuity theorem for discounted rewards
  • 3.4. The long-run expected average reward
  • 3.5. The vanishing discount approach to average optimality
  • 3.6. Pathwise average optimality
  • 3.7. Canonical triplets and finite horizon control problems
  • 3.8. Conclusions
  • 4. Policy Iteration and Approximation Theorems
  • 4.1. Introduction
  • 4.2. The policy iteration algorithm
  • 4.2.1. Discounted reward problems
  • 4.2.2. Average reward problems
  • 4.3. Approximating discounted reward CMCs
  • 4.4. Approximating average reward CMCs
  • 4.5. Conclusions
  • 5. Overtaking, Bias, and Variance Optimality
  • 5.1. Introduction
  • 5.2. Bias and overtaking optimality
  • 5.3. Variance minimization
  • 5.4. Comparison of variance and overtaking optimality
  • 5.5. Conclusions
  • 6. Sensitive Discount Optimality
  • 6.1. Introduction
  • 6.2. The Laurent series expansion
  • 6.3. The vanishing discount approach (revisited)
  • 6.4. The average reward optimality equations
  • 6.5. Strong discount optimality
  • 6.6. Sensitive discount optimality in the class of stationary policies
  • 6.7. Conclusions
  • 7. Blackwell Optimality
  • 7.1. Introduction
  • 7.2. Blackwell optimality in the class of stationary policies
  • 7.3. Blackwell optimality in the class of all policies
  • 7.4. Conclusions
  • 8. Constrained Controlled Markov Chains
  • 8.1. Introduction
  • 8.2. Discounted reward constrained CMCs
  • 8.3. Average reward constrained CMCs
  • 8.4. Pathwise constrained CMCs
  • 8.5. The vanishing discount approach to constrained CMCs
  • 8.6. Conclusions
  • 9. Applications
  • 9.1. Introduction
  • 9.2. Controlled queueing systems
  • 9.3. A controlled birth-and-death process
  • 9.4. A population system with catastrophes
  • 9.5. Controlled epidemic processes
  • 9.6. Conclusions
  • 10. Zero-Sum Markov Games
  • 10.1. Introduction
  • 10.2. The zero-sum Markov game model
  • 10.3. Discount optimality
  • 10.4. Average optimality
  • 10.5. The family of average optimal strategies
  • 10.6. Conclusions
  • 11. Bias and Overtaking Equilibria for Markov Games
  • 11.1. Introduction
  • 11.2. Bias optimality
  • 11.3. Overtaking optimality
  • 11.4. A counterexample on bias and overtaking optimality
  • 11.5. Conclusions
  • Notation List
  • Bibliography
  • Index