Markov processes for stochastic modeling
Covering a wide range of application areas, this second edition has been revised to highlight the most important aspects of Markov processes as well as their most recent trends and applications.
Classification: | Electronic book
---|---
Main author: |
Format: | Electronic eBook
Language: | English
Published: | London : Elsevier, 2013.
Edition: | Second edition.
Series: | Elsevier insights.
Subjects: |
Online access: | Full text

Table of Contents:
- 1.1. Introduction
- 1.1.1. Conditional Probability
- 1.1.2. Independence
- 1.1.3. Total Probability and Bayes' Theorem
- 1.2. Random Variables
- 1.2.1. Distribution Functions
- 1.2.2. Discrete Random Variables
- 1.2.3. Continuous Random Variables
- 1.2.4. Expectations
- 1.2.5. Expectation of Nonnegative Random Variables
- 1.2.6. Moments of Random Variables and the Variance
- 1.3. Transform Methods
- 1.3.1. The s-Transform
- 1.3.2. The z-Transform
- 1.4. Bivariate Random Variables
- 1.4.1. Discrete Bivariate Random Variables
- 1.4.2. Continuous Bivariate Random Variables
- 1.4.3. Covariance and Correlation Coefficient
- 1.5. Many Random Variables
- 1.6. Fubini's Theorem
- 1.7. Sums of Independent Random Variables
- 1.8. Some Probability Distributions
- 1.8.1. The Bernoulli Distribution
- 1.8.2. The Binomial Distribution
- 1.8.3. The Geometric Distribution
- 1.8.4. The Pascal Distribution
- 1.8.5. The Poisson Distribution
- 1.8.6. The Exponential Distribution
- 1.8.7. The Erlang Distribution
- 1.8.8. Normal Distribution
- 1.9. Limit Theorems
- 1.9.1. Markov Inequality
- 1.9.2. Chebyshev Inequality
- 1.9.3. Laws of Large Numbers
- 1.9.4. The Central Limit Theorem
- 1.10. Problems
- 2.1. Introduction
- 2.2. Classification of Stochastic Processes
- 2.3. Characterizing a Stochastic Process
- 2.4. Mean and Autocorrelation Function of a Stochastic Process
- 2.5. Stationary Stochastic Processes
- 2.5.1. Strict-Sense Stationary Processes
- 2.5.2. Wide-Sense Stationary Processes
- 2.6. Ergodic Stochastic Processes
- 2.7. Some Models of Stochastic Processes
- 2.7.1. Martingales
- 2.7.2. Counting Processes
- 2.7.3. Independent Increment Processes
- 2.7.4. Stationary Increment Process
- 2.7.5. Poisson Processes
- 2.8. Problems
- 3.1. Introduction
- 3.2. Structure of Markov Processes
- 3.3. Strong Markov Property
- 3.4. Applications of Discrete-Time Markov Processes
- 3.4.1. Branching Processes
- 3.4.2. Social Mobility
- 3.4.3. Markov Decision Processes
- 3.5. Applications of Continuous-Time Markov Processes
- 3.5.1. Queueing Systems
- 3.5.2. Continuous-Time Markov Decision Processes
- 3.5.3. Stochastic Storage Systems
- 3.6. Applications of Continuous-State Markov Processes
- 3.6.1. Application of Diffusion Processes to Financial Options
- 3.6.2. Applications of Brownian Motion
- 3.7. Summary
- 4.1. Introduction
- 4.2. State-Transition Probability Matrix
- 4.2.1. The n-Step State-Transition Probability
- 4.3. State-Transition Diagrams
- 4.4. Classification of States
- 4.5. Limiting-State Probabilities
- 4.5.1. Doubly Stochastic Matrix
- 4.6. Sojourn Time
- 4.7. Transient Analysis of Discrete-Time Markov Chains
- 4.8. First Passage and Recurrence Times
- 4.9. Occupancy Times
- 4.10. Absorbing Markov Chains and the Fundamental Matrix
- 4.10.1. Time to Absorption
- 4.10.2. Absorption Probabilities
- 4.11. Reversible Markov Chains
- 4.12. Problems
- 5.1. Introduction
- 5.2. Transient Analysis
- 5.2.1. The s-Transform Method
- 5.3. Birth and Death Processes
- 5.3.1. Local Balance Equations
- 5.3.2. Transient Analysis of Birth and Death Processes
- 5.4. First Passage Time
- 5.5. The Uniformization Method
- 5.6. Reversible CTMCs
- 5.7. Problems
- 6.1. Introduction
- 6.2. Renewal Processes
- 6.2.1. The Renewal Equation
- 6.2.2. Alternative Approach
- 6.2.3. The Elementary Renewal Theorem
- 6.2.4. Random Incidence and Residual Time
- 6.2.5. Delayed Renewal Process
- 6.3. Renewal-Reward Process
- 6.3.1. The Reward-Renewal Theorem
- 6.4. Regenerative Processes
- 6.4.1. Inheritance of Regeneration
- 6.4.2. Delayed Regenerative Process
- 6.4.3. Regenerative Simulation
- 6.5. Markov Renewal Process
- 6.5.1. The Markov Renewal Function
- 6.6. Semi-Markov Processes
- 6.6.1. Discrete-Time SMPs
- 6.6.2. Continuous-Time SMPs
- 6.7. Markov Regenerative Process
- 6.8. Markov Jump Processes
- 6.8.1. The Homogeneous Markov Jump Process
- 6.9. Problems
- 7.1. Introduction
- 7.2. Description of a Queueing System
- 7.3. The Kendall Notation
- 7.4. Little's Formula
- 7.5. The PASTA Property
- 7.6. The M/M/1 Queueing System
- 7.6.1. Stochastic Balance
- 7.6.2. Total Time and Waiting Time Distributions of the M/M/1 Queueing System
- 7.7. Examples of Other M/M Queueing Systems
- 7.7.1. The M/M/c Queue: The c-Server System
- 7.7.2. The M/M/1/K Queue: The Single-Server Finite-Capacity System
- 7.7.3. The M/M/c/c Queue: The c-Server Loss System
- 7.7.4. The M/M/1//K Queue: The Single-Server Finite Customer Population System
- 7.8. M/G/1 Queue
- 7.8.1. Waiting Time Distribution of the M/G/1 Queue
- 7.8.2. The M/Ek/1 Queue
- 7.8.3. The M/D/1 Queue
- 7.8.4. The M/M/1 Queue Revisited
- 7.8.5. The M/Hk/1 Queue
- 7.9. G/M/1 Queue
- 7.9.1. The Ek/M/1 Queue
- 7.9.2. The D/M/1 Queue
- 7.10. M/G/1 Queues with Priority
- 7.10.1. Nonpreemptive Priority
- 7.10.2. Preemptive Resume Priority
- 7.10.3. Preemptive Repeat Priority
- 7.11. Markovian Networks of Queues
- 7.11.1. Burke's Output Theorem and Tandem Queues
- 7.11.2. Jackson or Open Queueing Networks
- 7.11.3. Closed Queueing Networks
- 7.12. Applications of Markovian Queues
- 7.13. Problems
- 8.1. Introduction
- 8.2. Occupancy Probability
- 8.3. Random Walk as a Markov Chain
- 8.4. Symmetric Random Walk as a Martingale
- 8.5. Random Walk with Barriers
- 8.6. Gambler's Ruin
- 8.6.1. Ruin Probability
- 8.6.2. Alternative Derivation of Ruin Probability
- 8.6.3. Duration of a Game
- 8.7. Random Walk with Stay
- 8.8. First Return to the Origin
- 8.9. First Passage Times for Symmetric Random Walk
- 8.9.1. First Passage Time via the Generating Function
- 8.9.2. First Passage Time via the Reflection Principle
- 8.9.3. Hitting Time and the Reflection Principle
- 8.10. The Ballot Problem and the Reflection Principle
- 8.10.1. The Conditional Probability Method
- 8.11. Returns to the Origin and the Arc-Sine Law
- 8.12. Maximum of a Random Walk
- 8.13. Random Walk on a Graph
- 8.13.1. Random Walk on a Weighted Graph
- 8.14. Correlated Random Walk
- 8.15. Continuous-Time Random Walk
- 8.15.1. The Master Equation
- 8.16. Self-Avoiding Random Walk
- 8.17. Nonreversing Random Walk
- 8.18. Applications of Random Walk
- 8.18.1. Web Search
- 8.18.2. Insurance Risk
- 8.18.3. Content of a Dam
- 8.18.4. Cash Management
- 8.18.5. Mobility Models in Mobile Networks
- 8.19. Summary
- 8.20. Problems
- 9.1. Introduction
- 9.2. Mathematical Description
- 9.3. Brownian Motion with Drift
- 9.4. Brownian Motion as a Markov Process
- 9.5. Brownian Motion as a Martingale
- 9.6. First Passage Time of a Brownian Motion
- 9.7. Maximum of a Brownian Motion
- 9.8. First Passage Time in an Interval
- 9.9. The Brownian Bridge
- 9.10. Geometric Brownian Motion
- 9.11. Introduction to Stochastic Calculus
- 9.11.1. Stochastic Differential Equation and the Ito Process
- 9.11.2. The Ito Integral
- 9.11.3. Ito's Formula
- 9.12. Solution of Stochastic Differential Equations
- 9.13. Solution of the Geometric Brownian Motion
- 9.14. The Ornstein-Uhlenbeck Process
- 9.14.1. Solution of the OU SDE
- 9.14.2. First Alternative Solution Method
- 9.14.3. Second Alternative Solution Method
- 9.15. Mean-Reverting OU Process
- 9.16. Fractional Brownian Motion
- 9.16.1. Self-Similar Processes
- 9.16.2. Long-Range Dependence
- 9.16.3. Self-Similarity and Long-Range Dependence
- 9.16.4. FBM Revisited
- 9.17. Fractional Gaussian Noise
- 9.18. Multifractional Brownian Motion
- 9.19. Problems
- 10.1. Introduction
- 10.2. Mathematical Preliminaries
- 10.3. Models of Diffusion
- 10.3.1. Diffusion as a Limit of Random Walk: The Fokker-Planck Equation
- 10.3.2. The Langevin Equation
- 10.3.3. Fick's Equations
- 10.4. Examples of Diffusion Processes
- 10.4.1. Brownian Motion
- 10.4.2. Brownian Motion with Drift
- 10.5. Correlated Random Walk and the Telegraph Equation
- 10.6. Introduction to Fractional Calculus
- 10.6.1. Gamma Function
- 10.6.2. Mittag-Leffler Functions
- 10.6.3. Laplace Transform
- 10.6.4. Fractional Derivatives
- 10.6.5. Fractional Integrals
- 10.6.6. Definitions of Fractional Integro-Differentials
- 10.6.7. Riemann-Liouville Fractional Derivative
- 10.6.8. Caputo Fractional Derivative
- 10.6.9. Fractional Differential Equations
- 10.6.10. Relaxation Differential Equation of Integer Order
- 10.6.11. Oscillation Differential Equation of Integer Order
- 10.6.12. Relaxation and Oscillation FDEs
- 10.7. Anomalous (or Fractional) Diffusion
- 10.7.1. Fractional Diffusion and Continuous-Time Random Walk
- 10.7.2. Solution of the Fractional Diffusion Equation
- 10.8. Problems
- 11.1. Introduction
- 11.2. Generalized Central Limit Theorem
- 11.3. Stable Distribution
- 11.4. Levy Distribution
- 11.5. Levy Processes
- 11.6. Infinite Divisibility
- 11.6.1. Infinite Divisibility of the Poisson Process
- 11.6.2. Infinite Divisibility of the Compound Poisson Process
- 11.6.3. Infinite Divisibility of the Brownian Motion with Drift
- 11.7. Jump-Diffusion Processes
- 11.7.1. Models of Jump-Diffusion Process
- 11.7.2. Normal Jump-Diffusion Model
- 11.7.3. Bernoulli Jump Process
- 11.7.4. Double Exponential Jump-Diffusion Model
- 11.7.5. Jump Diffusions and Levy Processes
- 12.1. Introduction
- 12.2. Overview of Matrix-Analytic Methods
- 12.3. Markovian Arrival Process
- 12.3.1. Properties of MAP
- 12.4. Batch Markovian Arrival Process
- 12.4.1. Properties of BMAP
- 12.5. Markov-Modulated Poisson Process
- 12.5.1. The Interrupted Poisson Process
- 12.5.2. The Switched Poisson Process
- 12.5.3. Properties of MMPP
- 12.5.4. The MMPP(2)/M/1 Queue
- 12.6. Markov-Modulated Bernoulli Process
- 12.6.1. The MMBP(2)
- 12.7. Sample Applications of MAP and Its Derivatives
- 12.8. Problems
- 13.1. Introduction
- 13.2. Markov Decision Processes
- 13.2.1. Overview of DP
- 13.2.2. Example of DP Problem
- 13.2.3. Markov Reward Processes
- 13.2.4. MDP Basics
- 13.2.5. MDPs with Discounting
- 13.2.6. Solution Methods
- 13.3. Semi-MDPs
- 13.3.1. Semi-Markov Reward Model
- 13.3.2. Discounted Reward
- 13.3.3. Analysis of the Continuous-Decision-Interval SMDPs
- 13.3.4. Solution by Policy Iteration
- 13.3.5. SMDP with Discounting
- 13.3.6. Solution by Policy Iteration When Discounting Is Used
- 13.3.7. Analysis of the Discrete-Decision-Interval SMDPs with Discounting
- 13.3.8. Continuous-Time Markov Decision Processes
- 13.3.9. Applications of SMDPs
- 13.4. Partially Observable MDPs
- 13.4.1. Partially Observable Markov Processes
- 13.4.2. POMDP Basics
- 13.4.3. Solving POMDPs
- 13.4.4. Computing the Optimal Policy
- 13.4.5. Approximate Solutions of POMDP
- 13.5. Problems
- 14.1. Introduction
- 14.2. HMM Basics
- 14.3. HMM Assumptions
- 14.4. Three Fundamental Problems
- 14.5. Solution Methods
- 14.5.1. The Evaluation Problem
- 14.5.2. The Decoding Problem and the Viterbi Algorithm
- 14.5.3. The Learning Problem and the Baum-Welch Algorithm
- 14.6. Types of HMMs
- 14.7. HMMs with Silent States
- 14.8. Extensions of HMMs
- 14.8.1. Hierarchical Hidden Markov Model
- 14.8.2. Factorial Hidden Markov Model
- 14.8.3. Coupled Hidden Markov Model
- 14.8.4. Hidden Semi-Markov Models
- 14.8.5. PHMMs for Biological Sequence Analysis
- 14.9. Other Extensions of HMM
- 14.10. Problems
- 15.1. Introduction
- 15.2. Temporal Point Processes
- 15.3. Specific Temporal Point Processes
- 15.3.1. Poisson Point Processes
- 15.3.2. Cox Point Processes
- 15.4. Spatial Point Processes
- 15.5. Specific Spatial Point Processes
- 15.5.1. Spatial Poisson Point Processes
- 15.5.2. Spatial Cox Point Processes
- 15.5.3. Spatial Gibbs Processes
- 15.6. Spatial-Temporal Point Processes
- 15.7. Operations on Point Processes
- 15.7.1. Thinning
- 15.7.2. Superposition
- 15.7.3. Clustering
- 15.8. Marked Point Processes
- 15.9. Introduction to Markov Random Fields
- 15.9.1. MRF Basics
- 15.9.2. Graphical Representation
- 15.9.3. Gibbs Random Fields and the Hammersley-Clifford Theorem
- 15.10. Markov Point Processes
- 15.11. Markov Marked Point Processes
- 15.12. Applications of Markov Point Processes
- 15.13. Problems.