R. Douc, É. Moulines, T. Rydén (2004)
Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime. Annals of Statistics, 32
Daniel Viaud (1995)
Random perturbations of recursive sequences with an application to an epidemic model. Journal of Applied Probability, 32
M. Hassell (1979)
Capture-recapture methods. Nature, 279
B. Anderson, J. Moore, M. Eslami (1979)
Optimal Filtering
A. Dempster, N. Laird, D. Rubin (1977)
Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39
P. Del Moral, M. Ledoux, L. Miclo (2003)
On contraction properties of Markov kernels. Probability Theory and Related Fields, 126
M. Evans, T. Swartz (1995)
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems. Statistical Science, 10
D. Titterington, A. Smith, U. Makov (1986)
Statistical analysis of finite mixture distributions
M. Segal, E. Weinstein (1989)
A new method for evaluating the log-likelihood gradient, the Hessian, and the Fisher information matrix for linear dynamic systems. IEEE Trans. Inf. Theory, 35
K. Athreya, Hani Doss, J. Sethuraman (1996)
On the convergence of the Markov chain simulation method. Annals of Statistics, 24
N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller (1953)
Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21
H. Niederreiter (1992)
Random number generation and Quasi-Monte Carlo methods, 63
David Williams (1991)
Probability with Martingales
I. Collings, T. Rydén (1998)
A new maximum likelihood gradient algorithm for on-line hidden Markov model identification. Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '98), 4
(1987)
Beyond Kalman Filters
T. Louis (1982)
Finding the Observed Information Matrix When Using the EM Algorithm. Journal of the Royal Statistical Society, Series B, 44
(1999)
Convergence controls
H. Robbins (1948)
Mixture of Distributions. Annals of Mathematical Statistics, 19
R. Boyles (1983)
On the Convergence of the EM Algorithm. Journal of the Royal Statistical Society, Series B, 45
R. Stratonovich (1960)
Conditional Markov Processes. Theory of Probability and Its Applications, 5
(1994)
The collapsed Gibbs sampler with applications to a gene regulation problem
W. Feller (1943)
On a General Class of "Contagious" Distributions. Annals of Mathematical Statistics, 14
L. Younes (1988)
Estimation and annealing for Gibbsian fields. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 24
Jun Liu (1996)
Metropolized independent sampling with comparisons to rejection sampling and importance sampling. Statistics and Computing, 6
K. Athreya, P. Ney (1978)
A New Approach to the Limit Theory of Recurrent Markov Chains. Transactions of the American Mathematical Society, 245
D. Scott, R. Tweedie (1996)
Explicit Rates of Convergence of Stochastically Ordered Markov Chains
R. van der Merwe, A. Doucet, N. de Freitas, E. Wan (2000)
The Unscented Particle Filter
Jun Liu, W. Wong, A. Kong (1994)
Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes. Biometrika, 81
L. Rabiner (1989)
A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77
Hisashi Tanizaki (2000)
Nonlinear and Non-Gaussian State-Space Modeling with Monte Carlo Techniques : A Survey and Comparative Study
R. Serfling (1980)
Approximation Theorems of Mathematical Statistics
A. Doucet, S. Godsill, C. Andrieu (2000)
On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10
F. Quintana, Jun Liu, G. Pino (1999)
Monte Carlo EM with importance reweighting and its applications in random effects models. Computational Statistics & Data Analysis, 29
Radford Neal (1997)
Markov Chain Monte Carlo Methods Based on `Slicing' the Density Function
T. Petrie (1967)
Probabilistic functions of finite-state Markov chains. Proceedings of the National Academy of Sciences of the United States of America, 57(3)
R. Levine, Juanjuan Fan (2004)
An automated (Markov chain) Monte Carlo EM algorithm. Journal of Statistical Computation and Simulation, 74
J. Ziv, N. Merhav (1992)
Estimating the number of states of a finite-state source. IEEE Trans. Inf. Theory, 38
D. Fredkin, J. Rice (1992)
Maximum likelihood estimation and identification directly from single-channel recordings. Proceedings of the Royal Society of London, Series B: Biological Sciences, 249
J. Doob (1953)
Stochastic processes
Hisashi Ito, S. Amari, Kingo Kobayashi (1992)
Identifiability of hidden Markov information sources and their minimum degrees of freedom. IEEE Trans. Inf. Theory, 38
C. F. J. Wu (1983)
On the Convergence Properties of the EM Algorithm. Annals of Statistics, 11
K. Lange (1995)
A gradient algorithm locally equivalent to the EM algorithm. Journal of the Royal Statistical Society, Series B, 57
P. Del Moral (2004)
Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications
P. Bickel, Y. Ritov, T. Rydén (1998)
Asymptotic normality of the maximum-likelihood estimator for general hidden Markov models. Annals of Statistics, 26
A. Doucet, N. de Freitas, N. Gordon (2001)
Sequential Monte Carlo Methods in Practice
R. Horn, Charles Johnson (1985)
Matrix analysis
J. Handschin, D. Mayne (1969)
Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linear filtering. International Journal of Control, 9
G. Sandmann, S. Koopman (1998)
Estimation of stochastic volatility models via Monte Carlo maximum likelihood. Journal of Econometrics, 87
A. Rukhin (1991)
Tools for statistical inference
E. Lehmann, G. Casella (1998)
Theory of point estimation
T. Cover, Joy Thomas (2005)
Elements of Information Theory
A. Gut (1987)
Stopped Random Walks
S. Levinson, L. Rabiner, M. Sondhi (1983)
An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. The Bell System Technical Journal, 62
Jitendra Tugnait (1981)
Adaptive estimation and identification for discrete systems with Markov jump parameters. 1981 20th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes
G. Kitagawa (1987)
Non-Gaussian State-Space Modeling of Nonstationary Time Series. Journal of the American Statistical Association, 82
(1984)
Asymptotic properties of procedures of stochastic approximation with dependent noise
(1993)
Bayesian estimation of hidden
(2001)
Implementations of the Monte Carlo
M. Pitt, N. Shephard (1999)
Filtering via Simulation: Auxiliary Particle Filters. Journal of the American Statistical Association, 94
(1987)
Universal sequential coding of messages
L. Rabiner, B. Juang (1993)
Fundamentals of speech recognition
L. Devroye (1986)
Non-Uniform Random Variate Generation
A. Ostrowski (1967)
Solution of equations and systems of equations. Mathematics of Computation, 21
John Geweke (1989)
Bayesian Inference in Econometric Models Using Monte Carlo Integration. Econometrica, 57
(1998)
Reparameterisation strategies for
N. Jain, B. Jamison (1967)
Contributions to Doeblin's theory of Markov processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 8
S. Jarner, Ernst Hansen (2000)
Geometric ergodicity of Metropolis algorithms. Stochastic Processes and their Applications, 85
E. Punskaya, A. Doucet, W. Fitzgerald (2002)
On the use and misuse of particle filtering in digital communications. 2002 11th European Signal Processing Conference
Y. Ephraim, N. Merhav (2002)
Hidden Markov processes. IEEE Trans. Inf. Theory, 48
K. Mengersen, R. Tweedie (1996)
Rates of convergence of the Hastings and Metropolis algorithms. Annals of Statistics, 24
B. Carlin, Nicholas Polson, D. Stoffer (1992)
A Monte Carlo Approach to Nonnormal and Nonlinear State-Space Modeling. Journal of the American Statistical Association, 87
E. Nummelin (1978)
A splitting technique for Harris recurrent Markov chains. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 43
L. Baum, T. Petrie (1966)
Statistical Inference for Probabilistic Functions of Finite State Markov Chains. Annals of Mathematical Statistics, 37
O. Zeitouni, J. Ziv, N. Merhav (1991)
When Is the Generalized Likelihood Ratio Test Optimal? Proceedings of the 1991 IEEE International Symposium on Information Theory
E. Beale, W. Zangwill (1970)
Nonlinear Programming: A Unified Approach, 133
S. Scott (2002)
Bayesian Methods for Hidden Markov Models. Journal of the American Statistical Association, 97
O. Cappé, V. Buchoux, É. Moulines (1998)
Quasi-Newton method for maximum likelihood estimation of hidden Markov models. Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '98), 4
R. Elliott, V. Krishnamurthy (1999)
New finite-dimensional filters for parameter estimation of discrete-time linear Gaussian models. IEEE Trans. Autom. Control., 44
P. Del Moral, J. Jacod (2001)
Interacting Particle Filtering with Discrete-Time Observations: Asymptotic Behaviour in the Gaussian Case
Jun Liu, Rong Chen, T. Logvinenko (2001)
A Theoretical Framework for Sequential Importance Sampling with Resampling
I. Ibragimov, R. Khasʹminskiĭ (1981)
Statistical estimation : asymptotic theory
J. Hammersley, D. Handscomb (1965)
Monte Carlo Methods. Applied Statistics, 14
L. Devroye, T. Klincsek (1981)
Average time behavior of distributive sorting algorithms. Computing, 26
M. Jamshidian, R. Jennrich (1997)
Acceleration of the EM Algorithm by using Quasi-Newton Methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59
Paolo Giudici, T. Rydén, P. Vandekerkhove (2000)
Likelihood-Ratio Tests for Hidden Markov Models. Biometrics, 56
L. Whitley (1994)
A genetic algorithm tutorial. Statistics and Computing, 4
A. Kong, Jun Liu, W. Wong (1994)
Sequential Imputations and Bayesian Missing Data Problems. Journal of the American Statistical Association, 89
Boris Polyak, A. Juditsky (1992)
Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30
(1987)
A noniterative sampling/importance resampling
S. MacEachern, M. Clyde, Jun Liu (1999)
Sequential importance sampling for nonparametric Bayes models: The next generation. Canadian Journal of Statistics, 27
D. Mayne (1966)
A solution of the smoothing problem for linear dynamic systems. Autom., 4
O. Zeitouni, M. Gutman (1991)
On universal hypotheses testing via large deviations. IEEE Trans. Inf. Theory, 37
Chuang-Chun Liu, P. Narayan (1994)
Order estimation and sequential universal data compression of a hidden Markov source by the method of mixtures. IEEE Trans. Inf. Theory, 40
X. Meng (1994)
On the rate of convergence of the ECM algorithm. Annals of Statistics, 22
A. Wald (1949)
Note on the Consistency of the Maximum Likelihood Estimate. Annals of Mathematical Statistics, 20
Nicholas Polson, Jonathan Stroud, Peter Müller (2002)
Practical Filtering with Sequential Parameter Learning
H. Rauch, Fang-Wu Tung, C. Striebel (1965)
On the maximum likelihood estimates for linear dynamic systems
Hisashi Tanizaki, R. Mariano (1998)
Nonlinear and non-Gaussian state-space modeling with Monte Carlo simulations. Journal of Econometrics, 83
I. Schick, S. Mitter (1994)
Robust Recursive Estimation in the Presence of Heavy-Tailed Observation Noise. Annals of Statistics, 22
I. MacDonald, W. Zucchini (1997)
Hidden Markov and Other Models for Discrete-valued Time Series
M. Stephens (2000)
Bayesian analysis of mixture models with an unknown number of components: an alternative to reversible jump methods. Annals of Statistics, 28
W. Wonham (1964)
Some applications of stochastic differential equations to optimal nonlinear filtering. SIAM Journal on Control and Optimization
N. Shephard, M. Pitt (1997)
Likelihood analysis of non-Gaussian measurement time series. Biometrika, 84
(1973)
Optimum Monte Carlo sampling using
B. Ristic, S. Arulampalam, N. Gordon (2004)
Beyond the Kalman Filter: Particle Filters for Tracking Applications
S. Young (1996)
A review of large-vocabulary continuous-speech recognition
M. Woodbury (1972)
A missing information principle: theory and applications
P. Overschee, B. Moor (1993)
Subspace algorithms for the stochastic identification problem. Autom., 29
N. Gordon, D. Salmond, Adrian Smith (1993)
Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, 140
S. Nielsen (2000)
The stochastic EM algorithm: estimation and asymptotic results. Bernoulli, 6
A. Shiryayev (1992)
On The Limit Theorems of Probability Theory
Y. Ho, R. Lee (1964)
A Bayesian approach to problems in stochastic estimation and control. IEEE Transactions on Automatic Control, 9
J. Rosenthal (1995)
Minorization Conditions and Convergence Rates for Markov Chain Monte Carlo. Journal of the American Statistical Association, 90
P. Glynn, D. Iglehart (1989)
Importance sampling for stochastic simulations. Management Science, 35
N. Gupta, R. Mehra (1974)
Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations. IEEE Transactions on Automatic Control, 19
A. Viterbi (1967)
Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory, 13
(1951)
Particle Filters for Target Tracking
C. Robert, G. Casella (2005)
Monte Carlo Statistical Methods. Technometrics, 47
H. Künsch (2000)
State Space and Hidden Markov Models
R. Durrett (1993)
Probability: Theory and Examples
M. Tanner, W. Wong (1987)
The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82
G. Petris, L. Tardella (2003)
A geometric approach to transdimensional Markov chain Monte Carlo. Canadian Journal of Statistics, 31
Thomas Kaijser (1975)
A Limit Theorem for Partially Observed Markov Chains. Annals of Probability, 3
M. Askar, H. Derin (1981)
A recursive algorithm for the Bayes solution of the smoothing problem. IEEE Transactions on Automatic Control, 26
O. Cappé (2001)
Recursive computation of smoothed functionals of hidden Markovian processes using a particle approximation. Monte Carlo Methods and Applications, 7
R. Fletcher (1988)
Practical Methods of Optimization
G. Roberts, R. Tweedie (1996)
Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika, 83
A. Ullah, Kusum Mundra, A. Nagar, S. Basu, Samita Sareen, B. Raj (2002)
Asymmetry of Business Cycles: The Markov-Switching Approach
H. Akashi, H. Kumamoto (1977)
Random sampling approach to state estimation in switching environments. Autom., 13
Hisashi Tanizaki (2003)
Ch. 22. Nonlinear and non-Gaussian state-space modeling with Monte Carlo techniques: A survey and comparative study. Handbook of Statistics, 21
C. Robert (1994)
The Bayesian choice
A. Shiryaev (1967)
Addendum: On Stochastic Equations in the Theory of Conditional Markov Processes. Theory of Probability and Its Applications, 12
M. Seiler, F. Seiler (1989)
Numerical Recipes in C: The Art of Scientific Computing. Risk Analysis, 9
G. Roberts, J. Rosenthal (2004)
General state space Markov chains and MCMC algorithms. Probability Surveys, 1
L. Baum, T. Petrie, George Soules, Norman Weiss (1970)
A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains. Annals of Mathematical Statistics, 41
W. Press (1994)
Numerical Recipes in C++: The Art of Scientific Computing, 2nd Edition
O. Zeitouni, A. Dembo (1988)
Exact filters for the estimation of the number of transitions of finite-state continuous-time Markov processes. IEEE Trans. Inf. Theory, 34
(2003)
Hidden Markov models and the Baum-Welch algorithm
Jun Liu, Rong Chen (1995)
Blind Deconvolution via Sequential Imputations. Journal of the American Statistical Association, 90
L. Liporace (1982)
Maximum likelihood estimation for multivariate observations of Markov sources. IEEE Trans. Inf. Theory, 28
B. Leroux (1992)
Maximum-likelihood estimation for hidden Markov models. Stochastic Processes and Their Applications, 40
J. Pearl (1991)
Probabilistic reasoning in intelligent systems - networks of plausible inference
F. Le Gland, N. Oudjane (2004)
Stability and uniform approximation of nonlinear filters using the Hilbert metric and application to particle filters
X. Meng, D. Dyk (1997)
The EM Algorithm: An Old Folk-song Sung to a Fast New Tune. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59
A. Budhiraja, D. Ocone (1997)
Exponential stability of discrete-time filters for bounded observation noise. Systems & Control Letters, 30
R. Shumway, D. Stoffer (1991)
Dynamic linear models with switching. Journal of the American Statistical Association, 86
E. Weinstein, A. Oppenheim, M. Feder, J. Buck (1994)
Iterative and sequential algorithms for multisensor signal enhancement. IEEE Trans. Signal Process., 42
J. Handschin (1970)
Monte Carlo techniques for prediction and filtering of non-linear stochastic processes. Automatica, 6
R. Liptser, A. Shiryayev (1984)
Statistics of Random Processes I: General Theory
X. Meng, D. Rubin (1991)
Using EM to Obtain Asymptotic Variance-Covariance Matrices: The SEM Algorithm. Journal of the American Statistical Association, 86
S. Meyn, R. Tweedie (1993)
Markov Chains and Stochastic Stability
Greg Wei, M. Tanner (1990)
A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms. Journal of the American Statistical Association, 85
J. Hammersley, D. Handscomb (1964)
Monte Carlo Methods
A. Doucet, V. Tadić (2005)
Maximum Likelihood Parameter Estimation in General State-Space Models using Particle Methods
Hidden Markov models have become a widely used class of statistical models with applications in diverse areas such as communications engineering, bioinformatics, finance and many more. This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory. Topics range from filtering and smoothing of the hidden Markov chain to parameter estimation, Bayesian methods and estimation of the number of states. In a unified way the book covers both models with finite state spaces, which allow for exact algorithms for filtering, estimation etc., and models with continuous state spaces (also called state-space models), which require approximate simulation-based algorithms that are also described in detail. Simulation in hidden Markov models is addressed in five different chapters that cover both Markov chain Monte Carlo and sequential Monte Carlo approaches. Many examples illustrate the algorithms and theory. The book builds on recent developments, both at the foundational level and the computational level, to present a self-contained view.
The book also carefully treats Gaussian linear state-space models and their extensions, and it contains a chapter on general Markov chain theory and probabilistic aspects of hidden Markov models. This volume will suit anybody with an interest in inference for stochastic processes, and it will be useful for researchers and practitioners in areas such as statistics, signal processing, communications engineering, control theory, econometrics, finance and more. The algorithmic parts of the book do not require an advanced mathematical background, while the more theoretical parts require knowledge of probability theory at the measure-theoretic level.
Contents: Main Definitions and Notations. State Inference: Filtering and Smoothing Recursions; Advanced Topics in Smoothing; Applications of Smoothing. Monte Carlo Methods: Sequential Monte Carlo Methods; Advanced Topics in Sequential Monte Carlo; Analysis of Sequential Monte Carlo Methods. Parameter Inference: Maximum Likelihood Inference, Part I: Optimization Through Exact Smoothing; Maximum Likelihood Inference, Part II: Monte Carlo Optimization; Statistical Properties of the Maximum Likelihood Estimator; Fully Bayesian Approaches. Background and Complements: Elements of Markov Chain Theory; An Information-Theoretic Perspective on Order Estimation.
From the reviews:
"By providing an overall survey of results obtained so far in a very readable manner, and also presenting some new ideas, this well-written book will appeal to academic researchers in the field of HMMs, with PhD students working on related topics included. It will also appeal to practitioners and researchers from other fields by guiding them through the computational steps needed for making inference on HMMs and/or by providing them with the relevant underlying statistical theory. In the reviewer's opinion this book will shortly become a reference work in its field." (M. Iosifescu, Mathematical Reviews, Issue 2006e)
"This monograph is a valuable resource. It provides a good literature review, an excellent account of the state of the art research on the necessary theory and algorithms, and ample illustrations of numerous applications of HMM. It goes much beyond the earlier resources on HMM... I anticipate this work to serve well many Technometrics readers in the coming years." (Haikady N. Nagaraja, Technometrics, November 2006)
"This monograph is an attempt to present a reasonably complete up-to-date picture of the field of Hidden Markov Models (HMM) that is self-contained from a theoretical point of view and self sufficient from a methodological point of view. ... The book is written for academic researchers in the field of HMMs, and also for practitioners and researchers from other fields. ... all the theory is illustrated with relevant running examples. This voluminous book has indeed the potential to become a standard text on HMM." (R. Schlittgen, Zentralblatt MATH, Vol. 1080, 2006)
"The authors describe Hidden Markov Models (HMMs) as 'one of the most successful statistical modelling ideas ... in the last forty years.' The book considers both finite and infinite sample spaces. ... Illustrative examples ... recur throughout the book. ... This fascinating book offers new insights into the theory and application of HMMs, and in addition it is a useful source of reference for the wide range of topics considered." (B. J. T. Morgan, Short Book Reviews, Vol. 26 (2), 2006)
"In Inference in Hidden Markov Models, Cappé et al. present the current state of the art in HMMs in an eminently readable, thorough, and useful way. This is a very well-written book ... . The writing is clear and concise. ... the book will appeal to academic researchers in the field of HMMs, in particular PhD students working on related topics, by summing up the results obtained so far and presenting some new ideas ... ." (Robert Shearer, Interfaces, Vol. 37 (2), 2007)
About the authors: Olivier Cappé is Researcher for the French National Center for Scientific Research (CNRS). He received the Ph.D. degree in 1993 from Ecole Nationale Supérieure des Télécommunications (ENST), Paris, France, where he is currently a Research Associate. Most of his current research concerns computational statistics and statistical learning. Eric Moulines is Professor at ENST, Paris, France. He graduated from Ecole Polytechnique, France, in 1984 and received the Ph.D. degree from ENST in 1990. He has authored more than 150 papers in applied probability, mathematical statistics and signal processing. Tobias Rydén is Professor of Mathematical Statistics at Lund University, Sweden, where he also received his Ph.D. in 1993. His publications include papers ranging from statistical theory to algorithmic developments for hidden Markov models.
Published: Apr 18, 2006