A course in time series analysis /

Bibliographic Details
Imprint:New York : John Wiley, c2001.
Description:xvii, 460 p. : ill. ; 25 cm.
Language:English
Series:Wiley series in probability and statistics : probability and statistics section.
Format: Print Book
URL for this record:http://pi.lib.uchicago.edu/1001/cat/bib/4375623
Other authors / contributors:Peña, Daniel, 1948-
Tiao, George C., 1933-
Tsay, Ruey S., 1951-
ISBN:047136164X (cloth : alk. paper)
Notes:Includes bibliographical references and index.
Table of Contents:
  • Preface
  • About ECAS
  • Contributors
  • 1. Introduction
  • 1.1. Examples of time series problems
  • 1.1.1. Stationary series
  • 1.1.2. Nonstationary series
  • 1.1.3. Seasonal series
  • 1.1.4. Level shifts and outliers in time series
  • 1.1.5. Variance changes
  • 1.1.6. Asymmetric time series
  • 1.1.7. Unidirectional-feedback relation between series
  • 1.1.8. Comovement and cointegration
  • 1.2. Overview of the book
  • 1.3. Further reading
  • Part I. Basic Concepts in Univariate Time Series
  • 2. Univariate Time Series: Autocorrelation, Linear Prediction, Spectrum, and State-Space Model
  • 2.1. Linear time series models
  • 2.2. The autocorrelation function
  • 2.3. Lagged prediction and the partial autocorrelation function
  • 2.4. Transformations to stationarity
  • 2.5. Cycles and the periodogram
  • 2.6. The spectrum
  • 2.7. Further interpretation of time series acf, pacf, and spectrum
  • 2.8. State-space models and the Kalman filter
  • 3. Univariate Autoregressive Moving-Average Models
  • 3.1. Introduction
  • 3.1.1. Univariate ARMA models
  • 3.1.2. Outline of the chapter
  • 3.2. Some basic properties of univariate ARMA models
  • 3.2.1. The φ and π weights
  • 3.2.2. Stationarity condition and autocovariance structure of z_t
  • 3.2.3. The autocorrelation function
  • 3.2.4. The partial autocorrelation function
  • 3.2.5. The extended autocorrelation function
  • 3.3. Model specification strategy
  • 3.3.1. Tentative specification
  • 3.3.2. Tentative model specification via SEACF
  • 3.4. Examples
  • 4. Model Fitting and Checking, and the Kalman Filter
  • 4.1. Prediction error and the estimation criterion
  • 4.2. The likelihood of ARMA models
  • 4.3. Likelihoods calculated using orthogonal errors
  • 4.4. Properties of estimates and problems in estimation
  • 4.5. Checking the fitted model
  • 4.6. Estimation by fitting to the sample spectrum
  • 4.7. Estimation of structural models by the Kalman filter
  • 5. Prediction and Model Selection
  • 5.1. Introduction
  • 5.2. Properties of minimum mean-square error prediction
  • 5.2.1. Prediction by the conditional expectation
  • 5.2.2. Linear predictions
  • 5.3. The computation of ARIMA forecasts
  • 5.4. Interpreting the forecasts from ARIMA models
  • 5.4.1. Nonseasonal models
  • 5.4.2. Seasonal models
  • 5.5. Prediction confidence intervals
  • 5.5.1. Known parameter values
  • 5.5.2. Unknown parameter values
  • 5.6. Forecast updating
  • 5.6.1. Computing updated forecasts
  • 5.6.2. Testing model stability
  • 5.7. The combination of forecasts
  • 5.8. Model selection criteria
  • 5.8.1. The FPE and AIC criteria
  • 5.8.2. The Schwarz criterion
  • 5.9. Conclusions
  • 6. Outliers, Influential Observations, and Missing Data
  • 6.1. Introduction
  • 6.2. Types of outliers in time series
  • 6.2.1. Additive outliers
  • 6.2.2. Innovative outliers
  • 6.2.3. Level shifts
  • 6.2.4. Outliers and intervention analysis
  • 6.3. Procedures for outlier identification and estimation
  • 6.3.1. Estimation of outlier effects
  • 6.3.2. Testing for outliers
  • 6.4. Influential observations
  • 6.4.1. Influence on time series
  • 6.4.2. Influential observations and outliers
  • 6.5. Multiple outliers
  • 6.5.1. Masking effects
  • 6.5.2. Procedures for multiple outlier identification
  • 6.6. Missing-value estimation
  • 6.6.1. Optimal interpolation and inverse autocorrelation function
  • 6.6.2. Estimation of missing values
  • 6.7. Forecasting with outliers
  • 6.8. Other approaches
  • 6.9. Appendix
  • 7. Automatic Modeling Methods for Univariate Series
  • 7.1. Classical model identification methods
  • 7.1.1. Subjectivity of the classical methods
  • 7.1.2. The difficulties with mixed ARMA models
  • 7.2. Automatic model identification methods
  • 7.2.1. Unit root testing
  • 7.2.2. Penalty function methods
  • 7.2.3. Pattern identification methods
  • 7.2.4. Uniqueness of the solution and the purpose of modeling
  • 7.3. Tools for automatic model identification
  • 7.3.1. Test for the log-level specification
  • 7.3.2. Regression techniques for estimating unit roots
  • 7.3.3. The Hannan–Rissanen method
  • 7.3.4. Liu's filtering method
  • 7.4. Automatic modeling methods in the presence of outliers
  • 7.4.1. Algorithms for automatic outlier detection and correction
  • 7.4.2. Estimation and filtering techniques to speed up the algorithms
  • 7.4.3. The need to robustify automatic modeling methods
  • 7.4.4. An algorithm for automatic model identification in the presence of outliers
  • 7.5. An automatic procedure for the general regression-ARIMA model in the presence of outliers, special effects, and, possibly, missing observations
  • 7.5.1. Missing observations
  • 7.5.2. Trading day and Easter effects
  • 7.5.3. Intervention and regression effects
  • 7.6. Examples
  • 7.7. Tabular summary
  • 8. Seasonal Adjustment and Signal Extraction in Time Series
  • 8.1. Introduction
  • 8.2. Some remarks on the evolution of seasonal adjustment methods
  • 8.2.1. Evolution of the methodologic approach
  • 8.2.2. The situation at present
  • 8.3. The need for preadjustment
  • 8.4. Model specification
  • 8.5. Estimation of the components
  • 8.5.1. Stationary case
  • 8.5.2. Nonstationary series
  • 8.6. Historical or final estimator
  • 8.6.1. Properties of final estimator
  • 8.6.2. Component versus estimator
  • 8.6.3. Covariance between estimators
  • 8.7. Estimators for recent periods
  • 8.8. Revisions in the estimator
  • 8.8.1. Structure of the revision
  • 8.8.2. Optimality of the revisions
  • 8.9. Inference
  • 8.9.1. Optimal forecasts of the components
  • 8.9.2. Estimation error
  • 8.9.3. Growth rate precision
  • 8.9.4. The gain from concurrent adjustment
  • 8.9.5. Innovations in the components (pseudoinnovations)
  • 8.10. An example
  • 8.11. Relation with fixed filters
  • 8.12. Short- versus long-term trends; measuring economic cycles
  • Part II. Advanced Topics in Univariate Time Series
  • 9. Heteroscedastic Models
  • 9.1. The ARCH model
  • 9.1.1. Some simple properties of ARCH models
  • 9.1.2. Weaknesses of ARCH models
  • 9.1.3. Building ARCH models
  • 9.1.4. An illustrative example
  • 9.2. The GARCH model
  • 9.2.1. An illustrative example
  • 9.2.2. Remarks
  • 9.3. The exponential GARCH model
  • 9.3.1. An illustrative example
  • 9.4. The CHARMA model
  • 9.5. Random coefficient autoregressive (RCA) model
  • 9.6. Stochastic volatility model
  • 9.7. Long-memory stochastic volatility model
  • 10. Nonlinear Time Series Models: Testing and Applications
  • 10.1. Introduction
  • 10.2. Nonlinearity tests
  • 10.2.1. The test
  • 10.2.2. Comparison and application
  • 10.3. The TAR model
  • 10.3.1. U.S. real GNP
  • 10.3.2. Postsample forecasts and discussion
  • 10.4. Concluding remarks
  • 11. Bayesian Time Series Analysis
  • 11.1. Introduction
  • 11.2. A general univariate time series model
  • 11.3. Estimation
  • 11.3.1. Gibbs sampling
  • 11.3.2. Griddy Gibbs
  • 11.3.3. An illustrative example
  • 11.4. Model discrimination
  • 11.4.1. A mixed model with switching
  • 11.4.2. Implementation
  • 11.5. Examples
  • 12. Nonparametric Time Series Analysis: Nonparametric Regression, Locally Weighted Regression, Autoregression, and Quantile Regression
  • 12.1. Introduction
  • 12.2. Nonparametric regression
  • 12.3. Kernel estimation in time series
  • 12.4. Problems of simple kernel estimation and restricted approaches
  • 12.5. Locally weighted regression
  • 12.6. Applications of locally weighted regression to time series
  • 12.7. Parameter selection
  • 12.8. Time series decomposition with locally weighted regression
  • 13. Neural Network Models
  • 13.1. Introduction
  • 13.2. The multilayer perceptron
  • 13.3. Autoregressive neural network models
  • 13.3.1. Example: Sunspot series
  • 13.4. The recurrent perceptron
  • 13.4.1. Examples of recurrent neural network models
  • 13.4.2. A unifying view
  • Part III. Multivariate Time Series
  • 14. Vector ARMA Models
  • 14.1. Introduction
  • 14.2. Transfer function or unidirectional models
  • 14.3. The vector ARMA model
  • 14.3.1. Some simple examples
  • 14.3.2. Relationship to transfer function model
  • 14.3.3. Cross-covariance and correlation matrices
  • 14.3.4. The partial autoregression matrices
  • 14.4. Model building strategy for multiple time series
  • 14.4.1. Tentative specification
  • 14.4.2. Estimation
  • 14.4.3. Diagnostic checking
  • 14.5. Analyses of three examples
  • 14.5.1. The SCC data
  • 14.5.2. The gas furnace data
  • 14.5.3. The census housing data
  • 14.6. Structural analysis of multivariate time series
  • 14.6.1. A canonical analysis of multiple time series
  • 14.7. Scalar component models in multiple time series
  • 14.7.1. Scalar component models
  • 14.7.2. Exchangeable models and overparameterization
  • 14.7.3. Model specification via canonical correlation analysis
  • 14.7.4. An illustrative example
  • 14.7.5. Some further remarks
  • 15. Cointegration in the VAR Model
  • 15.1. Introduction
  • 15.1.1. Basic definitions
  • 15.2. Solving autoregressive equations
  • 15.2.1. Some examples
  • 15.2.2. An inversion theorem for matrix polynomials
  • 15.2.3. Granger's representation
  • 15.2.4. Prediction
  • 15.3. The statistical model for I(1) variables
  • 15.3.1. Hypotheses on cointegrating relations
  • 15.3.2. Estimation of cointegrating vectors and calculation of test statistics
  • 15.3.3. Estimation of β under restrictions
  • 15.4. Asymptotic theory
  • 15.4.1. Asymptotic results
  • 15.4.2. Test for cointegrating rank
  • 15.4.3. Asymptotic distribution of β and test for restrictions on β
  • 15.5. Various applications of the cointegration model
  • 15.5.1. Rational expectations
  • 15.5.2. Arbitrage pricing theory
  • 15.5.3. Seasonal cointegration
  • 16. Identification of Linear Dynamic Multiinput/Multioutput Systems
  • 16.1. Introduction and problem statement
  • 16.2. Representations of linear systems
  • 16.2.1. Input/output representations
  • 16.2.2. Solutions of linear vector difference equations (VDEs)
  • 16.2.3. ARMA and state-space representations
  • 16.3. The structure of state-space systems
  • 16.4. The structure of ARMA systems
  • 16.5. The realization of state-space systems
  • 16.5.1. General structure
  • 16.5.2. Echelon forms
  • 16.6. The realization of ARMA systems
  • 16.7. Parametrization
  • 16.8. Estimation of real-valued parameters
  • 16.9. Dynamic specification
  • Index