Statistical computing

Bibliographic Details
Author / Creator: Kennedy, William J., 1936-
Imprint: New York : M. Dekker, c1980.
Description: xi, 591 p. : ill. ; 24 cm.
Language: English
Series: Statistics, textbooks and monographs ; v. 33
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/371889
Other authors / contributors: Gentle, James E., 1943- joint author.
ISBN: 0824768981
Notes: Includes bibliographies and index.
Table of Contents:
  • Preface
  • 1. Introduction
  • 1.1. Orientation
  • 1.2. Purpose
  • 1.3. Prerequisites
  • 1.4. Presentation of Algorithms
  • 2. Computer Organization
  • 2.1. Introduction
  • 2.2. Components of the Digital Computer System
  • 2.3. Representation of Numeric Values
  • 2.3.1. Integer Mode Representation
  • 2.3.2. Representation in Floating-Point Mode
  • 2.4. Floating- and Fixed-Point Arithmetic
  • 2.4.1. Floating-Point Arithmetic Operations
  • 2.4.2. Fixed-Point Arithmetic Operations
  • Exercises
  • References
  • 3. Error in Floating-Point Computation
  • 3.1. Introduction
  • 3.2. Types of Error
  • 3.3. Error Due to Approximation Imposed by the Computer
  • 3.4. Analyzing Error in a Finite Process
  • 3.5. Rounding Error in Floating-Point Computations
  • 3.6. Rounding Error in Two Common Floating-Point Calculations
  • 3.7. Condition and Numerical Stability
  • 3.8. Other Methods of Assessing Error in Computation
  • 3.9. Summary
  • Exercises
  • References
  • 4. Programming and Statistical Software
  • 4.1. Programming Languages: Introduction
  • 4.2. Components of Programming Languages
  • 4.2.1. Data Types
  • 4.2.2. Data Structures
  • 4.2.3. Syntax
  • 4.2.4. Control Structures
  • 4.3. Program Development
  • 4.4. Statistical Software
  • References and Further Readings
  • 5. Approximating Probabilities and Percentage Points in Selected Probability Distributions
  • 5.1. Notation and General Considerations
  • 5.1.1. Probability Distributions
  • 5.1.2. Accuracy Considerations
  • 5.2. General Methods in Approximation
  • 5.2.1. Approximate Transformation of Random Variables
  • 5.2.2. Closed Form Approximations
  • 5.2.3. General Series Expansion
  • 5.2.4. Exact Relationship Between Distributions
  • 5.2.5. Numerical Root Finding
  • 5.2.6. Continued Fractions
  • 5.2.7. Gaussian Quadrature
  • 5.2.8. Newton-Cotes Quadrature
  • 5.3. The Normal Distribution
  • 5.3.1. Normal Probabilities
  • 5.3.2. Normal Percentage Points
  • 5.4. Student's t Distribution
  • 5.4.1. t Probabilities
  • 5.4.2. t Percentage Points
  • 5.5. The Beta Distribution
  • 5.5.1. Evaluating the Incomplete Beta Function
  • 5.5.2. Inverting the Incomplete Beta Function
  • 5.6. F Distribution
  • 5.6.1. F Probabilities
  • 5.6.2. F Percentage Points
  • 5.7. Chi-Square Distribution
  • 5.7.1. Chi-Square Probabilities
  • 5.7.2. Chi-Square Percentage Points
  • Exercises
  • References and Further Readings
  • 6. Random Numbers: Generation, Tests and Applications
  • 6.1. Introduction
  • 6.2. Generation of Uniform Random Numbers
  • 6.2.1. Congruential Methods
  • 6.2.2. Feedback Shift Register Methods
  • 6.2.3. Coupled Generators
  • 6.2.4. Portable Generators
  • 6.3. Tests of Random Number Generators
  • 6.3.1. Theoretical Tests
  • 6.3.2. Empirical Tests
  • 6.3.3. Selecting a Random Number Generator
  • 6.4. General Techniques for Generation of Nonuniform Random Deviates
  • 6.4.1. Use of the Cumulative Distribution Function
  • 6.4.2. Use of Mixtures of Distributions
  • 6.4.3. Rejection Methods
  • 6.4.4. Table Sampling Methods for Discrete Distributions
  • 6.4.5. The Alias Method for Discrete Distributions
  • 6.5. Generation of Variates from Specific Distributions
  • 6.5.1. The Normal Distribution
  • 6.5.2. The Gamma Distribution
  • 6.5.3. The Beta Distribution
  • 6.5.4. The F, t, and Chi-Square Distributions
  • 6.5.5. The Binomial Distribution
  • 6.5.6. The Poisson Distribution
  • 6.5.7. Distribution of Order Statistics
  • 6.5.8. Some Other Univariate Distributions
  • 6.5.9. The Multivariate Normal Distribution
  • 6.5.10. Some Other Multivariate Distributions
  • 6.6. Applications
  • 6.6.1. The Monte Carlo Method
  • 6.6.2. Sampling and Randomization
  • Exercises
  • References and Further Readings
  • 7. Selected Computational Methods in Linear Algebra
  • 7.1. Introduction
  • 7.2. Methods Based on Orthogonal Transformations
  • 7.2.1. Householder Transformations
  • 7.2.2. Givens Transformations
  • 7.2.3. The Modified Gram-Schmidt Method
  • 7.2.4. Singular-Value Decomposition
  • 7.3. Gaussian Elimination and the Sweep Operator
  • 7.4. Cholesky Decomposition and Rank-One Update
  • Exercises
  • References and Further Readings
  • 8. Computational Methods for Multiple Linear Regression Analysis
  • 8.1. Basic Computational Methods
  • 8.1.1. Methods Using Orthogonal Triangularization of X
  • 8.1.2. Sweep Operations and Normal Equations
  • 8.1.3. Checking Programs, Computed Results and Improving Solutions Iteratively
  • 8.2. Regression Model Building
  • 8.2.1. All Possible Regressions
  • 8.2.2. Stepwise Regression
  • 8.2.3. Other Methods
  • 8.2.4. A Special Case--Polynomial Models
  • 8.3. Multiple Regression Under Linear Restrictions
  • 8.3.1. Linear Equality Restrictions
  • 8.3.2. Linear Inequality Restrictions
  • Exercises
  • References and Further Readings
  • 9. Computational Methods for Classification Models
  • 9.1. Introduction
  • 9.1.1. Fixed-Effects Models
  • 9.1.2. Restrictions on Models and Constraints on Solutions
  • 9.1.3. Reductions in Sums of Squares
  • 9.1.4. An Example
  • 9.2. The Special Case of Balance and Completeness for Fixed-Effects Models
  • 9.2.1. Basic Definitions and Considerations
  • 9.2.2. Computer-Related Considerations in the Special Case
  • 9.2.3. Analysis of Covariance
  • 9.3. The General Problem for Fixed-Effects Models
  • 9.3.1. Estimable Functions
  • 9.3.2. Selection Criterion 1
  • 9.3.3. Selection Criterion 2
  • 9.3.4. Summary
  • 9.4. Computing Expected Mean Squares and Estimates of Variance Components
  • 9.4.1. Computing Expected Mean Squares
  • 9.4.2. Variance Component Estimation
  • Exercises
  • References and Further Readings
  • 10. Unconstrained Optimization and Nonlinear Regression
  • 10.1. Preliminaries
  • 10.1.1. Iteration
  • 10.1.2. Function Minima
  • 10.1.3. Step Direction
  • 10.1.4. Step Size
  • 10.1.5. Convergence of the Iterative Methods
  • 10.1.6. Termination of Iteration
  • 10.2. Methods for Unconstrained Minimization
  • 10.2.1. Method of Steepest Descent
  • 10.2.2. Newton's Method and Some Modifications
  • 10.2.3. Quasi-Newton Methods
  • 10.2.4. Conjugate Gradient Method
  • 10.2.5. Conjugate Direction Method
  • 10.2.6. Other Derivative-Free Methods
  • 10.3. Computational Methods in Nonlinear Regression
  • 10.3.1. Newton's Method for the Nonlinear Regression Problem
  • 10.3.2. The Modified Gauss-Newton Method
  • 10.3.3. The Levenberg-Marquardt Modification of Gauss-Newton
  • 10.3.4. Alternative Gradient Methods
  • 10.3.5. Minimization Without Derivatives
  • 10.3.6. Summary
  • 10.4. Test Problems
  • Exercises
  • References and Further Readings
  • 11. Model Fitting Based on Criteria Other Than Least Squares
  • 11.1. Introduction
  • 11.2. Minimum Lₚ Norm Estimators
  • 11.2.1. L₁ Estimation
  • 11.2.2. L∞ Estimation
  • 11.2.3. Other Lₚ Estimators
  • 11.3. Other Robust Estimators
  • 11.4. Biased Estimation
  • 11.5. Robust Nonlinear Regression
  • Exercises
  • References and Further Readings
  • 12. Selected Multivariate Methods
  • 12.1. Introduction
  • 12.2. Canonical Correlations
  • 12.3. Principal Components
  • 12.4. Factor Analysis
  • 12.5. Multivariate Analysis of Variance
  • Exercises
  • References and Further Readings
  • Index