Finite-dimensional linear algebra / Mark S. Gockenbach.

Bibliographic Details
Author / Creator: Gockenbach, Mark S.
Imprint: Boca Raton, FL : CRC Press, c2010.
Description: xxi, 650 p. : ill. ; 25 cm.
Language: English
Series: Discrete mathematics and its applications
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/8137691
ISBN: 9781439815632 (hardcover : alk. paper)
1439815631 (hardcover : alk. paper)
Notes:"A Chapman & Hall book."
Includes bibliographical references and index.
Table of Contents:
  • Preface
  • About the author
  • 1. Some problems posed on vector spaces
  • 1.1. Linear equations
  • 1.1.1. Systems of linear algebraic equations
  • 1.1.2. Linear ordinary differential equations
  • 1.1.3. Some interpretation: The structure of the solution set to a linear equation
  • 1.1.4. Finite fields and applications in discrete mathematics
  • 1.2. Best approximation
  • 1.2.1. Overdetermined linear systems
  • 1.2.2. Best approximation by a polynomial
  • 1.3. Diagonalization
  • 1.4. Summary
  • 2. Fields and vector spaces
  • 2.1. Fields
  • 2.1.1. Definition and examples
  • 2.1.2. Basic properties of fields
  • 2.2. Vector spaces
  • 2.2.1. Examples of vector spaces
  • 2.3. Subspaces
  • 2.4. Linear combinations and spanning sets
  • 2.5. Linear independence
  • 2.6. Basis and dimension
  • 2.7. Properties of bases
  • 2.8. Polynomial interpolation and the Lagrange basis
  • 2.8.1. Secret sharing
  • 2.9. Continuous piecewise polynomial functions
  • 2.9.1. Continuous piecewise linear functions
  • 2.9.2. Continuous piecewise quadratic functions
  • 2.9.3. Error in polynomial interpolation
  • 3. Linear operators
  • 3.1. Linear operators
  • 3.1.1. Matrix operators
  • 3.2. More properties of linear operators
  • 3.2.1. Vector spaces of operators
  • 3.2.2. The matrix of a linear operator on Euclidean spaces
  • 3.2.3. Derivative and differential operators
  • 3.2.4. Representing spanning sets and bases using matrices
  • 3.2.5. The transpose of a matrix
  • 3.3. Isomorphic vector spaces
  • 3.3.1. Injective and surjective functions; inverses
  • 3.3.2. The matrix of a linear operator on general vector spaces
  • 3.4. Linear operator equations
  • 3.4.1. Homogeneous linear equations
  • 3.4.2. Inhomogeneous linear equations
  • 3.4.3. General solutions
  • 3.5. Existence and uniqueness of solutions
  • 3.5.1. The kernel of a linear operator and injectivity
  • 3.5.2. The rank of a linear operator and surjectivity
  • 3.5.3. Existence and uniqueness
  • 3.6. The fundamental theorem; inverse operators
  • 3.6.1. The inverse of a linear operator
  • 3.6.2. The inverse of a matrix
  • 3.7. Gaussian elimination
  • 3.7.1. Computing A⁻¹
  • 3.7.2. Fields other than R
  • 3.8. Newton's method
  • 3.9. Linear ordinary differential equations
  • 3.9.1. The dimension of ker(L)
  • 3.9.2. Finding a basis for ker(L)
  • 3.9.2.1. The easy case: Distinct real roots
  • 3.9.2.2. The case of repeated real roots
  • 3.9.2.3. The case of complex roots
  • 3.9.3. The Wronskian test for linear independence
  • 3.9.4. The Vandermonde matrix
  • 3.10. Graph theory
  • 3.10.1. The incidence matrix of a graph
  • 3.10.2. Walks and matrix multiplication
  • 3.10.3. Graph isomorphisms
  • 3.11. Coding theory
  • 3.11.1. Generator matrices; encoding and decoding
  • 3.11.2. Error correction
  • 3.11.3. The probability of errors
  • 3.12. Linear programming
  • 3.12.1. Specification of linear programming problems
  • 3.12.2. Basic theory
  • 3.12.3. The simplex method
  • 3.12.3.1. Finding an initial BFS
  • 3.12.3.2. Unbounded LPs
  • 3.12.3.3. Degeneracy and cycling
  • 3.12.4. Variations on the standard LPs
  • 4. Determinants and eigenvalues
  • 4.1. The determinant function
  • 4.1.1. Permutations
  • 4.1.2. The complete expansion of the determinant
  • 4.2. Further properties of the determinant function
  • 4.3. Practical computation of det(A)
  • 4.3.1. A recursive formula for det(A)
  • 4.3.2. Cramer's rule
  • 4.4. A note about polynomials
  • 4.5. Eigenvalues and the characteristic polynomial
  • 4.5.1. Eigenvalues of a real matrix
  • 4.6. Diagonalization
  • 4.7. Eigenvalues of linear operators
  • 4.8. Systems of linear ODEs
  • 4.8.1. Complex eigenvalues
  • 4.8.2. Solving the initial value problem
  • 4.8.3. Linear systems in matrix form
  • 4.9. Integer programming
  • 4.9.1. Totally unimodular matrices
  • 4.9.2. Transportation problems
  • 5. The Jordan canonical form
  • 5.1. Invariant subspaces
  • 5.1.1. Direct sums
  • 5.1.2. Eigenspaces and generalized eigenspaces
  • 5.2. Generalized eigenspaces
  • 5.2.1. Appendix: Beyond generalized eigenspaces
  • 5.2.2. The Cayley-Hamilton theorem
  • 5.3. Nilpotent operators
  • 5.4. The Jordan canonical form of a matrix
  • 5.5. The matrix exponential
  • 5.5.1. Definition of the matrix exponential
  • 5.5.2. Computing the matrix exponential
  • 5.6. Graphs and eigenvalues
  • 5.6.1. Cospectral graphs
  • 5.6.2. Bipartite graphs and eigenvalues
  • 5.6.3. Regular graphs
  • 5.6.4. Distinct eigenvalues of a graph
  • 6. Orthogonality and best approximation
  • 6.1. Norms and inner products
  • 6.1.1. Examples of norms and inner products
  • 6.2. The adjoint of a linear operator
  • 6.2.1. The adjoint of a linear operator
  • 6.3. Orthogonal vectors and bases
  • 6.3.1. Orthogonal bases
  • 6.4. The projection theorem
  • 6.4.1. Overdetermined linear systems
  • 6.5. The Gram-Schmidt process
  • 6.5.1. Least-squares polynomial approximation
  • 6.6. Orthogonal complements
  • 6.6.1. The fundamental theorem of linear algebra revisited
  • 6.7. Complex inner product spaces
  • 6.7.1. Examples of complex inner product spaces
  • 6.7.2. Orthogonality in complex inner product spaces
  • 6.7.3. The adjoint of a linear operator
  • 6.8. More on polynomial approximation
  • 6.8.1. A weighted L² inner product
  • 6.9. The energy inner product and Galerkin's method
  • 6.9.1. Piecewise polynomials
  • 6.9.2. Continuous piecewise quadratic functions
  • 6.9.3. Higher degree finite element spaces
  • 6.10. Gaussian quadrature
  • 6.10.1. The trapezoidal rule and Simpson's rule
  • 6.10.2. Gaussian quadrature
  • 6.10.3. Orthogonal polynomials
  • 6.10.4. Weighted Gaussian quadrature
  • 6.11. The Helmholtz decomposition
  • 6.11.1. The divergence theorem
  • 6.11.2. Stokes's theorem
  • 6.11.3. The Helmholtz decomposition
  • 7. The spectral theory of symmetric matrices
  • 7.1. The spectral theorem for symmetric matrices
  • 7.1.1. Symmetric positive definite matrices
  • 7.1.2. Hermitian matrices
  • 7.2. The spectral theorem for normal matrices
  • 7.2.1. Outer products and the spectral decomposition
  • 7.3. Optimization and the Hessian matrix
  • 7.3.1. Background
  • 7.3.2. Optimization of quadratic functions
  • 7.3.3. Taylor's theorem
  • 7.3.4. First- and second-order optimality conditions
  • 7.3.5. Local quadratic approximations
  • 7.4. Lagrange multipliers
  • 7.5. Spectral methods for differential equations
  • 7.5.1. Eigenpairs of the differential operator
  • 7.5.2. Solving the BVP using eigenfunctions
  • 8. The singular value decomposition
  • 8.1. Introduction to the SVD
  • 8.1.1. The SVD for singular matrices
  • 8.2. The SVD for general matrices
  • 8.3. Solving least-squares problems using the SVD
  • 8.4. The SVD and linear inverse problems
  • 8.4.1. Resolving inverse problems through regularization
  • 8.4.2. The truncated SVD method
  • 8.4.3. Tikhonov regularization
  • 8.5. The Smith normal form of a matrix
  • 8.5.1. An algorithm to compute the Smith normal form
  • 8.5.2. Applications of the Smith normal form
  • 9. Matrix factorizations and numerical linear algebra
  • 9.1. The LU factorization
  • 9.1.1. Operation counts
  • 9.1.2. Solving Ax=b using the LU factorization
  • 9.2. Partial pivoting
  • 9.2.1. Finite-precision arithmetic
  • 9.2.2. Examples of errors in Gaussian elimination
  • 9.2.3. Partial pivoting
  • 9.2.4. The PLU factorization
  • 9.3. The Cholesky factorization
  • 9.4. Matrix norms
  • 9.4.1. Examples of induced matrix norms
  • 9.5. The sensitivity of linear systems to errors
  • 9.6. Numerical stability
  • 9.6.1. Backward error analysis
  • 9.6.2. Analysis of Gaussian elimination with partial pivoting
  • 9.7. The sensitivity of the least-squares problem
  • 9.8. The QR factorization
  • 9.8.1. Solving the least-squares problem
  • 9.8.2. Computing the QR factorization
  • 9.8.3. Backward stability of the Householder QR algorithm
  • 9.8.4. Solving a linear system
  • 9.9. Eigenvalues and simultaneous iteration
  • 9.9.1. Reduction to triangular form
  • 9.9.2. The power method
  • 9.9.3. Simultaneous iteration
  • 9.10. The QR algorithm
  • 9.10.1. A practical QR algorithm
  • 9.10.1.1. Reduction to upper Hessenberg form
  • 9.10.1.2. The explicitly shifted QR algorithm
  • 9.10.1.3. The implicitly shifted QR algorithm
  • 10. Analysis in vector spaces
  • 10.1. Analysis in Rⁿ
  • 10.1.1. Convergence and continuity in Rⁿ
  • 10.1.2. Compactness
  • 10.1.3. Completeness of Rⁿ
  • 10.1.4. Equivalence of norms on Rⁿ
  • 10.2. Infinite-dimensional vector spaces
  • 10.2.1. Banach and Hilbert spaces
  • 10.3. Functional analysis
  • 10.3.1. The dual of a Hilbert space
  • 10.4. Weak convergence
  • 10.4.1. Convexity
  • A. The Euclidean algorithm
  • A.0.1. Computing multiplicative inverses in Zₚ
  • A.0.2. Related results
  • B. Permutations
  • C. Polynomials
  • C.1. Rings of polynomials
  • C.2. Polynomial functions
  • C.2.1. Factorization of polynomials
  • D. Summary of analysis in R
  • D.0.1. Convergence
  • D.0.2. Completeness of R
  • D.0.3. Open and closed sets
  • D.0.4. Continuous functions
  • Bibliography
  • Index