Risk assessment and decision analysis with Bayesian networks

Bibliographic Details
Author / Creator: Fenton, Norman E., 1956-
Imprint: Boca Raton : Taylor & Francis, 2012.
Description: xix, 503 p. : ill. ; 27 cm
Language: English
Subject:
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/9141641
Other authors / contributors: Neil, Martin (Martin D.)
ISBN: 9781439809105 (hardcover : alk. paper)
1439809100 (hardcover : alk. paper)
Notes: "A CRC title."
Includes bibliographical references and index.
Table of Contents:
  • Foreword
  • Preface
  • Acknowledgments
  • Authors
  • Chapter 1. There Is More to Assessing Risk Than Statistics
  • 1.1. Introduction
  • 1.2. Predicting Economic Growth: The Normal Distribution and Its Limitations
  • 1.3. Patterns and Randomness: From School League Tables to Siegfried and Roy
  • 1.4. Dubious Relationships: Why You Should Be Very Wary of Correlations and Their Significance Values
  • 1.5. Spurious Correlations: How You Can Always Find a Silly 'Cause' of Exam Success
  • 1.6. The Danger of Regression: Looking Back When You Need to Look Forward
  • 1.7. The Danger of Averages
  • 1.7.1. What Type of Average?
  • 1.7.2. When Averages Alone Will Never Be Sufficient for Decision Making
  • 1.8. When Simpson's Paradox Becomes More Worrisome
  • 1.9. Uncertain Information and Incomplete Information: Do Not Assume They Are Different
  • 1.10. Do Not Trust Anybody (Even Experts) to Properly Reason about Probabilities
  • 1.11. Chapter Summary
  • Further Reading
  • Chapter 2. The Need for Causal, Explanatory Models in Risk Assessment
  • 2.1. Introduction
  • 2.2. Are You More Likely to Die in an Automobile Crash When the Weather Is Good Compared to Bad?
  • 2.3. When Ideology and Causation Collide
  • 2.4. The Limitations of Common Approaches to Risk Assessment
  • 2.4.1. Measuring Armageddon and Other Risks
  • 2.4.2. Risks and Opportunities
  • 2.4.3. Risk Registers and Heat Maps
  • 2.5. Thinking about Risk Using Causal Analysis
  • 2.6. Applying the Causal Framework to Armageddon
  • 2.7. Summary
  • Further Reading
  • Chapter 3. Measuring Uncertainty: The Inevitability of Subjectivity
  • 3.1. Introduction
  • 3.2. Experiments, Outcomes, and Events
  • 3.2.1. Multiple Experiments
  • 3.2.2. Joint Experiments
  • 3.2.3. Joint Events and Marginalization
  • 3.3. Frequentist versus Subjective View of Uncertainty
  • 3.4. Summary
  • Further Reading
  • Chapter 4. The Basics of Probability
  • 4.1. Introduction
  • 4.2. Some Observations Leading to Axioms and Theorems of Probability
  • 4.3. Probability Distributions
  • 4.3.1. Probability Distributions with Infinite Outcomes
  • 4.3.2. Joint Probability Distributions and Probability of Marginalized Events
  • 4.3.3. Dealing with More than Two Variables
  • 4.4. Independent Events and Conditional Probability
  • 4.5. Binomial Distribution
  • 4.6. Using Simple Probability Theory to Solve Earlier Problems and Explain Widespread Misunderstandings
  • 4.6.1. The Birthday Problem
  • 4.6.2. The Monty Hall Problem
  • 4.6.3. When Incredible Events Are Really Mundane
  • 4.6.4. When Mundane Events Really Are Quite Incredible
  • 4.7. Summary
  • Further Reading
  • Chapter 5. Bayes' Theorem and Conditional Probability
  • 5.1. Introduction
  • 5.2. All Probabilities Are Conditional
  • 5.3. Bayes' Theorem
  • 5.4. Using Bayes' Theorem to Debunk Some Probability Fallacies
  • 5.4.1. Traditional Statistical Hypothesis Testing
  • 5.4.2. The Prosecutor Fallacy Revisited
  • 5.4.3. The Defendant's Fallacy
  • 5.4.4. Odds Form of Bayes and the Likelihood Ratio
  • 5.5. Second-Order Probability
  • 5.6. Summary
  • Further Reading
  • Chapter 6. From Bayes' Theorem to Bayesian Networks
  • 6.1. Introduction
  • 6.2. A Very Simple Risk Assessment Problem
  • 6.3. Accounting for Multiple Causes (and Effects)
  • 6.4. Using Propagation to Make Special Types of Reasoning Possible
  • 6.5. The Crucial Independence Assumptions
  • 6.6. Structural Properties of BNs
  • 6.6.1. Serial Connection: Causal and Evidential Trails
  • 6.6.2. Diverging Connection: Common Cause
  • 6.6.3. Converging Connection: Common Effect
  • 6.6.4. Determining Whether Any Two Nodes in a BN Are Dependent
  • 6.7. Propagation in Bayesian Networks
  • 6.8. Using BNs to Explain Apparent Paradoxes
  • 6.8.1. Revisiting the Monty Hall Problem
  • 6.8.1.1. Simple Solution
  • 6.8.1.2. Complex Solution
  • 6.8.2. Revisiting Simpson's Paradox
  • 6.9. Steps in Building and Running a BN Model
  • 6.9.1. Building a BN Model
  • 6.9.2. Running a BN Model
  • 6.9.3. Inconsistent Evidence
  • 6.10. Summary
  • Further Reading
  • Theoretical Underpinnings
  • BN Applications
  • Nature and Theory of Causality
  • Uncertain Evidence (Soft and Virtual)
  • Chapter 7. Defining the Structure of Bayesian Networks
  • 7.1. Introduction
  • 7.2. Causal Inference and Choosing the Correct Edge Direction
  • 7.3. The Idioms
  • 7.3.1. The Cause-Consequence Idiom
  • 7.3.2. Measurement Idiom
  • 7.3.3. Definitional/Synthesis Idiom
  • 7.3.3.1. Case 1: Definitional Relationship between Variables
  • 7.3.3.2. Case 2: Hierarchical Definitions
  • 7.3.3.3. Case 3: Combining Different Nodes Together to Reduce Effects of Combinatorial Explosion ("Divorcing")
  • 7.3.4. Induction Idiom
  • 7.4. The Problems of Asymmetry and How to Tackle Them
  • 7.4.1. Impossible Paths
  • 7.4.2. Mutually Exclusive Paths
  • 7.4.3. Distinct Causal Pathways
  • 7.4.4. Taxonomic Classification
  • 7.5. Multiobject Bayesian Network Models
  • 7.6. The Missing Variable Fallacy
  • 7.7. Conclusions
  • Further Reading
  • Chapter 8. Building and Eliciting Node Probability Tables
  • 8.1. Introduction
  • 8.2. Factorial Growth in the Size of Probability Tables
  • 8.3. Labeled Nodes and Comparative Expressions
  • 8.4. Boolean Nodes and Functions
  • 8.4.1. The Asia Model
  • 8.4.2. The OR Function for Boolean Nodes
  • 8.4.3. The AND Function for Boolean Nodes
  • 8.4.4. M from N Operator
  • 8.4.5. NoisyOR Function for Boolean Nodes
  • 8.4.6. Weighted Averages
  • 8.5. Ranked Nodes
  • 8.5.1. Background
  • 8.5.2. Solution: Ranked Nodes with the TNormal Distribution
  • 8.5.3. Alternative Weighted Functions for Ranked Nodes
  • 8.5.4. Hints and Tips When Working with Ranked Nodes and NPTs
  • 8.5.4.1. Tip 1: Use the Weighted Functions as Far as Possible
  • 8.5.4.2. Tip 2: Make Use of the Fact That a Ranked Node Parent Has an Underlying Numerical Scale
  • 8.5.4.3. Tip 3: Do Not Forget the Importance of the Variance in the TNormal Distribution
  • 8.5.4.4. Tip 4: Change the Granularity of a Ranked Scale without Having to Make Any Other Changes
  • 8.5.4.5. Tip 5: Do Not Create Large, Deep Hierarchies Consisting of Ranked Nodes
  • 8.6. Elicitation
  • 8.6.1. Elicitation Protocols and Cognitive Biases
  • 8.6.2. Scoring Rules and Validation
  • 8.6.3. Sensitivity Analysis
  • 8.7. Summary
  • Further Reading
  • Chapter 9. Numeric Variables and Continuous Distribution Functions
  • 9.1. Introduction
  • 9.2. Some Theory on Functions and Continuous Distributions
  • 9.3. Static Discretization
  • 9.4. Dynamic Discretization
  • 9.5. Using Dynamic Discretization
  • 9.5.1. Prediction Using Dynamic Discretization
  • 9.5.2. Conditioning on Discrete Evidence
  • 9.5.3. Parameter Learning (Induction) Using Dynamic Discretization
  • 9.5.3.1. Classical versus Bayesian Modeling
  • 9.5.3.2. Bayesian Hierarchical Model Using Beta-Binomial
  • 9.6. Avoiding Common Problems When Using Numeric Nodes
  • 9.6.1. Unintentional Negative Values in a Node's State Range
  • 9.6.2. Potential Division by Zero
  • 9.6.3. Using Unbounded Distributions on a Bounded Range
  • 9.6.4. Observations with Very Low Probability
  • 9.7. Summary
  • Further Reading
  • Chapter 10. Hypothesis Testing and Confidence Intervals
  • 10.1. Introduction
  • 10.2. Hypothesis Testing
  • 10.2.1. Bayes Factors
  • 10.2.2. Testing for Hypothetical Differences
  • 10.2.3. Comparing Bayesian and Classical Hypothesis Testing
  • 10.2.4. Model Comparison: Choosing the Best Predictive Model
  • 10.2.5. Accommodating Expert Judgments about Hypotheses
  • 10.2.6. Distribution Fitting as Hypothesis Testing
  • 10.2.7. Bayesian Model Comparison and Complex Causal Hypotheses
  • 10.3. Confidence Intervals
  • 10.3.1. The Fallacy of Frequentist Confidence Intervals
  • 10.3.2. The Bayesian Alternative to Confidence Intervals
  • 10.4. Summary
  • Further Reading
  • Chapter 11. Modeling Operational Risk
  • 11.1. Introduction
  • 11.2. The Swiss Cheese Model for Rare Catastrophic Events
  • 11.3. Bow Ties and Hazards
  • 11.4. Fault Tree Analysis (FTA)
  • 11.5. Event Tree Analysis (ETA)
  • 11.6. Soft Systems, Causal Models, and Risk Arguments
  • 11.7. KUUUB Factors
  • 11.8. Operational Risk in Finance
  • 11.8.1. Modeling the Operational Loss Generation Process
  • 11.8.2. Scenarios and Stress Testing
  • 11.9. Summary
  • Further Reading
  • Chapter 12. Systems Reliability Modeling
  • 12.1. Introduction
  • 12.2. Probability of Failure on Demand for Discrete Use Systems
  • 12.3. Time to Failure for Continuous Use Systems
  • 12.4. System Failure Diagnosis and Dynamic Bayesian Networks
  • 12.5. Dynamic Fault Trees (DFTs)
  • 12.6. Software Defect Prediction
  • 12.7. Summary
  • Further Reading
  • Chapter 13. Bayes and the Law
  • 13.1. Introduction
  • 13.2. The Case for Bayesian Reasoning about Legal Evidence
  • 13.3. Building Legal Arguments Using Idioms
  • 13.3.1. The Evidence Idiom
  • 13.3.2. The Evidence Accuracy Idiom
  • 13.3.3. Idioms to Deal with the Key Notions of "Motive" and "Opportunity"
  • 13.3.4. Idiom for Modeling Dependency between Different Pieces of Evidence
  • 13.3.5. Alibi Evidence Idiom
  • 13.3.6. Explaining Away Idiom
  • 13.4. Putting It All Together: Vole Example
  • 13.5. Using BNs to Expose Further Fallacies of Legal Reasoning
  • 13.5.1. The Jury Observation Fallacy
  • 13.5.2. The "Crimewatch UK" Fallacy
  • 13.6. Summary
  • Further Reading
  • Appendix A. The Basics of Counting
  • Appendix B. The Algebra of Node Probability Tables
  • Appendix C. Junction Tree Algorithm
  • Appendix D. Dynamic Discretization
  • Appendix E. Statistical Distributions
  • Index