Software testing foundations : a study guide for the certified tester exam

Bibliographic Details
Author / Creator: Spillner, Andreas.
Edition:4th ed.
Imprint: Santa Barbara, CA : Rocky Nook, ©2014.
Description: 1 online resource (1 volume) : illustrations
Language: English
Format: E-Resource Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/13621394
Other authors / contributors: Linz, Tilo.
Schaefer, H. (Hans)
ISBN: 1937538427
9781937538422
9781492001492
149200149X
Notes: Includes bibliographical references and index.
Print version record.
Summary: Teaching the most important methods of software testing, this book is designed for self-study and provides the information necessary to pass the Certified Tester Foundation Level exam, version 2011, as defined by the ISTQB.
Other form: Print version: Spillner, Andreas. Software testing foundations. Fourth edition. Santa Barbara, CA : Rocky Nook, ©2014. ISBN 9781937538422
Table of Contents:
  • 1. Introduction
  • 2. Fundamentals of Testing
  • 2.1. Terms and Motivation
  • 2.1.1. Error, Defect, and Bug Terminology
  • 2.1.2. Testing Terms
  • 2.1.3. Software Quality
  • 2.1.4. Test Effort
  • 2.2. The Fundamental Test Process
  • 2.2.1. Test Planning and Control
  • 2.2.2. Test Analysis and Design
  • 2.2.3. Test Implementation and Execution
  • 2.2.4. Test Evaluation and Reporting
  • 2.2.5. Test Closure Activities
  • 2.3. The Psychology of Testing
  • 2.4. General Principles of Testing
  • 2.5. Ethical Guidelines
  • 2.6. Summary
  • 3. Testing in the Software Life Cycle
  • 3.1. The General V-Model
  • 3.2. Component Test
  • 3.2.1. Explanation of Terms
  • 3.2.2. Test Objects
  • 3.2.3. Test Environment
  • 3.2.4. Test Objectives
  • 3.2.5. Test Strategy
  • 3.3. Integration Test
  • 3.3.1. Explanation of Terms
  • 3.3.2. Test Objects
  • 3.3.3. The Test Environment
  • 3.3.4. Test Objectives
  • 3.3.5. Integration Strategies
  • 3.4. System Test
  • 3.4.1. Explanation of Terms
  • 3.4.2. Test Objects and Test Environment
  • 3.4.3. Test Objectives
  • 3.4.4. Problems in System Test Practice
  • 3.5. Acceptance Test
  • 3.5.1. Contract Acceptance Testing
  • 3.5.2. Testing for User Acceptance
  • 3.5.3. Operational (Acceptance) Testing
  • 3.5.4. Field Testing
  • 3.6. Testing New Product Versions
  • 3.6.1. Software Maintenance
  • 3.6.2. Testing after Further Development
  • 3.6.3. Testing in Incremental Development
  • 3.7. Generic Types of Testing
  • 3.7.1. Functional Testing
  • 3.7.2. Nonfunctional Testing
  • 3.7.3. Testing of Software Structure
  • 3.7.4. Testing Related to Changes and Regression Testing
  • 3.8. Summary
  • 4. Static Test
  • 4.1. Structured Group Evaluations
  • 4.1.1. Foundations
  • 4.1.2. Reviews
  • 4.1.3. The General Process
  • 4.1.4. Roles and Responsibilities
  • 4.1.5. Types of Reviews
  • 4.2. Static Analysis
  • 4.2.1. The Compiler as a Static Analysis Tool
  • 4.2.2. Examination of Compliance to Conventions and Standards
  • 4.2.3. Execution of Data Flow Analysis
  • 4.2.4. Execution of Control Flow Analysis
  • 4.2.5. Determining Metrics
  • 4.3. Summary
  • 5. Dynamic Analysis - Test Design Techniques
  • 5.1. Black Box Testing Techniques
  • 5.1.1. Equivalence Class Partitioning
  • 5.1.2. Boundary Value Analysis
  • 5.1.3. State Transition Testing
  • 5.1.4. Logic-Based Techniques (Cause-Effect Graphing and Decision Table Technique, Pairwise Testing)
  • 5.1.5. Use-Case-Based Testing
  • 5.1.6. General Discussion of the Black Box Technique
  • 5.2. White Box Testing Techniques
  • 5.2.1. Statement Testing and Coverage
  • 5.2.2. Decision/Branch Testing and Coverage
  • 5.2.3. Test of Conditions
  • 5.2.4. Further White Box Techniques
  • 5.2.5. General Discussion of the White Box Technique
  • 5.2.6. Instrumentation and Tool Support
  • 5.3. Intuitive and Experience-Based Test Case Determination
  • 5.4. Summary
  • 6. Test Management
  • 6.1. Test Organization
  • 6.1.1. Test Teams
  • 6.1.2. Tasks and Qualifications
  • 6.2. Planning
  • 6.2.1. Quality Assurance Plan
  • 6.2.2. Test Plan
  • 6.2.3. Prioritizing Tests
  • 6.2.4. Test Entry and Exit Criteria
  • 6.3. Cost and Economy Aspects
  • 6.3.1. Costs of Defects
  • 6.3.2. Cost of Testing
  • 6.3.3. Test Effort Estimation
  • 6.4. Choosing the Test Strategy and Test Approach
  • 6.4.1. Preventative vs. Reactive Approach
  • 6.4.2. Analytical vs. Heuristic Approach
  • 6.4.3. Testing and Risk
  • 6.5. Managing the Test Work
  • 6.5.1. Test Cycle Planning
  • 6.5.2. Test Cycle Monitoring
  • 6.5.3. Test Cycle Control
  • 6.6. Incident Management
  • 6.6.1. Test Log
  • 6.6.2. Incident Reporting
  • 6.6.3. Defect Classification
  • 6.6.4. Incident Status
  • 6.7. Requirements for Configuration Management
  • 6.8. Relevant Standards
  • 6.9. Summary
  • 7. Test Tools
  • 7.1. Types of Test Tools
  • 7.1.1. Tools for Management and Control of Testing and Tests
  • 7.1.2. Tools for Test Specification
  • 7.1.3. Tools for Static Testing
  • 7.1.4. Tools for Dynamic Testing
  • 7.1.5. Tools for Nonfunctional Test
  • 7.2. Selection and Introduction of Test Tools
  • 7.2.1. Cost Effectiveness of Tool Introduction
  • 7.2.2. Tool Selection
  • 7.2.3. Tool Introduction
  • 7.3. Summary
  • Appendix
  • A. Test Plans According to IEEE Standard 829-1998
  • Test Plans According to IEEE Standard 829-2008
  • B. Important Information about the Syllabus and the Certified Tester Exam
  • C. Exercises
  • Glossary
  • Literature
  • Index