Testing Safety-Related Software: A Practical Handbook
Title:
Testing Safety-Related Software: A Practical Handbook
ISBN:
9781447132776
Edition:
1st ed. 1999.
Publication Information:
London : Springer London : Imprint: Springer, 1999.
Physical Description:
IX, 226 p. online resource.
Contents:
1 Introduction -- 1.1 Context -- 1.2 Audience -- 1.3 Structure -- 1.4 Applicable Systems -- 1.5 Integrity Levels -- 1.6 Typical Architectures -- 1.7 The Safety Lifecycle and the Safety Case -- 1.8 Testing Issues across the Development Lifecycle -- 1.9 Tool Support -- 1.10 Current Industrial Practice -- 1.11 The Significance Placed upon Testing by Standards and Guidelines -- 1.12 Guidance -- 2 Testing and the Safety Case -- 2.1 Introduction -- 2.2 Safety and Risk Assessment -- 2.3 Hazard Analysis -- 2.4 The System Safety Case -- 2.5 Lifecycle Issues -- 2.6 Guidance -- 3 Designing for Testability -- 3.1 Introduction -- 3.2 Architectural Considerations -- 3.3 PES Interface Considerations -- 3.4 Implementation Options and Testing Attributes -- 3.5 Software Features -- 3.6 Guidance -- 4 Testing of Timing Aspects -- 4.1 Introduction -- 4.2 Correctness of Timing Requirements -- 4.3 Scheduling Issues -- 4.4 Scheduling Strategies -- 4.5 Calculating Worst Case Execution Times -- 4.6 Guidance -- 5 The Test Environment -- 5.1 Introduction -- 5.2 Test Activities Related to the Development of a Safety Case -- 5.3 A Generic Test Toolset -- 5.4 Safety and Quality Requirements for Test Tools -- 5.5 Statemate -- 5.6 Requirements and Traceability Management (RTM) -- 5.7 AdaTEST -- 5.8 Integrated Tool Support -- 5.9 Tool Selection Criteria -- 5.10 Guidance -- 6 The Use of Simulators -- 6.1 Introduction -- 6.2 Types of Environment Simulators -- 6.3 Use of Software Environment Simulation in Testing Safety-Related Systems -- 6.4 Environment Simulation Accuracy and its Assessment Based on the Set Theory Model -- 6.5 Justification of Safety from Environment Simulation -- 6.6 Guidance -- 7 Test Adequacy -- 7.1 Introduction -- 7.2 The Notion of Test Adequacy -- 7.3 The Role of Test Data Adequacy Criteria -- 7.4 Approaches to Measurement of Software Test Adequacy -- 7.5 The Use of Test Data Adequacy -- 7.6 Guidance -- 8 Statistical Software Testing -- 8.1 Introduction -- 8.2 Statistical Software Testing and Related Work -- 8.3 Test Adequacy and Statistical Software Testing -- 8.4 Environment Simulations in Dynamic Software Testing -- 8.5 Performing Statistical Software Testing -- 8.6 The Notion of Confidence in Statistical Software Testing -- 8.7 Criticisms of Statistical Software Testing -- 8.8 The Future of Statistical Software Testing -- 8.9 Guidance -- 9 Empirical Quantifiable Measures of Testing -- 9.1 Introduction -- 9.2 Test Cost Assessment -- 9.3 Test Regime Assessment -- 9.4 Discussion of Test Regime Assessment Model -- 9.5 Evidence to Support the Test Regime Assessment Model -- 9.6 Guidance -- References -- Appendix A Summary of Advice from the Standards.
Abstract:
As software is very complex, we can only test a limited range of the possible states of the software in a reasonable time frame. In 1972, Dijkstra [1] claimed that 'program testing can be used to show the presence of bugs, but never their absence' to persuade us that a testing approach alone is not acceptable. This frequently quoted statement represented our knowledge about software testing at that time, and after over 25 years of intensive practice, experiment and research, although software testing has been developed into a validation and verification technique indispensable to the software engineering discipline, Dijkstra's statement is still valid. To gain confidence in the safety of software-based systems we must therefore assess both the product and the process of its development. Testing is one of the main ways of assessing the product, but it must be seen, together with process assessment, in the context of an overall safety case. This book provides guidance on how to make best use of the limited resources available for testing and to maximise the contribution that testing of the product makes to the safety case.

1.1 Context

The safety assurance of software-based systems is a complex task, as most failures stem from design errors committed by humans. To provide safety assurance, evidence needs to be gathered on the integrity of the system and put forward as an argued case (the safety case) that the system is adequately safe.
Language:
English