Software Lifecycle Model Selection
Criteria for Safety-critical Software

Robert William Boyd

18th September 2009

Project report submitted for the degree of Master of Science in Safety Critical Systems Engineering
in the Department of Computer Science at the University of York.

Number of words = 44179, as counted by the MS Word word count command.
Abstract

Software lifecycle models are used as tools for planning and monitoring software projects.
Numerous models have been proposed. Each model’s effectiveness varies with project
circumstances. It is widely acknowledged that no single model is effective in all situations.
Because of this, an effective model must be selected for every project. Selecting an effective
model is an important decision. Use of an ineffective model can adversely affect project speed,
cost and manageability. It can also reduce software quality. For safety-critical software, poor
software quality can adversely affect safety and the ability to provide assurance of safety. It is
therefore important to identify an effective software lifecycle model for a safety-critical software
project.

This report identifies a set of criteria that can be used to select an effective software lifecycle model
for a safety-critical software project. Applicable software lifecycle model selection criteria for non-
safety-critical software are included, along with criteria specific to safety-critical software. The
criteria specific to safety-critical software have been systematically identified. The applicability of
each of the proposed criteria to safety-critical software is justified. The proposed set of selection
criteria is incomplete. A method for identifying a complete set of criteria is defined however.





Contents

1 Introduction.............................................................................6
2 Literature Survey.....................................................................7
2.1 Introduction...........................................................................................7
2.2 The Software Lifecycle.........................................................................7
2.2.1 Definition.................................................................................................................7
2.2.2 General Software Process Models...........................................................................8
2.2.3 Safety-critical Software Process Models................................................................10
2.2.4 Comparison of Software Process Models..............................................................11
2.2.5 Relationship with the System Lifecycle..................................................................15
2.3 Software Lifecycle Models..................................................................15
2.3.1 Definition...............................................................................................................15
2.3.2 Purpose.................................................................................................................17
2.3.3 General Software Lifecycle Models........................................................................18
2.3.4 Safety-Critical Software Lifecycle Models..............................................................32
2.3.5 Software Lifecycle Model Selection Criteria...........................................................39
2.4 Conclusions........................................................................................50
3 Proposal.................................................................................51
3.1 Scope..................................................................................................51
3.2 Method................................................................................................51
3.2.1 Overview...............................................................................................................51
3.2.2 Identifying Safety-critical Software Attributes.........................................................52
3.2.3 Identifying Selection Criteria..................................................................................53
3.3 Proposed Safety-Critical Software Attributes.....................................54
3.4 Proposed Selection Criteria................................................................59
4 Evaluation..............................................................................71
4.1 Scope..................................................................................................71
4.2 Method................................................................................................71
4.2.1 Overview...............................................................................................................71
4.2.2 Evaluating the Proposed Safety-critical Software Attributes...................................71
4.2.3 Evaluating the Proposed Selection Criteria............................................................72
4.3 Evaluation of Proposed Safety-Critical Software Attributes...............73
4.4 Evaluation of Proposed Selection Criteria..........................................87
5 Further Work..........................................................................89
6 Conclusions...........................................................................92
7 Bibliography..........................................................................94
A. ISO/IEC FDIS 12207:2007 Software Process Model..........96
B. CMMI-DEV Product or Service Process Model................101
C. IEC61508 Part 3 Software Process Model.......................103
D. DO-178B Software Process Model...................................105
E. Alexander et al Selection Criteria....................................107
F. McConnell Selection Criteria............................................110
Glossary

Abbreviation Full Name
AEL Assurance Evidence Level
ALARP As Low As Reasonably Practicable
COTS Commercial Off-the-Shelf
E/E/PE Electrical/Electronic/Programmable Electronic
E/E/PES Electrical/Electronic/Programmable Electronic System
FMEA Failure Modes and Effects Analysis
FTA Fault Tree Analysis
GSN Goal Structuring Notation
HAZOP Hazard and Operability Studies
HSWA Health and Safety at Work Act (1974)
MC/DC Modified Condition/Decision Coverage
MOD Ministry of Defence
N/A Not Applicable
PC Personal Computer
SIL Safety Integrity Level
UK United Kingdom

Acknowledgements

I would like to thank my employer, Simulation Systems Ltd, for generously funding this course and
for providing me with time to attend at York.

I would also like to thank the staff at the University of York for their help throughout the course, and
in particular, my project supervisor, Dr Mark Nicholson, for his help and advice with this project.

Finally, I would like to thank my wife Jill for putting up with the evenings and weekends spent
studying.


1 Introduction

Software lifecycle models are representations of the sequence and interrelationship of broad
phases within the software lifecycle. Their principal purpose is to provide a high-level plan for
software lifecycle activities. They are therefore essentially management tools. The use of a
software lifecycle model on a software project is important. Without the plan it provides, it can be
difficult to effectively manage the project.

Within the field of Computer Science, a large number of software lifecycle models have been
proposed. Each model has its own strengths and weaknesses, and each is more appropriate in
certain project circumstances than others. It is generally recognised that no single software
lifecycle model is appropriate in all circumstances. Because of this, for a particular software
project, it is necessary to select a software lifecycle model that suits the project’s characteristics.
This is an important decision. The use of an inappropriate software lifecycle model can increase
project costs and timescales and reduce software quality.

It could be argued that the choice of an appropriate software lifecycle model for a safety-critical
software project is especially important, given that safety-critical software has the potential to
cause harm to people and perhaps the environment. A reduction in the quality of safety-critical
software can reduce safety. Despite this, an examination of current safety standards relevant to
software reveals little guidance on the selection of software lifecycle models. Where they present
a software lifecycle model, this is based on the classic Waterfall model or a Waterfall variant.

The preceding discussion raises an interesting point. No single software lifecycle model is
appropriate in all project circumstances. Waterfall models, while being useful in certain project
circumstances, have well-known deficiencies in others. Many of the software lifecycle models
documented in the Computer Science literature were developed to overcome these deficiencies.
Therefore, in some project circumstances, a Waterfall model will not be the most appropriate
model to use. Given this, the question arises, would some of the other software lifecycle models
be more appropriate for the development of safety-critical software in these situations? If so, how
do you identify which?

The Computer Science literature contains some guidance on the selection of an appropriate
software lifecycle model for a project. Safety-critical software is, in certain respects, different to
other types of software however. It is required to possess a range of attributes that are not always
required by non-safety-critical software. This is necessary in order to ensure, with sufficient
confidence, that the software adequately reduces safety risks. These include attributes that must
be possessed by the software itself and by its development process. Given the differences
between safety-critical software and other types of software, is it appropriate to apply the software
lifecycle model selection criteria that have been proposed for non-safety-critical software, to safety-
critical software? Do the different attributes of safety-critical software in any way alter the attributes
that must be possessed by a software lifecycle model? These are questions that are not currently
answered in the literature.

This report aims to begin to address these questions by defining a set of selection criteria for a
software lifecycle model for a safety-critical software project. Such a set of criteria can be used to
identify whether different software lifecycle models are appropriate for a particular safety-critical
software project. This will be of practical benefit to safety-critical software engineering
practitioners. In order to develop such a set of criteria, it will be necessary to identify the attributes
that safety-critical software must possess that are not required by non-safety-critical software,
and whose achievement is affected by software lifecycle model selection.
2 Literature Survey
2.1 Introduction

This section of the report is a review of published literature for information relevant to software
lifecycle model selection criteria for a safety-critical software project. This literature survey forms
the basis upon which the proposal in section 3 is made.

A number of comments are necessary regarding some of the references reviewed in this survey:

 Two of the safety standards considered in this review are currently being revised. DO-178B [34] is scheduled to be superseded by DO-178C in December 2009 [37]. IEC61508 [6] is scheduled to be reissued in early 2010 (the IEC website states that the issue date is May 2008 [23]; however, at the time of writing the new version of this standard has still not been issued. Other web pages indicate the issue date will be early 2010). The new versions of these standards may contain new information relevant to this report (DO-178B [34] and IEC61508 [6] are seventeen and eleven years old respectively, so this might be expected given advances in Computer Science). Unfortunately it has not been possible to obtain draft material for either standard. The author has been able to communicate privately with a DO-178C committee member, however;
 A draft of a new sector-specific implementation of IEC61508 [6], ISO WD 26262 “Road vehicles - Functional safety”, was released in July 2009 [42]. Unfortunately, it was not possible to obtain a copy of this standard. This again may contain new relevant information;
 A main reference for software lifecycle processes is ISO/IEC 12207:2008 “Systems and software engineering - Software life cycle processes” [40]. Unfortunately, it was not possible to obtain a copy of this standard. It was possible, however, to obtain a copy of the standard’s final draft, ISO/IEC FDIS 12207:2007 [26]. The discussion in this literature survey refers to this draft. The extent of differences between this draft and ISO/IEC 12207:2008 [40] is unknown.

2.2 The Software Lifecycle

The objective of this section is to review the literature to identify what the software lifecycle is, what
high-level activities take place in it, and how it relates to the system lifecycle. Safety-critical
software and software in general are considered. This information allows the relationship between
software lifecycle models and the real software lifecycle to be understood. Additionally, an
understanding of real software lifecycles is required to assess the merits of different software
lifecycle models.

2.2.1 Definition

The IEEE Standard Glossary of Software Engineering Terminology [19] defines a software lifecycle as:

“The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and check-out phase, operation and maintenance phase, and sometimes, retirement phase.”

Similar definitions are given in other sources [8, 28, 34, 40]. Sommerville [18] states that four fundamental activities occur in all software lifecycles. These are [18]:

 Software specification: defines the software’s functionality and any constraints on its operation;
 Software design and implementation: produces software that meets its specification;
 Software validation: validates that the software meets its specification;
 Software evolution: changes the software to satisfy changing customer needs.

Sommerville’s [18] fundamental activities are all clearly essential to produce software that dependably satisfies stakeholder needs. This will be the case for both safety-critical and non-safety-critical software. Obviously stakeholder needs must first be established. Software then needs to be constructed. It must then be validated to establish how well the stakeholder needs are satisfied. Changes in stakeholder needs over time are inevitable, hence evolution is required [4]. These activities must therefore necessarily be performed in the order stated. This does not however necessarily preclude several activities being performed concurrently.

Pressman [32] identifies a similar set of fundamental activities to Sommerville’s [18]. Sommerville’s [18] activities are also consistent with the phases described in the IEEE Standard Glossary [19] definition. Sommerville’s [18] software specification activity corresponds to the IEEE Standard Glossary’s [19] concept and requirements phases; his software design and implementation activity corresponds to the design and implementation phases; his software validation activity corresponds to the test phase; his software evolution activity corresponds to the operation and maintenance phase. The other phases described in the IEEE Standard Glossary [19] definition may or may not be required depending on the nature of the software (installation may be a trivial activity that does not warrant being defined as a separate lifecycle phase. An example is a home user installing mass-market software on a PC).

In addition to these fundamental activities, a number of processes need to occur continuously throughout the software lifecycle [32]. These are relevant to all phases of the lifecycle and support the achievement of these phases’ objectives [32]. Pressman [32] calls these processes umbrella activities. He states that the umbrella activities include [32]: project management; quality assurance; configuration management; reusability management; collecting metrics; and risk management. Clearly some of these umbrella activities, such as project management, quality assurance and configuration management, are essential for the delivery of quality software within budget and timescales. They should therefore always occur in the software lifecycle.

The activities discussed above represent the software lifecycle at a high level of abstraction. Within each activity, more detailed processes will occur to achieve its objectives. Individual processes will themselves be composed of lower-level sub-processes or tasks designed to achieve their objectives [26]. These will involve the application of selected methods and tools. Individual and team competencies will be required to effectively implement these processes and activities. A number of references in the literature define sets of processes (called software process models) relevant to software in general and to safety-critical software. These are reviewed in sections 2.2.2 and 2.2.3.

Real software lifecycles are inherently unpredictable. They are influenced by a combination of technical, organisational, sociological and economic factors. This means that unforeseeable events always have the potential to occur. For instance, a deadline may suddenly move forward; a software tool may not work as expected; an experienced developer may become unavailable. Therefore, while the software lifecycle can, and should, be planned, changes in the face of events will always be necessary [4]. Because of this, every software lifecycle will be unique and involve iteration between lifecycle phases.

2.2.2 General Software Process Models

This section reviews the literature to identify software process models that have been proposed for
software in general.

ISO/IEC FDIS 12207:2007 [26] defines a set of generic processes that can take place during any software lifecycle. Each process is categorised into one of the following process groups according to their relationship with one another and who has responsibility for performing them [26]:

 Agreement Processes: processes defining activities necessary to establish an agreement
between two organisations;
 Organisational Project-Enabling Processes: processes managing the organisation’s capability
to acquire and supply products or services through the initiation, support and control of
projects;
 Project Processes: processes used to plan, execute, assess and control the progress of a
project and support management objectives;
 Technical Processes: processes used to: define the requirements for a system; transform
these requirements into an effective product; permit consistent reproduction of the product;
use the product; provide and sustain the provision of required services; dispose of the product;
 Software Implementation Processes: processes used to produce a specified system element
(software item) implemented in software;
 Software Support Processes: processes providing a specific focused set of activities for
performing a specialised software process;
 Software Reuse Processes: processes supporting an organisation’s ability to reuse software
items across project boundaries.

The first four process groups in this list are called System Context Processes or System Life Cycle Processes [26]. They relate to the system context in which the software lifecycle occurs and are not specific to software [26]. The remaining process groups contain processes specific to software [26]. Each process is composed of a number of tasks. The ISO/IEC FDIS 12207:2007 [26] processes, reproduced from the standard, are listed in Appendix A.

CMMI-DEV [36] is a model that defines generic processes applicable to any product or service. This therefore includes software. CMMI-DEV [36] replaces the software-specific Capability Maturity Model for Software (CMM) specification, which has been withdrawn. CMMI-DEV [36] defines a number of process areas. A process area is defined as “A cluster of related practices in an area that, when implemented collectively, satisfy a set of goals considered important for making improvement in that area” [36]. Process areas are categorised as one of the following [36]:

 Process Management: cross-project activities related to defining, planning, deploying, implementing, monitoring, controlling, appraising, measuring, and improving processes;
 Project Management: project management activities related to planning, monitoring, and controlling the project;
 Engineering: development and maintenance activities shared across engineering disciplines;
 Support: activities that support product development and maintenance.

Each process area contains a number of specific and generic practices [36]. These are activities that are considered important for achieving a goal associated with the process area [36]. For example, specific practices associated with the Verification process area include Conduct Peer Reviews and Analyze Verification Results. The specific practices correspond to activities in ISO/IEC FDIS 12207:2007’s [26] definition of a process. The CMMI-DEV [36] process areas are listed in Appendix B.

Software process models are also described in the academic literature and textbooks. Examples of such models have been identified in the review of software lifecycle models in section 2.3.3. These models are not reproduced here due to the space constraints on this report. See section 2.3.3 for details. The models identified in section 2.3.3 define processes associated with the production and verification of software only. These processes are equivalent to the software production and verification processes described in ISO/IEC FDIS 12207:2007 [26] and CMMI-DEV [36].

2.2.3 Safety-critical Software Process Models

This section reviews the literature to identify software process models that have been proposed for safety-critical software.

IEC61508 Part 3 [8] specifies requirements for software used in, or to develop, an Electrical/Electronic/Programmable Electronic (E/E/PE) safety-related system. These include mandatory requirements for a number of software lifecycle processes (called phases). These phases and their objectives, reproduced from IEC61508 Part 3 [8], are listed in Appendix C.

IEC61508 Part 3 [8] states that each phase should be divided into elementary activities. It refers the reader to ISO/IEC 12207 for details [8]. The version of ISO/IEC 12207 current at the time of IEC61508 Part 3’s [8] publication was ISO/IEC 12207:1995 [24]. ISO/IEC 12207:1995 [24] differs somewhat from ISO/IEC FDIS 12207:2007 [26]. The current interpretation of this statement is presumably that ISO/IEC FDIS 12207:2007’s [26] tasks should be referred to. IEC61508 Part 3 [8] phases largely map to ISO/IEC FDIS 12207:2007’s [26] processes. IEC61508 Part 3 [8] states that quality and safety assurance procedures should be integrated with the phases’ activities.

IEC61511 “Functional Safety – Safety Instrumented Systems for the Process Industry Sector – Part 1: Framework, Definitions, System, Hardware and Software Requirements” [39] is a sector-specific implementation of IEC61508. It also defines a software process model. This is similar to the model defined in IEC61508 Part 3 [8]. It has not been reviewed in detail, however, due to the time constraints on this project.

DO-178B [34] provides guidance for software for airborne systems and equipment used on aircraft or engines. Its guidance describes a number of software lifecycle processes. These processes and their objectives, reproduced from DO-178B [34], are listed in Appendix D.

Like ISO/IEC FDIS 12207:2007 [26] and CMMI-DEV [36], DO-178B [34] organises its processes into process groups. These are [34]:

 Software development processes group: processes related to developing the software;
 Integral processes group: processes related to verifying, controlling and increasing confidence in the outputs of the software lifecycle processes;
 Software planning process: defines and coordinates the software development processes and integral process activities.

In DO-178B [34] the objectives and outputs of each process can vary with software level (a software level is a measure of safety criticality). A control category is assigned to each process output dependent on the software level [34]. Additionally, the independence with which some process activities must be carried out varies with software level [34].

Def Stan 00-56 Issue 4 “Safety Management Requirements for Defence Systems” [29, 30] provides requirements for the management of safety that can be applied to any UK MOD project and any phase of a project’s life. Def Stan 00-56 Issue 4 Part 1 [29] contains the standard’s mandatory requirements. These are technology independent, however, and therefore a software process model is not defined.

CAP 670 “Air Traffic Services Safety Requirements” [14] contains requirements for Air Traffic Services systems where the software is needed to fulfil a system safety requirement. CAP 670 [14] gives little guidance on software processes. Instead it refers readers to other safety standards [14]. It explicitly mentions IEC61508 Part 3 [8], DO-178B [34] and Def Stan 00-55 (this standard has now been superseded by Def Stan 00-56 Issue 4 [29, 30]).

2.2.4 Comparison of Software Process Models

This section compares the software process models identified in sections 2.2.2 and 2.2.3.

The processes described in each model are listed in Table 1. Each CMMI-DEV [36], DO-178B [34] and ISO/IEC FDIS 12207:2007 [26] process is colour coded according to the category in which the standard places it. The colours assigned to each category for each standard are defined in appendices A, B and D. For example, CMMI-DEV [36] process areas in the Process Management category are colour coded light blue. IEC61508 Part 3 [8] processes are not colour coded as this standard does not categorise processes. Processes from the different standards with broadly similar objectives are shown in the same row of the table. Processes that the literature survey has identified as being typically represented in software lifecycle models (see section 2.3.3 for details) are enclosed in a box with a thick black border. The processes within the box are directly concerned with the production and verification of the software. They implement three of Sommerville’s [18] fundamental activities discussed in section 2.2.1: software specification; software design and implementation; software validation.

From the table, it can be seen that both the safety and general software standards specify equivalent processes within the box. The principal difference between the standards is that some standards combine several processes that are defined individually in others. For example, the CMMI-DEV [36] Technical Solution process is equivalent to three individual IEC61508 Part 3 [8] processes. From this it can be concluded that the same types of software lifecycle processes are associated with the production and verification of safety-critical software as are associated with the production and verification of software in general. Additionally, the high-level objectives of these processes are broadly similar for both types of software. It should be noted, however, that there will be differences in how these objectives are achieved for the two types of software. This conclusion indicates that software lifecycle models for safety-critical software will need to represent the software lifecycle phases represented by software lifecycle models for any type of software.
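To make the idea of cross-standard process equivalence concrete, the sketch below is a minimal illustration (Python, with hypothetical names) of how such a mapping could be recorded. The single entry shown follows one row of Table 1 and reflects this report's reading of the table, not a normative definition from any of the standards.

```python
# Illustrative sketch only: recording broadly equivalent processes across standards.
# The single "coding" entry follows one row of Table 1; it is not a normative mapping.
PROCESS_EQUIVALENCE = {
    "coding": {
        "ISO/IEC FDIS 12207:2007": ["Software Construction Process"],
        "CMMI-DEV": ["Technical Solution"],
        "IEC61508 Part 3": ["Software Design and Development: Code Implementation"],
        "DO-178B": ["Software Coding Process"],
    },
}

def equivalents(activity: str, standard: str) -> list[str]:
    """Return the processes a given standard defines for a broad lifecycle activity."""
    return PROCESS_EQUIVALENCE.get(activity, {}).get(standard, [])

print(equivalents("coding", "CMMI-DEV"))  # ['Technical Solution']
```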

From the table it can also be seen that the general software standards define processes that are not defined in the safety standards. The converse is not true, however, with the exception of a single process from IEC61508 Part 3 [8]. The safety standards do not include organisational-level processes, system lifecycle processes, process improvement processes or software reuse processes. The system lifecycle is outside the scope of both IEC61508 Part 3 [8] and DO-178B [34]. System lifecycle processes are addressed in other related standards, however (IEC61508 Part 1 [6] and ARP4754 [35] respectively). Some of the organisational processes will have a bearing on software quality. It could therefore be argued that these ought to be considered in the safety standards (e.g. a Human Resource Management Process to ensure that safety-critical project staff are sufficiently competent). Software reuse processes will also be relevant to safety-critical software that reuses existing software components.

From the table, it can be seen that some processes will occur over discrete periods of time within the software lifecycle. Processes of this type that are directly related to software (i.e. ignoring the system lifecycle processes) are the processes within the box. The other processes will be performed over the entire software lifecycle. These latter processes are Pressman’s [32] umbrella activities mentioned in section 2.2.1. It is therefore reasonable that only the processes within the box tend to be represented in software lifecycle models, as representing the sequencing of continuous processes with discrete processes at the level of abstraction of software lifecycle models does not add any value. It would also tend to complicate the models. From this it can be concluded that the software lifecycle models identified in section 2.3.3 represent all required software lifecycle phases that can be usefully represented in a model at this level of abstraction.





ISO/IEC FDIS 12207:2007 [26] | CMMI-DEV [36] | IEC61508 Part 3 [8] | DO-178B [34]

Acquisition Process
Supplier Agreement Management
Supply Process

Organisational Process Focus
Organisational Innovation and Deployment
Organisational Process Definition +IPPD
Life Cycle Model Management Process
Organisational Process Performance

Infrastructure Management Process
Project Portfolio Management Process
Human Resource Management Process
Organisational Training
Quality Management Process
Process and Product Quality Assurance
Software Quality Management System
Software Design and Development: support Tools and Programming Languages
Project Planning Process
Project Planning
Software Safety Validation Planning
Software Planning Process
Project Assessment and Control Process
Project Monitoring and Control
Software Quality Management System
Decision Management Process
Decision Analysis and Resolution
Risk Management Process
Risk Management
Hazard and Risk Analysis (defined in IEC61508 Part 1 [6])
System Safety Assessment Process (defined in ARP4754 [35])
Configuration Management Process
Configuration Management
Software Quality Management System
Information Management Process
Measurement and Analysis
Measurement Process
Quantitative Project Management

Stakeholder Requirements Definition
System Requirements Analysis Process
- (Overall Safety Requirements process in IEC61508 Part 1 [6])

System Architectural Design Process
Requirements Development
- (Safety Requirements Allocation process in IEC61508 Part 1 [6])

Requirements Management
Implementation Process
Technical Solution
- (Realisation processes in IEC61508 Part 1 [6])

- (E/E/PES Integration process in IEC61508 Part 2 [7])
System Integration Process
Product Integration
Programmable Electronics Integration (hardware/software)

System Qualification Testing Process
Verification
- (Overall Safety Validation process in IEC61508 Part 1 [6])

Software Installation Process
Product Integration
- (Overall Installation and Commissioning process in IEC61508 Part 1 [6])

Software Acceptance Support Process
Software Operation Process
Software Maintenance Process
Causal Analysis and Resolution
Software Modification
Software Disposal Process
- (Decommissioning or Disposal process defined in IEC61508 Part 1 [6])

Software Requirements Analysis Process
Software Safety Requirements Specification
Software Requirements Process
Software Architectural Design Process
Requirements Development
Software Design and Development: Software Architecture
Software Design and Development: Detailed Design and Development (Software System Design)
Software Detailed Design Process
Software Design and Development: Detailed Design and Development (individual Software Module Design)
Software Design Process
Software Construction Process
Technical Solution
Software Design and Development: Code Implementation
Software Coding Process
Software Design and Development: Software Module Testing
Software Design and Development: Software Integration Testing
Software Integration Process
Product Integration
Programmable Electronics Integration (hardware/software)
Software Verification Process

Integration Process
Software Qualification Testing Process
Verification
Software Safety Validation
Software Documentation Management Process
Software Configuration Management Process
Configuration Management
Software Quality Management System
Software Configuration Management Process
Software Quality Assurance Process
Process and Product Quality Assurance
Software Safety Lifecycle Requirements (not described as a phase)
Software Quality Assurance Process
Software Verification Process
Verification
Software Verification
Software Verification Process
Software Validation Process
Validation
Software Review Process
Integrated Project Management +IPPD
Certification (not described as a process)
Software Audit Process
Functional Safety Assessment
Certification Liaison Process
Software Problem Resolution Process
Domain Engineering Process
Reuse Asset Management Process
Reuse Program Management Process

Software Operation and Modification Procedures

Table 1 Software Process Model Comparison
2.2.5 Relationship with the System Lifecycle

This section briefly reviews the literature for information relating to the relationship between the
system and software lifecycles.

Software is always a component of a wider system. As a minimum, this system will consist of the software and the hardware platform upon which it runs [26]. Often, however, a piece of software can be a component of a much larger system, composed of other hardware and software components, human operators or maintainers, and procedures. This relationship is reflected in all the standards discussed in sections 2.2.2 and 2.2.3.

The software lifecycle must therefore interface with the lifecycle of the system of which it is part. A
number of important flows of information need to pass between the system and software lifecycles.
The system requirements allocated to the software and details of the system architecture must flow
from the system to software lifecycles so that the software’s requirements can be derived.
Validation and verification information must flow from the software to system lifecycle to support
the validation and verification of the system.

Safety is a property of an entire system in a particular operating context, rather than a property of any of its constituent parts. An extra dimension to the relationship between the system and software lifecycles for safety-critical software is therefore the need to develop the software to be safe in its system context. In practice this involves identifying how the software can contribute to system-level safety hazards and mitigating these contributions such that the risk associated with the hazards is acceptable [35]. This approach is reflected in all of the safety standards reviewed in section 2.2.3.

2.3 Software Lifecycle Models

The objective of this section is to review the literature to identify what a software lifecycle model is
and what purpose it serves. Different software lifecycle models for both safety-critical software and
software in general are then identified and assessed. Software lifecycle model selection criteria
are also identified.

2.3.1 Definition

This section reviews the literature to identify what a software lifecycle model is.

The following definitions of a software lifecycle model can be found in the literature:

 “framework of processes and activities concerned with the life cycle that may be organised into stages, which also acts as a common reference for communication and understanding” (ISO/IEC FDIS 12207:2007 [26]);

 “A partitioning of the life of a product or project into phases.” (CMMI-DEV [36]. This is the definition for a lifecycle model of any product or service. This may be software);

 “software life cycle models serve as a high-level definition of the phases that occur during development. They are not aimed at providing detailed definitions but at highlighting the key activities and their interdependencies” (ISO/IEC TR 19759 [40]);

 “Lifecycle models describe the interrelationship between software development phases” (The NASA Software Safety Guidebook [31]).

The first point to note is the use of the word model. A software lifecycle model is just that, a model of the real software lifecycle. It is therefore a representation of the software lifecycle from a particular viewpoint. As the ISO/IEC TR 19759 [40] definition states, it is a high-level representation. This means that it is at a high level of abstraction of the real lifecycle (i.e. lower-level details are omitted). This is implied by the other definitions, which talk of stages or phases within the lifecycle. As stated by the last two definitions, software lifecycle models also represent the relationship between these phases. A model may also define the criteria for moving from one phase to another [3]. A software lifecycle model is therefore a high-level representation of the sequence and interrelationship of broad phases within the software lifecycle. Within each software lifecycle model phase a number of lower-level processes and tasks can occur [26]. This is described in the software process models reviewed in previous sections. These detailed activities are usually not described by software lifecycle models, however. A software lifecycle model is therefore only an element of a complete software lifecycle process model. This is shown by the model in Figure 1, reproduced from Pressman [32].


(Figure: four layers, from lowest to highest — Quality Focus, Process, Methods, Tools)

Figure 1 Pressman’s [32] layered Model of the Software Lifecycle

In Figure 1 the software lifecycle is represented in terms of the means by which its activities are performed. The lowest layer is the Quality Focus layer. This represents an organisational focus on quality, including process improvement. This layer corresponds to the organisational-level processes defined in ISO/IEC FDIS 12207:2007 [26] and CMMI-DEV [36] (see section 2.2.2). The next layer is the Process layer. This represents a defined software process. This process provides a framework within which different project activities are performed. The process defined in the process layer would include a software lifecycle model and more detailed definitions of individual processes from the software process model being used. These processes will often be tailored for use on a particular project [26]. The third layer is the Methods layer. This layer represents a set of methods that are selected to perform the activities and tasks in the process layer (e.g. object-oriented design, Fagan inspections, etc.) [32]. The fourth layer is Tools. This represents the selected set of tools that support the implementation of the processes and methods (e.g. development environment, automated testing tool, etc.). CMMI-DEV [36] contains a similar description of a process to that represented in Figure 1. It describes processes as binding together people, procedures and methods, and tools and equipment [36].
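As an illustration of the layered view, the following minimal sketch (hypothetical Python types, not from Pressman) records the four layers as a simple description of the choices made for one project; the example entries are those mentioned in the text above.

```python
# Illustrative sketch of Pressman's layered view: each layer is simply a named
# collection of choices made for a particular project. Entries are examples only.
from dataclasses import dataclass, field

@dataclass
class SoftwareProcessDefinition:
    quality_focus: list[str] = field(default_factory=list)  # organisational quality and process-improvement commitments
    process: list[str] = field(default_factory=list)        # lifecycle model and tailored processes
    methods: list[str] = field(default_factory=list)        # techniques used to perform the process activities
    tools: list[str] = field(default_factory=list)          # tools supporting the processes and methods

example = SoftwareProcessDefinition(
    quality_focus=["organisational quality management system"],
    process=["Waterfall lifecycle model", "tailored ISO/IEC 12207 processes"],
    methods=["object-oriented design", "Fagan inspections"],
    tools=["development environment", "automated testing tool"],
)
```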

All software lifecycle models are high-level abstractions of the real software lifecycle. This level of abstraction can vary, however [21]. This means that different software lifecycle models at different levels of abstraction can exist in a hierarchy. Alexander et al [21] categorise two levels of abstraction for software lifecycle models (a third level, Level 1, is not relevant to software lifecycle models). These are:

 Level 3: models representing the partitioning of the delivery of user requirements over time. In this report, for clarity, Level 3 models are termed Delivery Strategy Models;
 Level 2: models representing the interrelationship of the processes within each broad group in a Level 3 model. In this report, for clarity, Level 2 models are termed Development Models.

Delivery Strategy Models do not define the phases associated with delivering each increment of the software. A Development Model is needed to do this. A combination of a single Delivery Strategy Model and a Development Model for each increment of delivery is therefore required to produce a complete software lifecycle model.

Software lifecycle models can be used to represent the entire software lifecycle or just the portion of it associated with a particular project [26]. Both CMMI-DEV [36] and IEEE 1074-2006 “Standard for Developing a Software Project Life Cycle Process” [13] describe a software lifecycle model being selected for a particular project. As will be discussed in subsequent sections, the appropriateness of a particular software lifecycle model varies according to project circumstances. A software lifecycle model therefore ought to be selected for each project. Sommerville [18] points out that different software lifecycle models can be applied to different components within a software architecture. In this situation, the development of each component would be treated as a subproject.

2.3.2 Purpose

The definition of a software lifecycle model for a project serves a number of purposes:

 It facilitates managing the software lifecycle [26, 40]. It does this by providing a plan for the lifecycle against which progress can be tracked [3, 26, 27]. The provision of a high-level abstraction of the software lifecycle facilitates managing complexity;
 It provides a common basis for project stakeholders to understand and communicate about the software lifecycle [26, 40]. This promotes more effective working [41];
 It improves a software development organisation’s ability to consistently repeat a software process [5] and supports process improvement [40]. Process improvement is widely believed to improve product quality [41]. This belief is manifest in the widespread adoption of process improvement approaches such as ISO 9001:2000 “Quality management systems - Requirements” and CMMI-DEV [36] and its predecessors. In the safety-critical software domain, this belief is manifested in standards such as IEC61508 Part 3 [8] and DO-178B [34], in which the level of safety integrity that can be claimed for the software is linked to the rigour of its development process. Additionally, the implementation of a continuous improvement process is a requirement of some quality standards (e.g. ISO 9001:2000) [5, 38]. Organisations may wish to show compliance with such standards for commercial reasons;
 It may be a requirement for standards conformance. Some general software engineering standards (e.g. The TickIT Guide [5]) and safety standards (IEC61508 Part 3 [8] and DO-178B [34]) require the definition of a software lifecycle model (see section 2.3.5 for details).

The first software lifecycle models were developed to address the problems associated with not using a software lifecycle model at all [18]. In this situation a project is carried out in an informal, ad hoc manner. This is sometimes referred to as the Code and Fix approach. In the Code and Fix approach, typically no software lifecycle model and few or no formally-defined processes are used. The majority of the time is spent coding [28]. Often there is little or no formal project management or quality assurance [28]. Documentation is also often minimal [28]. This approach usually leads to poor software quality and an inability to effectively manage a project. It is interesting to note that despite its weaknesses McConnell [28], writing in 1996, describes the Code and Fix approach as being “…seldom useful, but nonetheless common”. There is clearly a continuum between a completely unplanned Code and Fix approach at one extreme and a completely pre-planned approach at the other. Some software processes occupy the mid-ground in this continuum. Agile methods are an example. Agile methods adopt an informal approach to many aspects of the software lifecycle. Agile methods such as Extreme Programming do have a clearly defined high-level software process model, however [10], which includes a software lifecycle model (see section 2.3.3.5). While informality may be effective for lower-level software lifecycle processes, it is difficult to see a situation where the absence of the high-level plan provided by a software lifecycle model would not have an adverse effect on a project. The definition of a software lifecycle model is therefore essential for almost all software projects.

2.3.3 General Software Lifecycle Models

This section reviews the literature for software lifecycle models that have been proposed for
software in general.

The literature describes a large number of software lifecycle models [21, 40, 18]. Because of the constraints on this project, this review concentrates on widely-documented models. The fact that a model is widely documented is taken to indicate that it is effective under certain circumstances. Many other models also exist.

The software lifecycle models mentioned in the software engineering standards considered in this report have been identified as a starting point. These are:

 ISO/IEC FDIS 12207:2007 [26]: Waterfall, Spiral, Incremental Development and Evolutionary Development models. This standard states that a technical report, ISO/IEC TR 24748, will provide additional information on software lifecycle models [26]. At the time of writing, this report is unpublished, however. In future, it may provide relevant information. ISO/IEC TR 15271:1998 “Information technology - Guide for ISO/IEC 12207 (Software Life Cycle Processes)”, which provides guidance on the application of ISO/IEC 12207:1995 [24], describes three software lifecycle models and provides guidance on tailoring these models [16]. Again, unfortunately it was not possible to obtain ISO/IEC TR 15271:1998 for this project. This may also provide further useful information;
 CMMI-DEV [36]: Waterfall, Spiral, Evolutionary, Incremental and Iterative models;
 The TickIT Guide [5]: Iterative Prototyping, Waterfall, V-model, Rapid Application Development (RAD) models, or a combination of these;
 ISO/IEC TR 19759 (SWEBOK) [40]: Waterfall, Throwaway Prototyping, Evolutionary Development, Incremental/Iterative Delivery, Spiral, Reusable Software Model, and Automated Software Synthesis models.

Additionally, Sommerville [18], in his popular textbook on software engineering, states that most software lifecycle models are based on one of three general models: Waterfall, Iterative Development and Component-based Software Engineering (CBSE).

Some of these models are reviewed in the following subsections, along with the Delivery Strategy Models identified by Alexander et al [21]. Each model’s strengths and weaknesses are identified, as these are of key importance when selecting an appropriate software lifecycle model for a project. This review shows that new software lifecycle models have generally been developed to address the shortcomings of the models current at the time, which is to be expected.
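As a rough illustration of how such strengths and weaknesses might feed a selection decision, the hedged sketch below scores candidate models against weighted project characteristics. The criteria names, weights and ratings are entirely illustrative assumptions; they are not taken from Alexander et al, McConnell, or the criteria proposed later in this report.

```python
# Hypothetical sketch only: weighting project characteristics and rating how well
# each candidate lifecycle model copes with them. All names and numbers are
# illustrative, not criteria proposed by the sources reviewed in this survey.
def score(ratings: dict[str, int], weights: dict[str, int]) -> int:
    """Weighted sum of a model's suitability ratings for one project."""
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

# How important each characteristic is to this particular project (0 = irrelevant).
weights = {"copes with unstable requirements": 3,
           "delivers working software early": 2,
           "suits inexperienced staff": 1}

candidates = {
    "Waterfall":   {"copes with unstable requirements": 1,
                    "delivers working software early": 1,
                    "suits inexperienced staff": 3},
    "Incremental": {"copes with unstable requirements": 3,
                    "delivers working software early": 3,
                    "suits inexperienced staff": 2},
}

best = max(candidates, key=lambda name: score(candidates[name], weights))
print(best)  # the candidate scoring highest for this project's weightings
```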

In reality, software lifecycle models are often hybridised [18]. The hybridisation of two or more models can help address the weaknesses of the individual models. The type of hybridisation that is appropriate will depend on project circumstances.

2.3.3.1 Conventional Development Model

The Conventional Development software lifecycle model is a Delivery Strategy Model [21]. It is shown in Figure 2.

In this model, all the stakeholder requirements the software must satisfy are implemented in a single iteration. This is then delivered to the customer [21]. The software then enters the operation and maintenance phase [21].

This model contrasts with incremental Delivery Strategy Models, in which the software is delivered to the customer in a number of iterations. In these models, each iteration delivers software that implements a subset of its stakeholder requirements; only the final iteration implements all the requirements. Incremental Delivery Strategy Models are described in sections 2.3.3.5, 2.3.3.6 and 2.3.3.7.


(Figure: two sequential phases — “Determine all Requirements and develop Software to meet them”, then “Operation and Maintenance”)

Figure 2 Conventional Development Model (adapted from Alexander et al [21])

It should be noted that software delivery strategy is often dictated by customer requirements. No information on the strengths and weaknesses of this model was identified in the literature. In the opinion of the author, however, these include:

Strengths:

 Simple to understand;
 The customer receives software that satisfies all agreed stakeholder requirements;
 There is no risk of dependencies between different increments of software not being identified (i.e. new functionality in increment N+1 is dependent on functionality in increment N, but the design of increment N does not easily allow the new functionality to be implemented. This can happen if the dependency was not foreseen when developing increment N).

Weaknesses:

 The customer must wait until all requirements are implemented before receiving working software. An incremental Delivery Strategy Model is likely to deliver part of the required functionality faster;
 The customer cannot provide feedback on whether the software meets their needs until the entire development effort is complete. Depending on how well stakeholder needs are understood, this can increase the risk of stakeholder needs not being satisfied by the delivered software. This risk could be reduced by applying requirements analysis techniques that facilitate eliciting and agreeing a complete set of stakeholder requirements (e.g. Throwaway Prototyping, simulation);
 Fewer signs of progress are visible compared to incremental Delivery Strategy Models;
 The development project is likely to be larger and more complex in comparison to incremental Delivery Strategy Models.

2.3.3.2 Waterfall Model

The Waterfall software lifecycle model is a Development Model [21]. It is shown in Figure 3.

The Waterfall model was defined by Royce in 1970 and later modified by Boehm in 1976 [27]. It is probably the best known of all software lifecycle models [28]. It was defined as an attempt to manage the increasing complexity of software development [27]. This complexity could not be effectively managed using the prevalent Code and Fix approach. A number of variations of the Waterfall model exist, which use different terminology [27]. They all share the following basic principles, however.

The Waterfall model divides the software lifecycle into a sequence of discrete phases. The phases progress from the initial requirements definition activity, through varying levels of design, implementation and then verification. These phases correspond to the processes enclosed within the thick black border in Table 1. Some Waterfall models also include an operation and maintenance phase. This is equivalent to combining the Waterfall in Figure 3 with the Conventional Development model (see section 2.3.3.1). The operation and maintenance phase is not required if the Waterfall is used to model a development-only project, however.


(Figure: six sequential phases with feedback arrows — Software Concept, Requirements Analysis, Architectural Design, Detailed Design, Coding and Debugging, System Testing)

Figure 3 Waterfall Model (reproduced from McConnell [28])

Like any Development Model, the Waterfall can be used in conjunction with both the Conventional
Development and iterative Delivery Strategy Models (see sections 2.3.3.5, 2.3.3.6 and 2.3.3.7), as
the Development Model for each increment. In this situation, early lifecycle phases such as
Software Concept, Requirements Analysis and Architectural design may only be carried out once
during the first delivery increment, while the Waterfall model for later increments starts with the
Detailed Design phase.

In the Waterfall, the phases are carried out in sequence. Only one phase is carried out at a time [28]. At the end of each phase, a review is held to determine whether the transition can be made to the next phase [28]. The Waterfall encourages an approach where the activities in each phase are completed before progressing to the next phase (e.g. requirements are completely specified before design starts) [28]. This is sometimes referred to as the “Big Design Up Front” approach. The philosophy behind this approach is that errors are more expensive to correct the later in the software lifecycle they are discovered. By carefully completing each phase, the likelihood of errors being detected earlier is increased. In the Waterfall, it is permissible to move backwards between phases as well as forwards. This is indicated by the feedback arrows in Figure 3. Once a previous phase has been re-entered, however, it is necessary to pass through all subsequent phases. The Waterfall therefore caters for iteration between phases.
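The sequencing rule just described can be shown in a minimal sketch (hypothetical Python, with phase names taken from Figure 3): progress is one phase at a time, and re-entering an earlier phase means passing through every subsequent phase again.

```python
# Illustrative sketch of the Waterfall sequencing rule described above.
# Phase names are those in Figure 3; the function itself is hypothetical.
from typing import Optional

PHASES = ["Software Concept", "Requirements Analysis", "Architectural Design",
          "Detailed Design", "Coding and Debugging", "System Testing"]

def remaining_phases(current: str, rework_in: Optional[str] = None) -> list[str]:
    """Phases still to perform: either simply the later ones, or, if an earlier
    phase must be re-entered, that phase and every phase after it."""
    i = PHASES.index(current)
    if rework_in is not None and PHASES.index(rework_in) < i:
        return PHASES[PHASES.index(rework_in):]
    return PHASES[i + 1:]

# A problem found during Detailed Design that requires revisiting the requirements:
print(remaining_phases("Detailed Design", rework_in="Requirements Analysis"))
```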

In the Waterfall, the main outputs of each phase are documents [28]. Documents produced in one phase are inputs to the subsequent phase. For this reason, the Waterfall is sometimes referred to as a document-driven model. Ideally, it should be possible for these documents to be complete enough to pass to a different team with responsibility for the next phase [28].

The Waterfall is effective under certain circumstances, but has a number of well-known weaknesses [28]. During the late 1970s and 1980s, these weaknesses prompted the development of a number of alternative software lifecycle models, to attempt to overcome them [27]. Some of these models are discussed in subsequent sections. Despite this, a survey published in IEEE Software in 2003 showed that more than forty percent of organisations surveyed use the Waterfall model [18].
Strengths:

 Works well when the software’s requirements are stable and its technologies and technical
solutions are well understood. In this situaiton, it can be the most efficient software lifecycle
model for small projects
28
. When requirements are stable and technologies and technical
solutions are well understood, complete requirements and designs can be produced prior to
starting subsequent lifecycle phases with a reasonably low risk of them needing to significantly
change. In this situation, the model facilitates the detection of errors early in the lifecycle
28
. It
does this by allowing the complete requirements and design specifications to be verified before
they are used. The model becomes more efficient as the need for rework and the effort
associated with rework reduces. The effort associated with rework should reduce as the use of
automation in software lifecycle processes increases (e.g. automated code generation; model
transformation; automated testing). A relationship therefore exists between the Waterfall
model’s efficiency and the software processes and tools that are used in conjunction with it.
This is true for other software lifecycle models;
 Works well for well-understood but complex projects as complexity is dealt with in an orderly
manner
28
. This is due to the model decomposing the problem into a number of stages, which
are dealt with one at a time;
 Can work well when software quality is more important than project costs and timescales
28
.
The documentation-driven approach and the strict requirement to re-enter previous phases
when changes are required can be expensive and time-consuming, but can promotes quality.
Paradoxically however, the Waterfall is often associated with poor reliability and failing to meet
the user’s needs
27
. This contradiction may be due to the model being applied in unsuitable
situations (see weaknesses), or to short-cuts being made when budgets and timescales are
running out;
 Works well when project staff are inexperienced or have poor technical skills
28
. This is due to
the model organising activities in a straightforward, easy-to-understand way;
• Documentation is produced that can be used as the basis of software testing and maintenance [27]. For safety-critical software it can also be used as safety assurance evidence;
• All planning is carried out at the start of the lifecycle, reducing the amount of planning effort [28]. Depending on project events however, this plan may need updating.

Weaknesses:

• Does not work well when requirements are initially not well understood. The model requires requirements to be completely defined before the start of the architectural design phase. However, when requirements are not initially well understood this is not always possible [3, 28]. In this situation, requirements can often only be fully specified after partially building the software [5, 28] (i.e. carrying out some design and/or coding). Requirements may not be initially well understood for a number of reasons. These include: stakeholders often being uncertain about their needs [32, 5]; the software’s application domain being unfamiliar [28] or complex [5]; and technological constraints only emerging after the software has been partially constructed. This weakness, and the following one, are likely to be two reasons why the Waterfall model is often associated with failing to satisfy stakeholder needs;
• Does not work well when requirements are unstable. Requirements may change for a number of reasons in addition to a poor initial understanding. These reasons include unpredictable external factors (e.g. a change in customer priorities) and the identification of requirements that have been forgotten [3, 28]. In the Waterfall model, changes to requirements later in the lifecycle require returns to earlier lifecycle phases, which can be expensive or impractical;
• Does not work well when the software’s technical solution and technologies are initially not well understood. The model requires the architectural design to be completely defined at the start of the detailed design phase, and the detailed design to be completely defined at the start of the coding phase. However, if the technical solution is not well understood this is not always possible. A reason for this is that the technical feasibility of a new design can often only be verified by implementing it. Additionally, what is possible may be limited by technological constraints which only emerge after the software has been partially constructed. These factors mean that designs can often only be fully specified after partially building the software (i.e. carrying out some lower-level design and/or coding);
• It may be difficult to return to previous phases to correct mistakes [28]. There may be political problems associated with doing this if artifacts from previous phases have already been agreed with others [28] (e.g. another organisation is already using an agreed interface specification);
• The documentation-driven approach and the strict requirement to re-enter previous phases when changes are required can be expensive and time-consuming. Development time is likely to be increased by the requirement that only one phase should be performed at a time. Because of this, the Waterfall model is not well suited to rapid development [28]. It is also often associated with late and over-budget delivery [27];
• System testing occurs late in the lifecycle. This may mean that missing [28] or misunderstood requirements are exposed late in the lifecycle;
• There is little visible sign of working software until the end of the project [28]. This is the case when the Waterfall model is used in conjunction with a Conventional Development Delivery Strategy Model (see section 2.3.3.1). There will be more visible signs of progress when the Waterfall model is used in conjunction with an iterative Delivery Strategy Model however;
• Some lifecycle processes take place in more than one of the Waterfall model’s phases [28]. It can be difficult to implement these processes when phases must be carried out sequentially [28].

2.3.3.3 Spiral Model

The Spiral model does not fall into either of the categories proposed by Alexander et al [21] and discussed in section 2.3.1. This model is principally concerned with identifying and mitigating project risk. It is at a higher level of abstraction from the software lifecycle than Development Models. The Spiral model is shown in Figure 4.

The Spiral model was introduced by Boehm in 1988, as an attempt to address weaknesses associated with other software lifecycle models [3]. Unlike the Waterfall model, which is document-driven, the Spiral model is risk-driven [3]. McConnell describes the Spiral model as a best practice approach [28].

In the Spiral model, the software lifecycle is represented as a series of revolutions around an axis perpendicular to the page. Each revolution is split into four phases, represented by the model’s four quadrants. Each revolution addresses a particular risk associated with the project. A revolution starts in the top-left quadrant. In this quadrant, the objectives for the aspect of the product considered in the current revolution are defined (e.g. performance, functionality, etc.), as are the alternative ways to meet these objectives and the constraints associated with the application of each alternative (e.g. cost, schedule, interface) [3]. The top-right quadrant is then entered. In this quadrant, the alternatives are evaluated against the objectives and constraints [3]. This activity often identifies uncertainties which can be sources of project risk [3]. If this is the case, a cost-effective strategy is identified to reduce the risk [3]. Examples of risk reduction strategies are prototyping, simulation, benchmarking, reference checking, user questionnaires and analytic modelling [3]. The activities in the bottom-right quadrant are then performed. The activities carried out in this quadrant address some of the major remaining risks [3]. They are therefore project-specific. Each activity relating to specifying the software is followed by a validation activity [3]. The bottom-left quadrant is then entered. In this quadrant the next revolution is planned [3]. A review is conducted at the end of each revolution by the main people and organisations concerned with the product [3]. The products produced in the revolution and the plans for the next revolution are reviewed, with the aim of gaining commitment from everybody for the next revolution [3]. At the end of a revolution, the product may be partitioned into a number of components, each of which will start its own revolutions [3]. This will lead to a number of parallel revolutions being executed for each component [3]. The model ends either when the software becomes operational, or earlier if the project is abandoned [3]. A new Spiral model is started when a software modification is envisaged [3]. Although Figure 4 shows four revolutions around the axis, any number of revolutions can be performed as required by the project [28].


Figure 4 Spiral Model (reproduced from Boehm [3])

The Spiral model is at a high level of abstraction from the real software lifecycle. The only requirement is that the four quadrant activities are carried out in each revolution. The model does not specify the sequence of software lifecycle processes (Figure 4 is just an example). Instead this is determined by project risks. Because of this, a Spiral model will vary from project to project, depending on the project’s most significant risks and the effectiveness of different strategies for their reduction [3]. In some projects it may resemble a Waterfall model, in others an Evolutionary Development model (see section 2.3.3.6), and in others a hybrid of different models [3].

In the Spiral model, the amount of time devoted to activities such as planning, configuration management, quality assurance, verification and testing is determined by the risk associated with them [3]. Another feature of the Spiral model is that specifications do not contain a uniform level of detail; instead, higher-risk areas are elaborated in greater detail, while lower-risk areas have less detail [3].
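As a rough illustration of this risk-driven control flow (and not a formal rendering of Boehm’s model), the Python sketch below iterates revolutions while significant risks remain. The risk names, numerical ratings, acceptability threshold and the assumption that each strategy removes most of the targeted risk are all invented for the example.

risks = {                                   # assumed example risks with ratings between 0 and 1
    "requirements uncertainty": 0.8,
    "unproven technology": 0.6,
    "performance shortfall": 0.3,
}
ACCEPTABLE = 0.2                            # assumed threshold for treating a risk as resolved

def choose_strategy(risk):
    # Quadrant 2: evaluate alternatives and pick a risk-reduction strategy.
    return "prototyping" if "technology" in risk or "requirements" in risk else "analytic modelling"

def reduce_risk(risk, strategy):
    # Quadrant 3: carry out the strategy plus the associated development and validation work.
    print("  reducing '" + risk + "' via " + strategy)
    return risks[risk] * 0.25               # assume the strategy removes most of the targeted risk

revolution = 0
while any(rating > ACCEPTABLE for rating in risks.values()):
    revolution += 1
    target = max(risks, key=risks.get)      # Quadrant 1: set objectives for the riskiest aspect
    strategy = choose_strategy(target)
    risks[target] = reduce_risk(target, strategy)
    print("Revolution", revolution, "ends with a review and a plan for the next revolution")  # Quadrant 4
print("Remaining risks are acceptable; the product can proceed towards operation")

The sketch also reflects the point made above about effort allocation: work is only expended on an aspect of the product while its associated risk remains above the acceptable level.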

The Spiral model’s emphasis on risk mitigation has obvious relevance for safety-critical software. Schmedake has presented a paper entitled “Software Safety in a Spiral Development” at the 20th International System Safety Conference [11]. Unfortunately, it was not possible to obtain a copy of this paper. No other references have been identified relating to the use of the Spiral model to develop safety-critical software. An interesting question is: could the Spiral model be used to address safety risk in addition to project risk? Additionally, the way in which the Spiral model facilitates the identification and evaluation of alternative solutions may be useful as part of a Health and Safety at Work Act (1974) (HSWA) compliance strategy. This might be the case if the model could be used to help identify, assess and select risk mitigation strategies from a number of potential alternatives as part of an ALARP argument. An examination of the potential use of the Spiral model for safety-critical software is beyond the scope of this project however. This would be further work.
Strengths:

• The identification and evaluation of alternatives highlights the potential for software reuse at an early stage [3];
• Products tend to be designed for change where the need for change is identified [3];
• Quality objectives for the software can be accommodated by defining them as objectives and constraints [3];
• The risk analysis, validation and commitment activities help remove errors and less beneficial options early [3];
• Identifying the risks associated with the project allows the amount of effort that needs to be expended on its different activities to be determined (i.e. enough to reduce the risk to an acceptable level) [3];
• Initial development and maintenance are treated in the same way [3];
• Can be applied to systems comprised of hardware and software as well as to software alone [3];
• Works well on internal software development [3];
• Provides as much management control as the Waterfall model by including a review at the end of each revolution [28].

Weaknesses:

• More work is needed for it to be used for software produced by other organisations [3];
• Reliance is placed on the ability of those involved to identify and manage project risk [3, 28];
• It is complicated [28];
• Where the lifecycle is straightforward and project risks are not high, the risk management provided by the Spiral model may not be needed [28];
• As risk decreases, cost tends to increase [28];
• The Spiral model needs further elaboration so that it is consistently understood and applied [3].

2.3.3.4 Waterfall Variant Models

As discussed in section 2.3.3.2, the Waterfall model has a number of weaknesses. Variations on the Waterfall model have been proposed that address some of these weaknesses. McConnell states that Waterfall variants are often more effective than the Waterfall model itself [28]. Waterfall variants, like the Waterfall model, are Development Models [21].

The first Waterfall variant identified in this review is the Sashimi model. The Sashimi model gets its name from a Japanese style of presenting raw fish that resembles its structure [28]. The Sashimi model is shown in Figure 5.

The Sashimi model addresses the Waterfall model’s weakness of only allowing one phase to be performed at a time [28]. The Sashimi model represents the same phases as the Waterfall model. Sequential phases overlap however. This indicates that work can be carried out concurrently in more than one phase. For instance, aspects of the requirements analysis, architectural design and detailed design phases can occur in parallel [28]. In order to maintain quality when using the Sashimi model, it will be important to ensure that partially-completed outputs from one phase are properly verified and configuration controlled before being used as an input to a subsequent phase.

Strengths:

• Allowing phases to be performed in parallel has the potential to improve development speed;
• Allowing phases to be performed in parallel can help resolve uncertainties associated with the software’s requirements and technical solution. These can be resolved by carrying out exploratory design and implementation work to improve understanding.

[Figure shows the phases Software Concept, Requirements Analysis, Architectural Design, Detailed Design, Coding and Debugging and System Testing as overlapping blocks.]

Figure 5 Sashimi Model (reproduced from McConnell [28])

Weaknesses:

• The ends of lifecycle phases are less clearly defined, making it harder to track project progress [28];
• A complete documented output of the previous phase (e.g. a requirements specification) is not available to those performing subsequent phases [28]. This can lead to misunderstandings and mistaken assumptions [28]. This will be the case if not all dependencies between the specified and unspecified parts of the output are foreseen. If this happens, rework will be required when the dependency is finally identified. This risk could be mitigated by including risk assessment activities to assess the risk of dependencies between parts of outputs that are produced at different times. The risk reduces for small project teams in which the same individuals perform all phases [28]. This is due to there being fewer communication interfaces and thus fewer opportunities for misunderstanding and failure to transmit information.

The next Waterfall variant identified in this review is the V-model (or Vee model). The version of the V-model described in IEC61508 Part 3 [8] is reproduced in Figure 13 in section 2.3.4.2. The V-model was developed by NASA in 1987. It is essentially the Waterfall model bent in half around the software implementation phase. It differs from the Waterfall model however in that it highlights the relationship between the different levels of software design decomposition (shown on the left-hand side of the V) and the different software verification phases (shown on the right-hand side of the V). The verification phase that verifies the outputs of a design phase is shown opposite that design phase in the V.

Strengths:

• Highlights the relationship between design and verification phases.

Weaknesses:

• As for the Waterfall model.

The next Waterfall variant identified in this review is the Waterfall with Sub-projects model. This is
shown in Figure 6.

This model is essentially the same as the Waterfall model, except that different components within the software architecture are developed in separate sub-projects. Each sub-project can progress at its own pace, independently of the others. This attempts to overcome the Waterfall model’s limitation of only allowing work to be performed in one phase at a time. Only two sub-projects are shown in Figure 6 for clarity, but the number is not restricted. The components developed in each sub-project are integrated during the System Testing phase.



[Figure shows shared Software Concept, Requirements Analysis and Architectural Design phases, followed by parallel sub-projects of Detailed Design, Coding and Debugging and Sub-system Testing, which converge at System Testing.]

Figure 6 Waterfall with Sub-projects Model (reproduced from McConnell [28])
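The concurrency that this model permits can be sketched as follows. The fragment is illustrative only; the component names are invented, and the use of a thread pool simply stands in for sub-project teams working independently once the shared architectural design is complete.

from concurrent.futures import ThreadPoolExecutor

def shared_phases():
    # Software Concept, Requirements Analysis and Architectural Design are performed once;
    # the architecture identifies the components to be developed in separate sub-projects.
    return ["flight_planning", "navigation"]           # assumed components

def sub_project(component):
    # Detailed Design, Coding and Debugging and Sub-system Testing for one component.
    return component + ": sub-system tested"

def system_testing(results):
    print("Integrating and system testing: " + ", ".join(results))

components = shared_phases()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(sub_project, components))  # sub-projects proceed independently
system_testing(results)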

Strengths:

• Each sub-project can proceed at a different pace. For example, the design and implementation of one component can proceed immediately, while the development of other components is held up while understanding is improved [28] or staff become available;
• The complete software architectural design is created before detailed design proceeds. This promotes the identification of dependencies between components being developed in different sub-projects.

Weaknesses:

• As for the Sashimi model, there is a risk that not all dependencies between different components are foreseen. The creation of the software architecture reduces this risk however (see above). Rework may be required when these dependencies emerge [28];
• Although not identified from the reviewed literature, an additional weakness of this model is that the reintegration of components occurs late in the lifecycle (i.e. at system testing). Integration problems will therefore only be highlighted late in the lifecycle, when they are more expensive to fix. The model could be adapted however so that the components developed in the sub-projects are individual modules within a software application. In this situation, reintegration would occur during the module integration testing phase. Integration problems would therefore be highlighted earlier in the lifecycle. This situation would be similar to that in the Waterfall model, except that module reintegration could commence when only some of the modules are completed.

The last Waterfall variant identified in this review is the Risk-reduction Waterfall model [28]. The Risk-reduction Waterfall is a hybrid of the Waterfall model and the Spiral model. It aims to address the Waterfall model’s weaknesses when the software’s requirements and design cannot be fully defined at the start of a project [28]. It does this by starting the lifecycle with a spiral in which risks (e.g. requirements uncertainty or technology risk) are addressed as in the Spiral model. Once the significant risks have been resolved, the lifecycle changes into the Waterfall model. A possible Risk-reduction Waterfall is shown in Figure 4, in which the final quadrant of the outer spiral is equivalent to the Waterfall model. The transition point between the Spiral model and the Waterfall model can vary however, depending on the perceived risks [28]. For instance, if the risks are associated with requirements, the Waterfall model could start at the Architectural Design phase. However, if there are risks associated with the design, the Spiral model could incorporate part of the design phases.
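A minimal sketch of this hybrid structure, assuming an invented list of significant risks and Waterfall phase names, is given below. The point at which the loop ends and the sequential phases begin corresponds to the variable transition point discussed above.

significant_risks = ["requirements uncertainty", "unproven display technology"]   # assumed
waterfall_phases = ["Architectural Design", "Detailed Design",
                    "Coding and Debugging", "System Testing"]                      # assumed

# Spiral section: resolve each significant risk before committing to the Waterfall.
while significant_risks:
    risk = significant_risks.pop(0)
    print("Spiral iteration: prototyping to resolve '" + risk + "'")

# Waterfall section: the remaining phases are performed one at a time, as in section 2.3.3.2.
for phase in waterfall_phases:
    print("Waterfall phase: " + phase)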

Strengths:

• Helps resolve risks early, reducing the likelihood of rework later in the lifecycle [28]. The Spiral part of the model has the strengths of the Spiral model (see section 2.3.3.3). Once risks are resolved, the model becomes the Waterfall model and possesses its strengths (see section 2.3.3.2).

Weaknesses:

• As for the Spiral model (see section 2.3.3.3);
• The Waterfall section of the model still possesses those weaknesses of the Waterfall model not associated with a lack of understanding of the requirements or the design.

2.3.3.5 Incremental Development Model

Incremental Development (sometimes also called Staged Delivery [28]) is a Delivery Strategy software lifecycle model [21]. The Incremental Development model is shown in Figure 7.


[Figure shows Software Concept, Requirements Analysis and Architectural Design phases, followed by Stages 1, 2 and 3, each comprising detailed design, code and debug, test and delivery.]

Figure 7 Incremental Development Model (reproduced from McConnell [28])

This model addresses the Waterfall model’s weakness of not providing the customer with any working software until the software completely satisfies all stakeholder needs [28]. In the Incremental Development model, the requirements for the entire software are identified at the start of the lifecycle [28, 21]. A conscious decision is then made to split the delivery of these requirements over a number of separate software issues [21]. The stakeholder requirements and software architectural design are produced as in the Waterfall model [19, 28, 21]. The software is then delivered to the customer in a number of increments or stages [19]. Each increment delivers a subset of the stakeholders’ needs [28, 21]. It is necessary to produce the software’s architectural design at the start of the lifecycle so that the dependencies between different software modules can be defined. The modules to be delivered in each increment can then be developed taking these dependencies into account. In some circumstances, the use of an Incremental Development model may be a project requirement. For example, in the safety-critical field, Integrated Modular Avionics (IMA) developed to the DO-297 standard is accepted in an incremental manner [2]. Software developed for such systems may therefore need to be developed incrementally. Agile methods use Incremental Development models. For example, the software lifecycle model used by the Extreme Programming method [10] is very similar to that shown in Figure 7.
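The role of the architectural design in scheduling modules into increments can be illustrated with a simple dependency-ordering sketch. The modules and their dependencies are invented for the example, and a real project would also group modules so that each increment delivers useful customer functionality rather than simply the next set of ready modules.

dependencies = {                      # assumed modules and the modules they depend on
    "user_interface": {"flight_logic"},
    "flight_logic": {"sensor_io"},
    "sensor_io": set(),
    "logging": set(),
}

increments = []
delivered = set()
while len(delivered) < len(dependencies):
    # A module is ready for an increment once everything it depends on has been delivered.
    ready = sorted(module for module, deps in dependencies.items()
                   if module not in delivered and deps <= delivered)
    if not ready:
        raise ValueError("Circular dependency: the increments cannot be scheduled")
    increments.append(ready)
    delivered.update(ready)

for number, modules in enumerate(increments, start=1):
    print("Increment " + str(number) + ": " + ", ".join(modules))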

If the Incremental Development model were used to develop safety-critical software, it would be
necessary to consider whether each increment would be acceptably safe to operate. This is an
issue, as early increments of the software may not implement all of the safety requirements
implemented by the software’s final increment. It might be possible to address this problem by
implementing other safety risk mitigations external to the software (e.g. other safety systems,
procedures) while early increments of the software are operational. These mitigations would need
to reduce the risk that will eventually be mitigated by safety requirements that are yet to be
implemented in the software.

The term Incremental Development is sometimes also used as a synonym for Evolutionary Development (e.g. in The TickIT Guide [5]). From the results of this literature review however, it seems that the term Evolutionary Development usually describes the software lifecycle model discussed in section 2.3.3.6.

Strengths:

• The customer receives working software before the end of the project [28]. As well as satisfying some of the stakeholders’ needs early, this provides visible signs of progress [28]. The cost expended before some functionality is delivered to the customer is less compared to the Waterfall model [27];
• Small projects to develop individual increments tend to be less risky than larger projects developing the entire software [28]. This is likely to be because they are smaller and simpler.

Weaknesses:

• Requires careful management to ensure that each increment delivered to the customer provides useful functionality [28]. Resources also need to be carefully managed to ensure that each increment is delivered on time [28];
• Failure to identify all dependencies between different increments can lead to the late discovery that functionality required for an increment is only scheduled to be delivered in a later increment [28];
• An iterative approach does not suit a customer procurement process that involves defining a single requirements specification as part of the contract [18].

2.3.3.6 Evolutionary Development Model

Evolutionary Development is a Delivery Strategy software lifecycle model [21]. It is shown in Figure 8.


[Figure shows Software Concept, Preliminary Requirements Analysis and Design of Architecture and System Core, followed by a repeated cycle of Develop a Version, Deliver the Version, Elicit Customer Feedback and Incorporate Customer Feedback, ending with Deliver Final Version.]

Figure 8 Evolutionary Development Model (reproduced from McConnell [28])

The Evolutionary Development model was developed to address the Waterfall model’s weakness that the software’s requirements and design must be completed before subsequent lifecycle phases can commence. As mentioned previously, in certain circumstances the requirements and/or technical solutions will not be well enough understood to specify them completely at the start of the project.

In the Evolutionary Development model, a subset of the software’s requirements is initially implemented as a useable version of the software [28, 27]. This is then delivered to the customer [28, 27]. The subset of requirements implemented by this version will be well-understood core stakeholder requirements [28, 27]. The customer provides feedback on the use of the software [28, 27]. The software is then modified based on this feedback [28, 27]. The modified version of the software is then delivered to the customer. This cycle continues until either the customer is satisfied with a delivery or resources to fund further increments are exhausted [28]. In Evolutionary Development, unlike Incremental Development, the requirements for the next increment are not known until the previous increment has gone into operation and been evaluated [21].
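The deliver, feedback and modify cycle can be expressed as a small loop, sketched below. The simulated feedback, the resource limit and the stopping conditions are assumptions made for illustration; they correspond to the two ways in which the cycle can end described above.

def develop_version(requirements):
    return "software implementing: " + ", ".join(requirements)

requirements = ["well-understood core requirement"]                     # assumed starting point
simulated_feedback = ["add map overlay", "improve response time", ""]   # "" means satisfied
resources_for_increments = 5                                            # assumed resource limit

for version, feedback in enumerate(simulated_feedback, start=1):
    if resources_for_increments == 0:
        print("Resources exhausted before the customer was satisfied")
        break
    print("Delivered version " + str(version) + ": " + develop_version(requirements))
    if not feedback:
        print("Customer satisfied; final version delivered")
        break
    requirements.append(feedback)        # the next increment's requirements only emerge now
    resources_for_increments -= 1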

Evolutionary Development is very similar to Evolutionary Prototyping (section 2.3.3.7) [28], and the terms are sometimes used synonymously. In Evolutionary Prototyping however, the initial software increments implement less well-understood requirements, rather than well-understood requirements. This is done with the intention of gaining a better understanding of these requirements by using the software as a prototype [28, 27]. McConnell describes Evolutionary Development as a combination of Evolutionary Prototyping and Incremental Development [28]. The greater the uncertainty about the requirements, the more it resembles Evolutionary Prototyping; the greater the certainty about the requirements, the more it resembles Incremental Development [28].

Strengths:

• Helps identify stakeholder needs when these are initially unclear;
• All the strengths associated with Incremental Development (section 2.3.3.5).

Weaknesses:

• The design of earlier increments may not be flexible enough to accommodate unforeseen requirements for future increments [3]. This can lead to the need to refactor the design and code. The alternative is to attempt to modify the unsuitable design or code. This can lead to a poor architectural design and code that is difficult to maintain [3]. The refactoring approach is adopted by Agile methods such as Extreme Programming [10]. In these methods, the cost associated with refactoring is reduced due to the use of relatively informal development methods. Where detailed design documentation needs to be maintained and the impact of changes needs to be carefully assessed, as is the case for safety-critical software, the cost will be much greater;
• Although not mentioned in the reviewed literature, the use of the Evolutionary Development model for safety-critical software would need to address the issue that it will be potentially unsafe to use the software in operation while there is uncertainty over whether its functionality is correct. It may be possible to avoid this problem however by evaluating incremental versions of the software on test rigs.

2.3.3.7 Evolutionary Prototyping Model

Evolutionary Prototyping is a Delivery Strategy software lifecycle model. This model is identical to
the Evolutionary Development model shown in Figure 8, with the exception that the “Design of
Architecture and System Core” phase is replaced with a “Design most visible Parts of Software”
phase.

As discussed in the previous section, Evolutionary Prototyping is very similar to Evolutionary Development. In the Evolutionary Prototyping model, the stakeholder requirements for the software are uncertain at the start of the lifecycle. Typically, the most visible parts of the software are developed in the first increments [28, 32]. Successive increments are then delivered, used and evaluated in the manner of Evolutionary Development. The software concept evolves as different increments are delivered [28]. This process continues until the customer and software developer agree to stop the process