[28]. An important feature of the Evolutionary Prototyping model is that the development of each evolution is carried out in a way that produces quality, maintainable software [28, 27]. This involves requirements capture and software design activities [28]. This is in contrast to the Throwaway Prototyping approach, in which the prototype is developed in a “quick and dirty” manner before being discarded [27]. Throwaway Prototyping is described as a software lifecycle model by ISO/IEC TR 19759 (SWEBOK) [40]. In the author’s opinion, however, it is a method for increasing understanding during the software lifecycle’s requirements or design phases rather than a software lifecycle model. It does not conform to the definitions of a software lifecycle model described in section 2.3.1. Adequate user involvement is essential to prototyping [5]. This is required in order to receive feedback on the prototype. As prototyping involves making many decisions related to the software’s requirements and design, a greater level of management control is often required compared to the Waterfall model [5].

Strengths:

• Useful in project circumstances where stakeholder requirements are unclear or changing rapidly, where the application domain is unfamiliar, or where the best technical solutions are unclear [28];
• Facilitates addressing high-risk areas early [5];
• All the strengths associated with Incremental Development (section 2.3.3.5).

Weaknesses:

• All the weaknesses associated with Evolutionary Development (section 2.3.3.6);
• It is not possible to tell at the start of the project how much time and effort will be required to deliver acceptable software [28];
• Technologies may be selected that are suited to developing a prototype quickly [32]. These may be used in the final software, even though they are not the most appropriate [32];
• Can easily degenerate into Code and Fix [28].

2.3.3.8 Component-based Software Engineering Model

The Component-based Software Engineering (CBSE) software lifecycle model is a Development
model. It is shown in Figure 9.



[Figure 9 comprises the phases: Requirements Specification; Component Analysis; Requirements Modification; System Design with Reuse; Development and Integration; System Validation.]

Figure 9 CBSE Software Lifecycle Model (reproduced from Sommerville [18])

CBSE was developed in the late 1990s as a means of achieving the widespread software reuse that had failed to materialise through object-oriented development [18]. It involves implementing software by integrating loosely-coupled, pre-existing software components [18]. These components are typically executables that are standardised to operate within a particular framework (e.g. CORBA) [18]. CBSE is developing into a mainstream approach to software development and is currently mainly used to develop enterprise information systems [18].

Figure 9 is a generic CBSE software lifecycle model. The Requirements Specification phase is similar to this phase in other software lifecycle models [18]. The exception is that the requirements are not defined precisely and stakeholders are encouraged to be flexible about them [18]. This is done to increase the range of pre-existing components that will be available to satisfy the requirements [18]. The Component Analysis phase involves identifying pre-existing components that can satisfy the requirements specification [18]. Components may not be available that satisfy all requirements [18]. In this case, new software will need to be developed to fill the gaps [18]. The Requirements Modification phase involves altering the requirements to match the characteristics of the identified components [18]. If requirements cannot be altered, the Component Analysis phase is revisited in an attempt to identify other components that can better satisfy the requirements [18]. The System Design with Reuse phase involves designing a system architecture that integrates the selected components and any new software with a component model infrastructure (e.g. CORBA, Enterprise JavaBeans or COM+) [18]. This infrastructure provides services that support the execution of the components [18]. The Development and Integration phase involves implementing any new software and integrating this with the reused components [18]. This usually involves writing adaptor components that reconcile incompatibilities between the interfaces of communicating components [18]. System Validation is similar to this phase in other software lifecycle models [18].
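
As an illustration of the adaptor idea, the sketch below uses hypothetical component interfaces (it is not drawn from Sommerville [18]): it wraps a reused component whose provided interface does not match the interface a client component requires.

```java
/**
 * Minimal sketch of an adaptor component. The component interfaces here
 * are hypothetical; a real CBSE adaptor would wrap a component deployed
 * in an infrastructure such as CORBA, Enterprise JavaBeans or COM+.
 */
public class AdaptorSketch {

    /** Interface the client component requires: readings in Celsius. */
    interface CelsiusSensor {
        double readCelsius();
    }

    /** Reused component: provides readings in Fahrenheit only. */
    static class FahrenheitSensor {
        double readFahrenheit() {
            return 68.0; // stubbed reading for the sketch
        }
    }

    /** Adaptor: reconciles the incompatibility between the two interfaces. */
    static class SensorAdaptor implements CelsiusSensor {
        private final FahrenheitSensor adaptee;

        SensorAdaptor(FahrenheitSensor adaptee) {
            this.adaptee = adaptee;
        }

        @Override
        public double readCelsius() {
            // Convert the adaptee's Fahrenheit reading to Celsius.
            return (adaptee.readFahrenheit() - 32.0) * 5.0 / 9.0;
        }
    }

    public static void main(String[] args) {
        CelsiusSensor sensor = new SensorAdaptor(new FahrenheitSensor());
        System.out.println("Temperature: " + sensor.readCelsius() + " C");
    }
}
```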

Strengths:

• Less software needs to be developed. This reduces costs and risk [18]. Also, development times are shorter in comparison to developing new components [18];
• The reuse of middleware abstracts the developer from lower-level implementation details and allows them to concentrate on application-level functionality [18]. Thus complexity is reduced;
• Loosely coupled components have fewer interdependencies, reducing the risk of interference between components [18];
• Completely-defined component interfaces lead to information hiding. The impact of changes to a component is limited to that component provided that its interface is maintained [18]. This promotes maintainability. It also allows components to be replaced with other components with the same interface [18].

Weaknesses:

• A wide range of components is not commercially available [18] (this comment was made by Sommerville [18], writing in 2007);
• Altering requirements to fit the characteristics of available components can mean that real stakeholder needs are not satisfied [18]. This is a significant issue for safety-critical software, as software safety requirements will be derived from system safety requirements. While there may be some flexibility in the system design to allow for changes in the safety requirements allocated to software, the requirement for the system to operate with acceptable safety cannot be altered;
• COTS components are not under the control of the organisation developing the software [18]. The dependability and non-functional behaviour of COTS components is uncertain if their source code is not provided [18]. This is a significant issue for safety-critical software, where dependability must be assured. Also, it will not be possible to correct bugs [18]. If source code is available, effort needs to be spent on source code analysis;
• Predicting emergent system behaviour when components are integrated together is difficult if the components’ source code is unavailable [18];
• Making components reusable in any situation increases their complexity, as they must provide functionality to cater for all situations [18];
• Validation of components increases cost;
• The coverage of component validation testing will be unknown for components where no source code is provided, as test cases can only be developed from the interface specification [18]. Again, this is a significant issue for safety-critical software, as the software’s correct behaviour needs to be demonstrated with sufficient confidence;
• Generic components are likely to possess functionality that is not required in a particular application [18]. Unintended execution of this functionality may cause the system to behave in an unintended manner [18]. This again is a significant issue for safety-critical software, as unintended behaviour could be unsafe;
• Assumptions about a component’s operating environment will be made when it is developed [18]. If these assumptions are not available to those reusing the component, it may be used in a manner in which the assumptions do not hold (e.g. the Ariane 5 accident) [18].

2.3.4 Safety-Critical Software Lifecycle Models

This section reviews the literature to identify software lifecycle models that have been proposed for
safety-critical software.

2.3.4.1 Academic Software Safety Lifecycles

A number of software lifecycle models for safety-critical software have been identified from the
academic literature.

Lee et al. [20] describe a software safety lifecycle model used in a nuclear power plant reactor protection system project. This is reproduced in Figure 10. This model is based on the IEEE 1228, IEEE 7-4.3.2, IEC 61513, and IEC 60880 standards [20]. The model includes three categories of lifecycle phase [20]. The first is “Design Process” phases. These relate to activities directly associated with the production of the software [20]. The phases in this category are arranged as a Waterfall model (see section 2.3.3.2). The first two phases in the Waterfall model are part of the system lifecycle rather than the software lifecycle. The Waterfall model ends with a “Code, CT, IT, ST Reports” phase. The meanings of CT, IT and ST are not defined (possibly “Code Test”, “Integration Test” and “System Test”?). The Waterfall model does not represent separate verification phases following the code phase. Such verification phases are required to verify that software meets its design and stakeholder requirements. Representing them as separate phases in the model would be beneficial, as it would show their sequencing with other phases. The second category is “Safety Analysis Process” phases. These relate to qualitative safety analysis activities performed on the outputs of the “Design Process” activities [20]. “Safety Analysis Process” phases are carried out in parallel with “Design Process” phases [20]. The objectives of the “Safety Analysis Process” phases are not clearly explained. From Figure 10, they can be seen to consist of well-known safety analysis techniques. The objective of these techniques is to identify potential contributions to hazards and the causes of these contributions. In the model, different “Safety Analysis Process” phases relate to different software design and implementation phases. It seems therefore that each design and implementation phase is subject to a separate safety analysis. The objective of these analyses is presumably to identify hazards and their causes at this particular level of the design. The third category is “Reliability Analysis Process” phases. These relate to quantitative reliability analysis activities [20]. Again, the purpose of these phases is not explained. The “Software FTA” activity may draw together the hazard causes identified in the individual “Safety Analysis Process” phases and create a single fault tree for the software. This is unclear however.



Figure 10 Software Safety Lifecycle Model from Lee et al. [20]


Rodríguez-Dapena [33] describes a generic “reference process for safety certification”. This is a software lifecycle model in which development and verification software lifecycle phases are arranged in a Waterfall model. In the model, safety analyses are performed in parallel with each of the development and verification phases. Details of these analyses are not described however. Like the model in Lee et al. [20], this model represents safety analyses being performed in parallel with each development phase.

Czerny et al. [15] describe a set of software safety tasks to be performed during the development of safety-critical software. These are reproduced in Table 2. In Table 2, the software safety tasks associated with each software development phase are shown on the same row as the phase. The software safety tasks analyse the outputs from the software development phase tasks and are performed in parallel with them [15]. This is therefore consistent with Lee et al. [20] and Rodríguez-Dapena [33]. Unlike Lee et al. [20], however, details of the analysis processes to be used are not specified in the model. Czerny et al. [15] state that the techniques used to perform each software safety task will vary according to the project’s characteristics [15]. This approach seems more consistent with the role of a software lifecycle model. As identified in section 2.3.1, software lifecycle models define the relationship between broad phases within the software lifecycle, rather than the details of how these phases are performed. Table 2 includes safety activities related to testing. These are not described in Lee et al. [20]. It will be necessary to verify that the software implementation meets its safety requirements. It could be argued however that this is an integral part of the SW Verification and Validation phase, so would not need to be represented as a separate phase on a software lifecycle model.




Table 2 Software Safety Lifecycle from Czerny et al. [15]


Czerny et al. [15] describe an example software lifecycle model for safety-critical software. In this model, the software development phases are arranged in a Waterfall model. The software safety tasks to be performed in parallel with each software development phase are shown alongside it, highlighted in a different colour. This model is similar to that presented by Lee et al. [20].



Figure 11 Software Safety Lifecycle Model from Hawkins [1]

Hawkins [1] describes a software safety lifecycle model. This is reproduced in Figure 11. This model is similar to the system safety lifecycle model described in the MSc course lectures. Software development phases (shown in rectangular boxes) are arranged as a V-model (see section 2.3.3.4). Various safety activities (shown in boxes with rounded corners) are performed in parallel with each development phase. The Hazard Identification and Risk Assessment activities identify and assess hazards associated with the software’s requirements [1]. The PSSA (Preliminary System Safety Assessment) aims to identify how the software design can hazardously fail [1]. Safety requirements are specified to control these failures [1]. The SSA (System Safety Assessment) activity aims to confirm that the software implements its safety requirements [1]. The Common Cause/Common Mode Analysis attempts to identify common cause failure modes [1]. This model is therefore similar to the other models discussed in this section, in that it represents safety activities as being performed in parallel with development and verification activities.

2.3.4.2 IEC61508

IEC61508 Part 3 [8] defines a software lifecycle model called the Software Safety Lifecycle. This is a Development model. It is reproduced in Figure 12.

Figure 12 Software Safety Lifecycle from IEC61508 Part 3 [8]

The Software Safety Lifecycle comprises the IEC61508 Part 3 [8] software process model processes described in section 2.2.3, with the exception of the umbrella Software Quality Management System, Software Verification and Functional Safety Assessment processes. IEC61508 Part 3 [8] states that the Functional Safety Assessment phase may be performed either at the completion of each phase or of a number of phases [6, 8].

Unlike the academic models identified in section 2.3.4.1, the Software Safety Lifecycle model does not show safety analysis activities occurring in parallel with development activities. IEC61508 Part 1 [6] defines a Hazard and Risk Analysis phase as part of the system lifecycle. It states that if decisions are taken during Software Safety Lifecycle phases that change the basis upon which earlier hazard and risk analysis decisions were taken, then a further hazard and risk analysis should be undertaken. Decisions taken during the Software Safety Lifecycle will affect the system-level hazard and risk assessment, as software design decisions have the potential to contribute to system-level hazards. IEC61508 provides no guidance on how the Hazard and Risk Analysis phase should be revised in the light of Software Safety Lifecycle decisions. The academic models provide models that could be followed. IEC61508 Part 3 [8] would benefit from more clarity on this matter, as currently IEC61508 Part 3 [8] does not mention safety analyses.

IEC61508 Part 1 [6] states that the Software Safety Lifecycle is a simplified view of reality and does not show all the iteration that will occur during the actual lifecycle. Given the inherently unpredictable nature of the software lifecycle, iteration will almost certainly occur. The Software Safety Lifecycle diagram does not show any backward transitions between phases. IEC61508 Part 3 [8] states however that if changes are required during the Software Safety Lifecycle, a return is required to a previous phase; all subsequent phases must then be re-entered. The model does therefore allow iteration.

In addition to the sequence of Software Safety Lifecycle phases shown in Figure 12, IEC61508 Part 3 [8] Table 1 specifies further requirements for sequencing. It also specifies requirements for data flow between phases. These are summarised in Table 3.

Predecessor Phase | Successor Phase | Data Flow between Predecessor and Successor
Software safety validation planning | Software design and development: software architecture | Software safety requirements specification
As above | Software design and development: support tools and programming languages | Software safety requirements specification
Software design and development: software architecture | Software design and development: detailed design and development (software system design) | Software architecture design description
As above | Software design and development: code implementation |
Software design and development: support tools and programming languages | Software design and development: detailed design and development (individual software module design) | Support tools and coding standards
Software design and development: detailed design and development (software system design) | As above | Software system design specification
Software design and development: detailed design and development (individual software module design) | Software design and development: code implementation | Software module design specification
As above | Software design and development: software module testing | Software module test specification
Software design and development: code implementation | As above | Source code listing; Code review report
Software design and development: detailed design and development (software system design) | Software design and development: software integration testing | Software system integration test specification
Software design and development: software architecture | Programmable electronics integration (hardware/software) | Software architecture integration test specification; Software/programmable electronics integration test specification
All phases listed above the Software operation and modification procedures in Appendix C | Software operation and modification procedures | Relevant outputs of the predecessor phases
Software safety validation planning | Software safety validation | Software safety validation plan
Software operation and modification procedures | Software modification | Software modification procedures

Table 3 IEC61508 Part 3 [8] Table 1 Software Safety Lifecycle Sequencing Requirements
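
Constraints of this kind lend themselves to direct representation as data. The following minimal sketch (hypothetical names; only two rows of Table 3 shown) records each predecessor/successor pair together with the work products that must flow between them:

```java
import java.util.List;

/**
 * Sketch of the Table 3 sequencing constraints as a data structure
 * (names are hypothetical). Each entry records that a successor phase
 * may not begin until its predecessor has produced the listed outputs.
 */
public class PhaseSequencing {
    record Constraint(String predecessor, String successor, List<String> dataFlow) {}

    public static void main(String[] args) {
        List<Constraint> table3 = List.of(
            new Constraint("Software safety validation planning",
                           "Software design and development: software architecture",
                           List.of("Software safety requirements specification")),
            new Constraint("Software design and development: code implementation",
                           "Software design and development: software module testing",
                           List.of("Source code listing", "Code review report")));

        for (Constraint c : table3) {
            // Print each constraint as "predecessor -> successor requires [...]".
            System.out.printf("%s -> %s requires %s%n",
                              c.predecessor(), c.successor(), c.dataFlow());
        }
    }
}
```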


Figure 13 V-model Implementation of the IEC61508 Part 3 [8] Software Safety Lifecycle

IEC61508 Part 3 [8] illustrates the use of a V-model (see section 2.3.3.4) to implement the Software Safety Lifecycle. This diagram is reproduced in Figure 13. IEC61508 Part 3 [8] states that the depth and number of phases in the V-model can be tailored to suit the project’s safety integrity and complexity.

The Software Safety Lifecycle is a Waterfall-variant model, although it is different from any of the variants identified in section 2.3.3.4. IEC61508 Part 3 [8] does not state whether performing several phases concurrently is permitted or not. It does not preclude this however. As discussed in section 2.3.3.2, the Waterfall model has a number of weaknesses which make its use problematic in certain project circumstances.

2.3.4.3 IEC61511

IEC61511 [39] defines a software lifecycle model called the Application Software Safety Lifecycle. This model is identical to IEC61508 Part 3’s [8] Software Safety Lifecycle (Figure 12), with the following exceptions:

• The Software Safety Lifecycle’s “Software Design and Development” phase is replaced by a “Software Design, Configuration and Simulation” phase. This reflects the widespread use of configurable COTS components in the process sector;
• The Software Safety Lifecycle’s “Software Safety Validation” phase is not included. In IEC61511 [39] this phase is performed in the system lifecycle.

Like IEC61508 Part 3 [8], IEC61511 [39] illustrates the Application Software Safety Lifecycle using a V-model. This illustration is essentially the same as that shown in Figure 13. IEC61511’s [39] requirements relating to process sequencing and the data flow between processes are also very similar to those described in IEC61508 Part 3 [8] (see Table 3). IEC61511 [39] states that application software developed in full variability languages (e.g. Ada, C++) should meet the requirements of IEC61508 Part 3 [8]. The IEC61511 [39] software lifecycle model is therefore very similar to that proposed in IEC61508 Part 3 [8].

2.3.4.4 DO-178B

DO-178B [34] describes relationships between software lifecycle processes, but does not specify the use of a particular software lifecycle model. Private correspondence between the author and a DO-178C committee member indicates that DO-178C’s content regarding software lifecycle models will be unchanged from that in DO-178B [34]. DO-178B [34] does however provide the following guidance relevant to software lifecycle models:

• A software lifecycle is defined by selecting activities for each process, sequencing these activities and allocating responsibility for implementing them;
• Some software lifecycle processes may not be appropriate for some projects (e.g. the software coding process for a COTS component). These may be omitted;
• The usual sequence of activities is: requirements, design, coding and integration;
• The minimum conditions to be met to move from one software lifecycle process to another (transition criteria) should be explicitly specified as part of the software planning process;
• Software lifecycle processes may occur in parallel;
• Software lifecycle processes may be carried out in an iterative manner during the software lifecycle;
• The integration process and software verification process activities will be carried out incrementally;
• The integral processes occur at the same time as the software development processes throughout the software lifecycle.

This guidance is consistent with information on the software lifecycle, software lifecycle processes
and software lifecycle models from other sources described in previous sections.

DO-178B [34] briefly describes the sequence of processes associated with a number of software lifecycle models. These models include prototyping (section 2.3.3.7) and, although not named as such, models that roughly correspond to the Waterfall model (section 2.3.3.2) and CBSE (section 2.3.3.8).

DO-178B [34] describes information on fault containment boundaries, software requirements, the software architecture, and error sources detected or eliminated by the software architecture as being passed to the system lifecycle’s System Safety Assessment Process for analysis. The objective of this analysis is to determine the impact of the software’s design and implementation on system safety [34]. This is therefore similar to the safety analysis activities described in the academic models in section 2.3.4.1.

Unlike IEC61508 Part 3 [8] and IEC61511 [39], DO-178B [34] does not attempt to specify a detailed software lifecycle model. The standard’s users are therefore not provided with a model to use. As will be seen in section 2.3.5 however, a single universally-ideal software lifecycle model does not exist. It might therefore be safer for IEC61508 Part 3 [8] and IEC61511 [39] to include a health warning stating that their software lifecycle models may not be appropriate in certain project circumstances. Currently there is a risk that these standards’ users will adopt these models irrespective of the circumstances, as they feel that by following the standards’ guidance their approach is less likely to be criticised.

2.3.5 Software Lifecycle Model Selection Criteria

This section reviews the literature to identify software lifecycle model selection criteria for both
safety-critical software and software in general.

ISO/IEC TR 19759 [40] identifies references by Comer et al. [22], Davis et al. [27] and Alexander et al. [21] as providing comparisons between different software lifecycle models. Davis et al. [27] and Alexander et al. [21] are discussed below, along with various relevant software engineering standards and other references. Unfortunately it was not possible to obtain Comer et al. [22], which may include additional useful information. Software lifecycle model selection is not mentioned in CAP 670 [14] or Def Stan 00-56 Issue 4 [29, 30].

2.3.5.1 The Importance of Software Lifecycle Model Selection

The selection of a software lifecycle model for a project is an important decision [5]. It impacts on project success by affecting the following:

• The software lifecycle’s overall cost [18];
• The distribution of cost over the software lifecycle [18];
• Software development speed [28];
• Software quality [28, 18];
• The ability to track and control the project [28];
• The project’s overhead [28];
• The level of risk associated with the project [28];
• Client relations [28].

No single software lifecycle model is appropriate for all situations [5, 26, 28, 40, 36, 18]. This is due to the diversity in project, system and organisational characteristics [18]. This fact is reflected by the standards and other references discussed in the following subsections. None of these references prescribe the use of a particular software lifecycle model. Because of this, a software lifecycle model must be selected to suit each project’s characteristics [5, 8, 26, 36].

2.3.5.2 ISO/IEC FDIS 12207:2007

ISO/IEC FDIS 12207:2007 [26] does not mandate the use of a particular software lifecycle model. Neither does it mandate the software lifecycle phases the model should represent. It does state, however, that an appropriate software lifecycle model should be defined for a project [26]. The standard’s users are responsible for selecting this model and for mapping processes, activities and tasks onto it. ISO/IEC FDIS 12207:2007 [26] states that it encourages iteration between lifecycle activities. This implies that the selected model should allow iteration between phases.

ISO/IEC FDIS 12207:2007 [26] states that it prefers the use of software lifecycle models defined by an organisation for use on multiple projects. The definition of such models will occur during the Life Cycle Model Management Process (see Appendix A). Policies and procedures for deploying the models in projects will also be defined during this process. No guidance is provided on the contents of these policies and procedures however. As discussed in section 2.3.5, different software lifecycle models are appropriate in different project circumstances. The likely project circumstances in which the organisationally-defined models will be used therefore need to be anticipated. The selection of a software lifecycle model for a particular project from the organisationally-defined models occurs during the Project Planning Process (see Appendix A). The following factors are identified as influencing software lifecycle model selection:

• Project scope;
• Project size;
• Project complexity;
• Changing needs and opportunities.

2.3.5.3 CMMI-DEV

CMMI-DEV [36] does not prescribe the use of a particular lifecycle model. Its approach to lifecycle model selection for a project is very similar to that of ISO/IEC FDIS 12207:2007 [26].

A set of standard processes and other assets for the organisation are defined in the Organisational Process Definition +IPPD process area (see Appendix B). These other assets include the lifecycle models approved for use by the organisation. The models are defined based on the needs of the organisation and its projects. CMMI-DEV [36] states that these lifecycle models will typically be found in the published literature. The use of organisationally-defined lifecycle models corresponds with ISO/IEC FDIS 12207:2007 [26]. The Organisational Process Definition +IPPD process area also involves establishing criteria and procedures for tailoring the organisation’s standard processes for use in a particular project. CMMI-DEV [36] states that an example of such criteria would be criteria for selecting processes and lifecycle models for a project. It states that an example of such procedures would be a procedure to tailor the processes and lifecycle model to suit the project. CMMI-DEV [36] states that tailoring a lifecycle model for a project could include modifying it, or combining it with elements of another lifecycle model. This is hybridisation, as discussed in section 2.3.3. Lifecycle model tailoring is not clearly described in ISO/IEC FDIS 12207:2007 [26]. It is necessary however if no organisationally-approved model is completely appropriate for a project. This situation could occur if the project’s circumstances differ from those of the projects the organisation usually carries out. Such a tailoring process in ISO/IEC FDIS 12207:2007 [26] could take place during the Project Planning Process.

CMMI-DEV [36] does not specify lifecycle model selection criteria, but states that the following project characteristics may affect lifecycle model selection:

• Project size;
• Staff experience and familiarity with the lifecycle model;
• Project constraints such as time and acceptable defect levels.

The processes and lifecycle model for a particular project are selected from the standard assets and tailored to suit the project’s characteristics in the Integrated Project Management +IPPD process area (see Appendix B). The combination of the selected, tailored processes and lifecycle model is called the project’s defined process. The project’s defined process should satisfy the project’s contractual and operational needs, opportunities, and constraints [36].
2.3.5.4 IEEE 1074-2006

ISO/IEC FDIS 12207:2007 [26] states that IEEE 1074-2006 [13] may be useful in implementing its Life Cycle Model Management Process (see section 2.3.5.2). Unfortunately IEEE 1074-2006 [13] could not be obtained for this project. The IHS website [17] briefly describes its contents however. It states that IEEE 1074-2006 [13] defines a methodology for developing a software lifecycle process for a particular project (referred to as a software project life cycle process, or SPLCP) [17]. This methodology involves first selecting an appropriate software lifecycle model for the process (referred to as a software project life cycle model, or SPLCM) [17]. Lifecycle processes are then tailored to the software lifecycle model for the project. The result of this process is the SPLCP [17].

This approach appears similar to that described in ISO/IEC FDIS 12207:2007 [26] and CMMI-DEV [36]. IEEE 1074-2006 [13] does not mandate the use of a particular software lifecycle model [17].

2.3.5.5 TickIT Guide

The TickIT Guide [5] does not mandate the use of a particular software lifecycle model. It does state, however, that an appropriate software lifecycle model should be defined for a project [5]. It states that this model should be tailored to the project’s characteristics and documented [5]. Lifecycle model tailoring is consistent with CMMI-DEV [36]. The TickIT Guide [5] states that the software lifecycle models used by an organisation will typically be documented as part of the organisation’s quality management system. This is consistent with the use of organisationally-defined models in ISO/IEC FDIS 12207:2007 [26] and CMMI-DEV [36]. It also states that if an organisation uses more than one software lifecycle model, criteria for the selection of a particular model should be defined [5]. It identifies the following as factors in the selection of a model [5]:

• The size and type of system being developed;
• The project requirements as understood at the start of the project.

The TickIT Guide [5] states that in some situations a combination of several models may be appropriate (i.e. hybridisation).

2.3.5.6 IEC61508

IEC61508 Part 3 [8] allows the use of a software lifecycle model other than the Software Safety Lifecycle (see section 2.3.4.2), provided the following conditions are met:

• The lifecycle must be planned during the Software Quality Management System process (IEC61508 Part 3 [8] clause 7.1.2.1);
• “Quality and safety assurance procedures shall be integrated into safety lifecycle activities” (IEC61508 Part 3 [8] clause 7.1.2.2);
• Each software lifecycle phase must be divided into elementary activities, with the scope, inputs and outputs for each phase specified (IEC61508 Part 3 [8] clause 7.1.2.3);
• “If at any stage of the software safety lifecycle, a change is required pertaining to an earlier lifecycle phase, then that earlier lifecycle phase and the following phases shall be repeated” (IEC61508 Part 3 [8] clause 7.1.2.8).

The last point implies the software lifecycle model should allow iteration. The software lifecycle model must also allow the objectives and requirements of all IEC61508 Part 3 [8] clauses to be met [6]. If any of these objectives and requirements is not met, this must be justified [6]. IEC61508 Part 3 [8] also states that the software lifecycle model may be customised to suit the project and organisation [8].

IEC61508 therefore provides only limited guidance on software lifecycle model selection. Given the significance of this decision, more guidance, or references to information sources where more guidance can be found, would be beneficial.

2.3.5.7 IEC61511

IEC61511 [39] states that a software lifecycle model should be specified for developing application software. It states that this should be done as part of the safety planning activity. This model must integrate with the system lifecycle model. No software lifecycle model selection criteria are mentioned.

2.3.5.8 DO-178B

DO-178B [34] does not specify the use of a particular software lifecycle model. It states that a number of separate and different software lifecycle models may be used to develop different components of a single software product [34]. This seems reasonable if the components have few inter-dependencies and different characteristics, as a single model may not be appropriate for all. It would increase project management complexity however. The use of more than one software lifecycle model for a project is not described by ISO/IEC FDIS 12207:2007 [26], CMMI-DEV [36] or the TickIT Guide [5]. This may however be due to the definition of the word project. If the development of each component is regarded as a project in its own right, then this approach is consistent with these references.

DO-178B [34] states that appropriate software lifecycle models should be selected during the software planning process. These should be appropriate to the project’s characteristics. It identifies the following as factors that should influence software lifecycle model selection:

• System functionality;
• System and software complexity;
• Software size;
• Requirements stability;
• Use of previously developed results;
• Development strategies;
• Hardware availability.

DO-178B [34] does not explain what is meant by development strategies. This could mean the delivery strategy (i.e. conventional or incremental delivery). Alternatively, it could mean the choice of methods and tools.

2.3.5.9 McConnell

McConnell [28] lists a number of questions that he states ought to be answered when selecting a software lifecycle model for a project. He also defines a set of software lifecycle model selection criteria against which the answers to these questions can be compared. A single criterion corresponds to each question (with the exception of the “Has low overhead” criterion). McConnell’s questions are reproduced below:

• “How well do my customer and I understand the requirements at the beginning of the project? Is our understanding likely to change significantly as we move through the project?”;
• “How well do I understand the system architecture? Am I likely to need to make major architectural changes midway through the project?”;
• “How much reliability do I need?”;
• “How much do I need to plan ahead and design ahead during this project for future versions?”;
• “How much risk does this project entail?”;
• “Am I constrained to a predefined schedule?”;
• “Do I need to be able to make midcourse corrections?”;
• “Do I need to provide my customers with visible progress throughout the project?”;
• “Do I need to provide management with visible progress throughout the project?”;
• “How much sophistication do I need to use this lifecycle model successfully?”.

The answers to the question “Do I need to be able to make midcourse corrections?” and the first two questions in the list are linked.

McConnell’s [28] selection criteria and his assessment of a number of software lifecycle models against these criteria are reproduced in Appendix F (the names of some of the models have been changed to be consistent with those used in section 2.3.3). McConnell’s [28] COTS Software model in Appendix F relates to using COTS software instead of bespoke software. McConnell [28] does not provide details of this model however. It is unclear how similar this is to the CBSE model described in section 2.3.3.8. McConnell [28] uses the undefined qualitative terms “Poor”, “Fair” and “Excellent” to rate each model against each criterion. He does not describe how these values have been assigned to each model however. He states that using less coarsely-grained terms would be meaningless. Given that these assessments do not appear to be based on quantitative data derived from the real use of models, this seems reasonable.

McConnell [28] states that development speed will increase the more closely the software lifecycle conforms to the Waterfall model, provided that this can be used effectively. He also suggests that weaknesses identified in a particular model could be addressed by hybridising it with other models.

Some of the selection criteria proposed by McConnell [28] in Appendix F are only relevant in certain project circumstances (e.g. “Works with poorly understood requirements” is only relevant if the requirements are poorly understood). Each of the proposed selection criteria is relevant in at least some project circumstances however. While software’s reliability is influenced by the selected software lifecycle model, it is also affected by the methods and tools used within each software lifecycle model phase and by the umbrella activities, such as quality assurance. The criterion “Produces highly reliable system” might therefore be better phrased “The model is appropriate for producing highly reliable software”.

2.3.5.10 Alexander et al

Alexander et al. [21] state that software lifecycle models are often selected on an ad hoc basis using a set of undocumented and unjustified criteria. Given the general lack of detailed guidance provided by the references reviewed in this literature survey, this seems a reasonable statement.

Alexander et al. [21] propose twenty “criteria” for selecting an appropriate software lifecycle model for a project. These criteria are actually a set of project characteristics (e.g. “Problem Complexity”). They state that these criteria are derived from a literature review and informal observation of real projects [21]. For a particular project, one of three possible qualitative values is assigned to each characteristic (e.g. the “Problem Complexity” criterion can be assigned the value “Simple” to indicate that the problem to be solved by the project is of low complexity) [21]. The criteria fall into five categories:

• Personnel: criteria relating to the software’s users and developers;
• Problem: criteria relating to the problem the software solves;
• Product: criteria relating to the software itself;
• Resource: criteria relating to the resources available to develop the software;
• Organisational: criteria relating to the impact of organisational policies on software development.

The criteria and their possible values, reproduced from Alexander et al. [21], are listed in Table 19 in Appendix E.

In Alexander et al. [21], an actual criterion against which a software lifecycle model is assessed is a question of the form: is software lifecycle model X appropriate for a project where criterion Y equals value Z? If the answer is yes, the model receives a score of 1 for that criterion; otherwise it receives a score of 0. These answers are made by considering the needs of the project and the characteristics of the software lifecycle model under consideration. Alexander et al. [21] identify a number of key factors to be considered when answering these questions:

• The opportunity to modify, validate and/or verify the software;
• The degree of functionality delivered and when;
• The level of activity with respect to time.

The scores for each particular software lifecycle model are recorded in an applicability matrix for that model. Alexander et al. [21] provide applicability matrices for the following models: Conventional Development; Incremental Development; Evolutionary Development; Waterfall; Hybrid Prototyping; Operational Specification. These are shown in Figure 14 in Appendix E. It appears that the appropriateness of the different software lifecycle models in the different project circumstances has been subjectively assessed by considering the characteristics of the different models and what is needed in each circumstance. Some of the conclusions reached in these tables appear questionable. For instance, Alexander et al. [21] conclude that the Conventional Development Delivery Strategy model is only appropriate for projects with stakeholders who are “Expert” in the software’s application domain. The implication of this statement is that an incremental delivery strategy is always required for stakeholders who are not “Expert”. The requirement for “Expert” users does not take into account the possibility of using requirements elicitation and analysis techniques in combination with the software lifecycle model to help capture requirements.
To select a software lifecycle model for a particular project, each Alexander et al. [21] criterion is assigned the value most appropriate to it for the project. The score (either 0 or 1) for this value of the criterion in the model’s applicability matrix is then noted. The appropriateness of the software lifecycle model to the project is then quantified by summing the model’s scores for each criterion. The model with the highest score is the most appropriate (a minimal sketch of this scoring scheme is given after the list below). This approach is simple and mechanistic, which increases the likelihood of its use on real projects. It has several shortcomings however:

• It does not help identify which, from a range of possible alternative models, will be the most effective in a particular set of project circumstances. If two models satisfy a particular criterion, they will both receive a score of 1 for that criterion. This is irrespective of whether one model satisfies the criterion more effectively than the other. Additionally, the effect of a model on the achievement of a criterion may be negative as well as positive. Negative effects are not taken into account by the scheme. This is recognised by Alexander et al. [21], who suggest that their approach may be improved by assigning a negative score to a criterion in this situation, which seems reasonable;
• All criteria are weighted as having equal significance. No justification is given for this however. Looking at the criteria, it would seem that some are more significant than others. For example, if a model fails to meet the “Staffing Profile” criterion, its use is probably not feasible, as there will be insufficient staff available to carry it out within project timescales. If a model fails to meet the “Developers’ Experience in Application Domain” criterion however, it may be possible to mitigate this by providing developer training;
• The scheme assesses whether various software lifecycle models always meet each criterion in three different project situations (e.g. for the “Users’ Experience in Application Domain” criterion, these situations are “Novice”, “Experienced” and “Expert”). This approach leads to general conclusions such as “the Waterfall model is not appropriate for projects with novice users”, “the Waterfall model is appropriate for projects with simple products”, etc. These conclusions are intended to be applicable to all projects. Real software projects are highly complex however. Making such general statements without considering the detailed characteristics of each real project seems an over-simplification. For instance, the statement “the Waterfall model is not appropriate for projects with novice users” may not be true if users are available to work closely with the experienced developers to help capture requirements. The scheme also makes these statements based on the assessment of individual criteria independently of the assessment of other criteria. For instance, “the Waterfall model is appropriate for projects with simple products” may be valid if its requirements are stable, but not if its requirements change rapidly;
• The effectiveness of a software lifecycle model under a particular set of project circumstances will be influenced by the methods and tools that are used in combination with it. This is not taken into account in this scheme. This is relevant to the following criteria: “Users’ Experience in Application Domain”; “Users’ Ability to Express Requirements”; “Developers’ Experience in Application Domain”; “Maturity of the Application”; “Problem Complexity”; “Frequency Of Changes”; “Magnitude Of Changes”; “Product Complexity”; “Funds Profile”; “Funds Availability”; “Staffing Profile” and “Staff Availability”. The selected methods and tools need to be taken into account when assessing whether these criteria are met for the following reasons:

  • They affect the time and effort associated with making changes during the lifecycle (i.e. rework). This will affect the ability to meet the “Frequency Of Changes”, “Magnitude Of Changes”, “Funds Profile”, “Funds Availability”, “Staff Availability” and “Staffing Profile” criteria. Generally, the time and effort required for rework should tend to decrease with the use of methods and tools that facilitate: the automated or semi-automated transformation of lifecycle artifacts; identifying errors early in the lifecycle; identifying and analysing stakeholder requirements early in the lifecycle; and assessing the impact of changes on various lifecycle work products. For instance, a late lifecycle requirements change will involve more time and effort if specifications are created in natural language and code is written manually than in a situation where requirements and designs are created in a modelling environment which supports automated or semi-automated transformation between models and automated code generation (i.e. a form of Model Driven Development; note that Model Driven Development is a software engineering paradigm principally relating to the methods and tools used during the software lifecycle, and is not a software lifecycle model according to the definitions listed in section 2.3.1). Time and effort will also vary with the rigour of software lifecycle processes. For example, rewriting module test cases due to a design change will take longer if MC/DC coverage is required than if only complete statement coverage is required. In the former situation, a greater number of test cases will be required;
  • They affect the level of uncertainty associated with stakeholder requirements and the software’s technical solution. This will affect the ability to meet the “Users’ Experience in Application Domain”, “Users’ Ability to Express Requirements”, “Developers’ Experience in Application Domain” and “Maturity of the Application” criteria. For example, the use of effective methods during the lifecycle’s requirements analysis phase (e.g. Throwaway Prototyping, simulation, domain modelling, requirements elicitation methods) can help increase understanding of requirements early in the lifecycle, reducing the risk of late requirements changes;
  • They can help manage complexity. This will affect the ability to meet the “Problem Complexity” and “Product Complexity” criteria. For instance, the use of design methods that promote decomposing the design into cohesive, loosely coupled modules (e.g. object-oriented design) will help reduce complexity by decomposing the problem.

Software lifecycle model selection criteria therefore ought to take into account the processes, methods and tools that will be used during the project. This is not done by any of the selection criteria identified in this literature survey however.
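
To make the basic scoring scheme concrete, the sketch below sums applicability-matrix entries for two candidate models. The criteria, values and matrix entries are hypothetical placeholders rather than Alexander et al.’s [21] published matrices, and none of the improvements discussed above (negative scores, weighting) are applied.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of Alexander et al.'s scoring scheme. The criterion
 * names, values and matrix entries below are hypothetical placeholders;
 * the real scheme uses twenty criteria and published applicability
 * matrices.
 */
public class LifecycleModelSelector {

    /** Sum the matrix entries (1 = appropriate, 0 = not) for a project. */
    static int score(Map<String, Integer> matrix, Map<String, String> project) {
        int total = 0;
        for (Map.Entry<String, String> c : project.entrySet()) {
            // Look up the entry for this criterion/value pair; a missing
            // entry is treated as 0 (model not appropriate).
            total += matrix.getOrDefault(c.getKey() + "=" + c.getValue(), 0);
        }
        return total;
    }

    public static void main(String[] args) {
        // Values assigned to each criterion for a particular project.
        Map<String, String> project = new LinkedHashMap<>();
        project.put("Problem Complexity", "Simple");
        project.put("Frequency Of Changes", "Frequent");

        // Hypothetical applicability matrices for two candidate models.
        Map<String, Integer> waterfall = Map.of(
                "Problem Complexity=Simple", 1,
                "Frequency Of Changes=Frequent", 0);
        Map<String, Integer> evolutionary = Map.of(
                "Problem Complexity=Simple", 1,
                "Frequency Of Changes=Frequent", 1);

        // The model with the highest total score is selected.
        System.out.println("Waterfall: " + score(waterfall, project));
        System.out.println("Evolutionary: " + score(evolutionary, project));
    }
}
```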

An additional shortcoming of Alexander et al.’s [21] approach for safety-critical software is that the scheme always selects a software lifecycle model for a project, irrespective of how poorly it scores. A model need only score higher than the other assessed models for it to be selected. This is unlikely to be appropriate for safety-critical software. None of the safety references reviewed in this literature survey describes a minimum set of attributes for a software lifecycle model for safety-critical software. Common sense dictates that some should exist however, as many of these criteria relate to product quality. For instance, it would seem inappropriate to develop IEC61508 SIL 4 software using a software lifecycle model that only scores 1 or 2 points in the scheme (i.e. fails to satisfy most criteria). This raises the questions: is there a minimum set of software lifecycle model attributes for safety-critical software? If so, what are they?

2.3.5.11 Davis et al

Davis et al. [27] propose the following set of metrics for comparing alternative software lifecycle models:

• Shortfall: the difference, at a particular point in time, between the user’s actual needs and the user needs the software meets;
• Lateness: the time that elapses between a new user need arising and the software satisfying that need;
• Adaptability: the rate at which the software meets new user needs;
• Longevity: the length of time during which it is viable to modify the software. This is a measure of the length of time from the software first entering service to it being withdrawn;
• Inappropriateness: the integral of shortfall over a period of time.
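
The last metric can be restated symbolically. The notation here is introduced for illustration only (Davis et al. [27] define Inappropriateness in words): if $s(t)$ denotes the shortfall at time $t$, then over the period $[t_1, t_2]$:

\[
\mathrm{Inappropriateness}(t_1, t_2) = \int_{t_1}^{t_2} s(t)\,\mathrm{d}t
\]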

Davis et al. [27] use these metrics to compare the use of the Waterfall model against the following: Throwaway Prototyping; Incremental Development; Evolutionary Prototyping; Reusable Software; and Automated Software Synthesis. Davis et al. [27] do not describe how they obtain the values assigned to the metrics however. The assigned metrics generally reflect some of the software lifecycle models’ strengths and weaknesses discussed in section 2.3.3.

Davis et al.’s [27] approach has a number of shortcomings. Firstly, it makes general statements about the relative effectiveness of various software lifecycle models without considering the project circumstances in which they are applied. As discussed in section 2.3.5.1, project circumstances need to be taken into account when selecting a software lifecycle model. Secondly, while their statements about the relative strengths and weaknesses of various models seem valid, it is unclear upon what evidence the values they assign to the metrics are based. Indeed, even if real project data were available, it is questionable whether the comparisons made using this data would be meaningful, as project details are not taken into account. For example, the Lateness metric for a project would be increased if there were a delay in the schedule due to integration testing being delayed by another stakeholder. This issue is completely unrelated to the chosen software lifecycle model however. Unlike Alexander et al.’s [21] approach, Davis et al.’s [27] approach (in principle) provides a means of comparing particular aspects of the relative effectiveness of different models. The very broad-brush qualitative approach to comparison adopted by McConnell [28] (see section 2.3.5.9) would seem a better approach however, given the level of uncertainty involved.

2.3.5.12 Comparison of Selection Criteria

Table 4 compares the software lifecycle model selection criteria identified in the references in section 2.3.5. In Table 4, McConnell’s [28] “Requires little manager or developer sophistication” criterion is interpreted as being equivalent to Alexander et al.’s [21] “Developers’ Software Engineering Experience” criterion. Also, criterion LS27 “The model allows the integration of safety assurance procedures” is only applicable to safety-critical software.

[Table 4 marks each criterion below against the sources that identify it: ISO/IEC FDIS 12207:2007 [26]; CMMI-DEV [36]; IEEE 1074-2006 [13]; The TickIT Guide [5]; DO-178B [34]; IEC61508 [8]; McConnell [28]; Davis et al. [27]; Alexander et al. [21]. The individual tick marks are not reproduced here.]

ID | Criterion
LS1 | The model allows iteration between software lifecycle phases
LS2 | The model is organisationally defined
LS3 | The model is appropriate for the project’s scope
LS4 | The model is appropriate for the project’s size
… | …understood software architecture
LS22 | The model facilitates changing the software during the lifecycle
LS23 | The model is appropriate to the project’s level of reuse
LS24 | The model is appropriate to the availability of hardware
LS25 | The model is appropriate to the selected development strategies
LS26 | The model allows the integration of appropriate quality assurance procedures
LS27 | The model allows the integration of safety assurance procedures
LS39 | The model is appropriate to the criticality of the software’s human computer interface
LS40 | The model is appropriate to the availability of project budget with respect to time
LS41 | The model is appropriate to the available project budget

2.4 Conclusions

The following key conclusions can be drawn from the literature survey:

• Software lifecycle models are high-level abstractions of the real software lifecycle. They represent the sequence and interrelationship of broad phases within the lifecycle. These phases represent processes directly concerned with the production and verification of software. Software lifecycle models typically do not represent processes that occur continuously during the software lifecycle (i.e. umbrella processes). They also do not represent the details of how each phase is performed (i.e. the methods and tools to be used);
• Software lifecycle models principally act as tools to help manage the software lifecycle. They also promote a common understanding of the lifecycle amongst stakeholders and help to promote process consistency and improvement;
• A number of alternative software lifecycle models have been proposed. Additionally, two or more models can be hybridised. Each model has its own strengths and weaknesses. Some models are more appropriate in certain project circumstances than others. No single model is appropriate to all circumstances. The methods and tools to be used in combination with a software lifecycle model affect its effectiveness. A software lifecycle model therefore must be selected for each project based on the project’s characteristics and the methods and tools to be used;
• The selection of a software lifecycle model for a project is an important decision. The use of an inappropriate model can be detrimental to project success and software quality;
• A number of software lifecycle model selection criteria have been proposed. No single reference includes a comprehensive list of all criteria however. Only one safety-specific selection criterion was identified. No references have been identified that systematically justify which software lifecycle model selection criteria are applicable to safety-critical software;
• There are no widely-publicised systematic processes for selecting a software lifecycle model. A systematic process has been proposed in an academic paper (Alexander et al. [21]) but this is unlikely to be well-known to practitioners. This process has good features but also limitations.



3 Proposal

3.1 Scope

This section defines the scope of this proposal.

As mentioned in section 2.4, the literature survey failed to identify any references that systematically justify which software lifecycle model selection criteria are applicable to safety-critical software. The literature survey did reveal a systematic process for selecting a software lifecycle model for any piece of software (Alexander et al. [21]). However, a number of limitations were identified with this process. Additionally, this process did not include a number of valid selection criteria identified from other references. Finally, no information was found in the literature relating to the applicability of this process to safety-critical software.

This proposal aims to start addressing these issues by systematically identifying a set of software
lifecycle model selection criteria for safety-critical software. This represents a contribution to the
field of safety-critical systems engineering. Due to this project’s time and space constraints, the
development of a process for applying these criteria that addresses the shortcomings of Alexander
et al’s [21] approach is further work.

The scope of the project’s objective has been further limited as follows, to fit within the project’s
constraints:

 The proposed selection criteria only relate to Development Models (i.e. software lifecycle
models representing the activities to be performed to deliver an increment of the software. See
section 2.3.1). Delivery Strategy Models are not addressed. This is further work;
 Selection criteria that are specific to safety-critical software have only been identified from a
systematic analysis of the software lifecycle’s requirements phase (i.e. the lifecycle phase
concerned with the capture, analysis and specification of the software’s stakeholder
requirements). Further selection criteria specific to safety-critical software may be relevant to
other software lifecycle phases. Identifying these criteria is further work however.

Because of this proposal’s limited scope, the proposed set of selection criteria will be incomplete.
This proposal demonstrates a method that can be used to produce a complete set however.

3.2 Method
3.2.1 Overview

This section describes the method used to identify the proposed selection criteria. The method is as follows:

1. Systematically identify the set of attributes that must be possessed by safety-critical software
and whose achievement is affected by the selected software lifecycle model. For brevity, such
attributes will be referred to as safety-critical software attributes.

Safety-critical software needs to possess attributes that are not necessarily required by non-
safety-critical software (e.g. acceptable safety in a particular operating context). The selection
criteria for a software lifecycle model for safety-critical software need to ensure that the
selected model allows these attributes to be achieved. A necessary starting point for
identifying these selection criteria is therefore to identify the attributes that safety-critical
software must possess.

Within the scope of this proposal, only those safety-critical software attributes associated with
the software lifecycle’s requirements phase have been identified. This set of attributes needs
to be complete, correct and consistent with respect to this phase.

The method for identifying safety-critical software attributes is described in detail in section
3.2.2.

2. Define a set of software lifecycle model selection criteria for a safety-critical software project.
This set shall be composed of the following:

 The selection criteria for all types of software identified in the literature survey that are
judged by the author to be applicable to safety-critical software. The literature survey
selection criteria are listed in Table 4 in section 2.3.5.12. If necessary, these selection
criteria will be modified to improve them. The selection of these criteria and any
modifications will be justified;
 New selection criteria for all types of software identified by the author. The inclusion of any
new criteria will be justified;
 The selection criteria derived from the proposed safety-critical software attributes identified
in step 1. The relationship between a selection criterion and the safety-critical software
attribute from which it derives will be as follows: if a software lifecycle model satisfies the
criterion, then its use will allow software to be developed that possesses this safety-critical
software attribute (a minimal sketch of this relationship is given below).

The method for defining selection criteria is described in detail in section 3.2.3.
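
To make this derivation relationship concrete, the following minimal Python sketch models it under illustrative assumptions: the Attribute and Criterion classes, the identifiers and the enabled_attributes helper are hypothetical and form no part of the proposed criteria set.

from dataclasses import dataclass

@dataclass(frozen=True)
class Attribute:
    ident: str  # e.g. "AT3"
    name: str   # e.g. "Traceable"

@dataclass(frozen=True)
class Criterion:
    ident: str                      # hypothetical criterion ID
    text: str
    derived_from: Attribute | None  # None for general-purpose criteria

def enabled_attributes(satisfied: set[str], criteria: list[Criterion]) -> set[str]:
    """Safety-critical attributes whose derived criteria the model satisfies."""
    return {c.derived_from.ident
            for c in criteria
            if c.derived_from is not None and c.ident in satisfied}

at3 = Attribute("AT3", "Traceable")
criteria = [Criterion("C-1", "General criterion", None),
            Criterion("C-2", "Supports top-down requirement derivation", at3)]
print(enabled_attributes({"C-2"}, criteria))  # {'AT3'}

The implication is deliberately one-way: a model that satisfies the criterion allows, but does not by itself guarantee, software possessing the attribute.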

3.2.2 Identifying Safety-critical Software Attributes

This section describes in detail the approach taken to identify safety-critical software attributes (i.e.
step 1 of the method described in section 3.2.1). Possible alternative approaches are described
and the reasons for not adopting them stated.

The objective of this step is to identify a complete, correct and consistent set of safety-critical
software attributes associated with the software lifecycle’s requirements phase. A pertinent
question when considering how to achieve this objective is: is there such a thing as a complete,
correct and consistent set of safety-critical software attributes? Consideration of the current safety
standards mentioned in the literature survey indicates that there is not. Each of these standards
describes a different set of attributes (although they share many in common). For example, CAP
670 [14] requires arguments and evidence to be available to support five specific claims relating to
the attributes of safety-critical software. The availability of these arguments and evidence is
therefore a required attribute of software developed to comply with CAP 670 [14]. These arguments
and evidence are not required for safety-critical software developed to comply with IEC61508 Part
3 [8] however. The required attributes of safety-critical software also vary according to the legislation
and regulations with which it must comply. For example, safety-critical software used in the UK for
work-related activities must comply with the HSWA. To do this, the safety risks it poses must be
reduced to a level that is either broadly acceptable and ALARP or tolerable and ALARP. Such risk
reduction is therefore a required attribute. In some legal jurisdictions outside the UK, the HSWA’s
ALARP principle may not apply however; in this case, reducing risk in this manner is not a required
attribute.

Another pertinent question is: from what sources of information can safety-critical software
attributes be identified? Current safety standards, legislation and regulations are all potential
sources. They are unlikely to describe all attributes however. The reasons for this are:

 Legislation is likely to be very high-level and technology independent. Therefore detailed
safety-critical software attributes are unlikely to be described;
 Some current safety standards relevant to software were written some years ago. Also,
standards often have relatively long gestation periods. Given that software engineering is a
rapidly evolving discipline and safety-critical systems engineering is a developing field, these
standards will not represent the state-of-the-art;
 Standards’ contents represent a consensus view. Some desirable attributes may be omitted in
a standard’s drafting process. These attributes may be recorded in other sources however;
 Some safety standards are specific to particular application domains (e.g. CAP 670 [14] and
DO-178B [34]). They may not identify attributes required in other domains.

Other potential sources of information regarding safety-critical software attributes include:

 Relevant peer-reviewed literature (e.g. academic journal papers);
 Relevant non-peer-reviewed literature (e.g. books, web pages, user manuals);
 The knowledge and experience of safety-critical systems engineering practitioners.

These sources may describe safety-critical software attributes not described in safety
standards, legislation and regulations, as:

 Academic journal papers will describe the state-of-the-art. Because of this, they may contain
safety-critical software attributes identified since the standards were written;
 Surveying safety-critical systems engineering practitioners draws on the respondents’
experience of real projects. Responses should reflect current practice, rather than a
consensus opinion reached at some point in the past (as is the case with standards). In
particular, useful information might be obtained regarding the application of new techniques
and technologies. Information from practitioners may also highlight practical problems
associated with achieving the safety-critical software attributes identified in standards,
legislation, regulations and academic journal papers.

For there to be confidence in the completeness, correctness and consistency of the proposed set
of safety-critical software attributes, these attributes need to be identified from all the sources
mentioned above. Within the constraints of this project however, it is impractical to identify
attributes from all these sources. A review of standards, legislation and regulations seems to be a
good starting point, as the attributes described in these sources need to be achieved in real
projects. Software lifecycle model selection criteria derived from these attributes would be of
practical benefit to practitioners. Additionally, current safety standards contain a large amount of
information relevant to safety-critical software. A standards review would therefore be an efficient
way of identifying a large number of attributes. Also, within the time available for this proposal, it is
impractical to survey practitioners. Examining all current safety standards, legislation and
regulations relevant to software is also impractical within the project’s constraints.
Because of this, only the following current safety standards will be reviewed: IEC61508 Part 3 [8];
DO-178B [34]; Def Stan 00-56 Issue 4 [29, 30]; CAP 670 [14]. IEC61511 [39] will not be reviewed due to
time constraints; it was not selected for review as it is similar to IEC61508 Part 3 [8] (of which it is a
sector-specific implementation).

This approach is likely to miss relevant safety-critical software attributes. It demonstrates a
method that could be applied to other sources of information however.

3.2.3 Identifying Selection Criteria

This section describes in detail the approach taken to identify selection criteria for a software
lifecycle model for a safety-critical software project (i.e. step 2 of the method described in section
3.2.1).

The method for defining the selection criteria is described in section 3.2.1. This is fairly
straightforward and needs little further elaboration. To be of practical use to practitioners on real
safety-critical software projects, the selection criteria need to be quick and simple to use. They
must also, when applied, give a valid result. An overly-complex or time-consuming solution is
unlikely to be used, or may be used incorrectly. Selection criteria that produce incorrect results
could lead to an inappropriate software lifecycle model being selected. This could affect the
project’s timescales, budget and product quality, which could potentially adversely affect safety.
Because of this, the set of proposed selection criteria needs to have the following attributes:

1. The proposed selection criteria can be applied by an experienced, qualified software engineer;
2. The proposed selection criteria should contain as few criteria as possible;
3. The proposed selection criteria should be self-contained, so that there is no need to reference
other material;
4. The proposed selection criteria should require as little reading as possible;
5. The method for applying the criteria should be as simple and intuitive as practicable.

Point 5 is outside this proposal’s scope due to the project’s time constraints. Defining this method
is further work.

3.3 Proposed Safety-Critical Software Attributes

This section proposes a set of safety-critical software attributes according to the method described
in section 3.2.2.

The proposed attributes are listed in Table 5. In Table 5, a tick indicates that an attribute is
described in a particular standard (a blank box indicates that the attribute is not described). None
of the standards being considered state whether particular software attributes affect software
lifecycle model selection. The attributes in Table 5 have therefore been selected based on the
author’s judgement. Justifications for these decisions are given in Table 6. Table 6 also lists the
sections of the standards in which each attribute is described. The section numbers are given in
brackets.

ID  Attribute                                         IEC61508 Part 3 [8] | DO-178B [34] | CAP 670 SW 01 [14] | Def Stan 00-56 Issue 4 [29, 30]
AT1 Hierarchical
AT2 Implementation independent
AT3 Traceable
AT4 Competent Staff
AT5 Derived from System-level Safety Requirements
AT6 Hazardous Failure Modes

Table 5 Proposed Safety-critical Software Attributes (the per-standard ticks are justified attribute by attribute in Table 6)

A word of explanation is required regarding the terminology used by the different standards before
reading Table 6. IEC61508 Part 3 [8] calls the software lifecycle’s requirements phase the Software
Safety Requirements Specification phase. In DO-178B [34], this phase is called the Software
Requirements Process. In IEC61508 Part 3 [8], the requirements specified in the Software Safety
Requirements Specification phase are called software safety requirements [8]. In DO-178B [34], these
requirements are called high-level requirements [34]. In IEC61508 Part 3 [8], software safety
requirements are specified in the software safety requirements specification. In DO-178B [34], the
high-level requirements are specified in the software requirements data [34]. In CAP 670 [14], these
requirements are called software safety requirements.

Attribute: AT1 Hierarchical
DO-178B [34] states that the high-level requirements should be arranged in a hierarchy with solution-
specific requirements that form the software design (5.0). It also states that requirements at lower
levels in the hierarchy should meet requirements at higher levels or derive from design choices
(5.0).

Def Stan 00-56 Issue 4 Part 1 [29] states that some system safety requirements will be derived from
other safety requirements (11.1). The Def Stan 00-56 Issue 4 Part 2 [30] guidance states that safety
requirements allocated to the software should be “progressively refined to a level of detail that is
sufficient to specify and perform verification and validation of the software” (16.3) and that “high-
level requirements have been met given that requirements derived from them have been met”
(11.2).

This attribute is related to AT3, which calls for traceability between requirements; traceability
implies a hierarchy. The selected software lifecycle model needs to support the specification of
safety requirements in the hierarchy in a top-down manner. For example, in DO-178B [34], for
solution-specific requirements to be traceable to high-level requirements, the high-level
requirements need to be specified prior to the solution-specific requirements that derive from them.
To achieve this, the software lifecycle’s requirements phase needs to be performed to identify high-
level requirements prior to performing the software lifecycle’s design phase. This is a rather
obvious statement however, and no effective software lifecycle model would sequence the design
phase prior to the requirements phase. If this were done, the resulting software would have a high
likelihood of failing to satisfy stakeholder needs.
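
As an illustration of this top-down ordering, the hypothetical Python sketch below records the lifecycle step at which each requirement was specified (an assumed counter, not a notion taken from DO-178B [34]) and flags any requirement specified no later than a requirement it derives from.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    ident: str
    specified_at: int  # lifecycle step at which the requirement was written
    parents: list["Requirement"] = field(default_factory=list)

def ordering_violations(reqs: list[Requirement]) -> list[str]:
    """Requirements specified before (or with) a parent they derive from."""
    return [f"{r.ident} does not follow its parent {p.ident}"
            for r in reqs
            for p in r.parents
            if r.specified_at <= p.specified_at]

hlr = Requirement("HLR-1", specified_at=1)                 # requirements phase
llr = Requirement("LLR-1", specified_at=2, parents=[hlr])  # design phase
print(ordering_violations([hlr, llr]))  # [] -- top-down order respected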

The achievement of this attribute is therefore affected by software lifecycle model selection. In
practice however, a useful software lifecycle model selection criterion will not be derived from this
attribute, as all effective software lifecycle models will allow this attribute to be achieved.
Attribute: AT2 Implementation independent
DO-178B [34] states that the high-level requirements should not describe design or verification detail
except for specified and justified design constraints (5.1.2).

The Def Stan 00-56 Issue 4 Part 2 [30] guidance describes safety requirements as being independent
of the system’s solution (11.2.1).

The achievement of this attribute implies that the software lifecycle model should include a
requirements phase that captures implementation-independent stakeholder requirements, and that
this phase should be distinct from design lifecycle phases, which specify solution-specific
requirements. The achievement of this attribute is therefore affected by software lifecycle model
selection. It should be noted that performing a distinct requirements specification phase is good
practice (e.g. defined in ISO/IEC FDIS 12207:2007 [26]) and that separate requirements and design
phases are included in all the software lifecycle models identified in the literature survey. A
software lifecycle model could conceivably merge these activities however.
Attribute: AT3 Traceable
IEC61508 Part 3 [8] requires the software safety requirements to be expressed and structured to be
traceable back to the E/E/PE safety-related system requirements specification to the extent
required by the safety integrity level (7.2.2.6).

DO-178B [34] states that all high-level requirements, with the exception of derived requirements,
should be traceable to one or more system requirements (5.1.2). Additionally, each system
requirement allocated to software should be traceable to one or more high-level requirements
(5.1.2).

CAP 670 [14] requires that all “safety requirements can be traced to the same level of design at which
their satisfaction is demonstrated” (3.2). The CAP 670 [14] guidance states that each software safety
requirement should be traced to a system safety requirement (3.2).

Def Stan 00-56 Issue 4 Part 1 [29] requires safety requirements to be traceable to their origin (11.2).
It also states that traceability should be maintained between hazards and potential accidents and
their associated safety requirements (10.7.4). The Def Stan 00-56 Issue 4 Part 2 [30] guidance states
that showing traceability between derived safety requirements and top-level safety requirements is
essential for the safety case (11.2.2).
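
Taken together, these obligations amount to a bidirectional completeness check over the trace data. The Python sketch below is one possible rendering under assumed data structures; the trace_gaps function, the identifiers and the exemption of derived requirements from upward tracing (after DO-178B [34] 5.1.2) are illustrative only.

def trace_gaps(allocated_sys_reqs: set[str],
               hlr_traces: dict[str, set[str]],  # HLR id -> traced system req ids
               derived: set[str]) -> list[str]:  # HLR ids marked as derived
    """Report breaks in the bidirectional system/high-level requirement trace."""
    gaps = [f"{hlr} traces to no system requirement"
            for hlr, traces in hlr_traces.items()
            if hlr not in derived and not traces]
    traced = set().union(*hlr_traces.values()) if hlr_traces else set()
    gaps += [f"system requirement {sr} has no high-level requirement"
             for sr in sorted(allocated_sys_reqs - traced)]
    return gaps

print(trace_gaps(allocated_sys_reqs={"SYS-1", "SYS-2"},
                 hlr_traces={"HLR-1": {"SYS-1"}, "HLR-2": set()},
                 derived={"HLR-2"}))
# ['system requirement SYS-2 has no high-level requirement']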

As discussed in AT1, the achievement of traceability places constraints on the sequencing of
phases within the software lifecycle. Safety requirements that derive from other safety
requirements or other information, such as the results of safety analyses, must be specified after
the requirements or information from which they derive have been specified. As stated in AT1, this
means that a software safety requirement must be specified in the software lifecycle’s
requirements phase before software safety requirements that derive from it are specified in the
software lifecycle’s design phase. It should be noted that this does not necessarily mean that the
entire requirements phase needs to be completed before the design phase can commence (i.e. as
in the Waterfall model). These phases could be performed iteratively. It is important however that
higher-level software safety requirements are specified before the requirements that derive from
them.

The requirement for software safety requirements to be traceable to system safety requirements
similarly implies that software safety requirements can only be specified in the software lifecycle’s
requirements phase once the system safety requirements from which they derive have been
allocated to software in the system lifecycle’s design phase. This is discussed further in AT5.

Traceability between software safety requirements and the hazards they mitigate implies that
these hazards must be identified prior to these safety requirements being specified. The
software’s safety requirements as identified in the software lifecycle’s requirements phase may be
subject to a safety analysis to identify hazards the software needs to mitigate. In this situation, an
initial set of software safety requirements would need to be analysed. This may lead to the
identification of new hazards, or changes to existing hazards, that require the software safety
requirements to be changed to mitigate them. The changed software safety requirements would
then need to be reanalysed to check that the safety analysis is still valid. This cycle would
continue until the software safety requirements specification mitigates all identified hazards and the
hazard analysis has analysed the specified software safety requirements. To facilitate this, the
software safety requirements safety analysis and the software lifecycle’s requirements phase
would need to occur iteratively in parallel.
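
This cycle is, in effect, an iteration to a fixed point. The Python sketch below shows its shape under strong simplifying assumptions: analyse and mitigate are hypothetical stand-ins for the safety analysis and requirements-specification activities, and a real project would not judge convergence on set membership alone.

from typing import Callable

def iterate_to_fixpoint(spec: set[str],
                        analyse: Callable[[set[str]], set[str]],  # spec -> hazards
                        mitigate: Callable[[str], str],           # hazard -> requirement
                        max_cycles: int = 10) -> set[str]:
    """Alternate safety analysis and requirements change until stable."""
    for _ in range(max_cycles):
        hazards = analyse(spec)
        new_reqs = {mitigate(h) for h in hazards} - spec
        if not new_reqs:          # analysis valid for current spec: fixed point
            return spec
        spec = spec | new_reqs    # changed spec must be reanalysed
    raise RuntimeError("requirements/analysis cycle did not converge")

demo = iterate_to_fixpoint({"REQ-1"},
                           analyse=lambda s: {"HAZ-1"},
                           mitigate=lambda h: f"REQ mitigating {h}")
print(demo)  # {'REQ-1', 'REQ mitigating HAZ-1'}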

The achievement of this attribute is also affected by how requirements are uniquely identified
within the requirements specification and the means by which traceability is maintained (e.g. a
requirements management tool). These are details of the processes, methods and tools used in
the software lifecycle’s requirements phase however. Software lifecycle models do not describe or
place constraints on the processes, methods and tools used in individual lifecycle phases, so this
consideration is not affected by the selected software lifecycle model.

The achievement of this attribute is therefore affected by software lifecycle model selection. The
selected software lifecycle model must allow:

 Software safety requirements to be specified in the software lifecycle’s requirements phase,
prior to software safety requirements being derived from them in the software lifecycle’s design
phase (this point has been discussed in AT1);
 The software lifecycle’s requirements phase to integrate with the system lifecycle’s design
phase to allow system safety requirements to be allocated to the software in the system
lifecycle, prior to software safety requirements being derived from them in the software lifecycle
(this point is also discussed in AT5);
 Safety analyses to be performed in parallel with the software lifecycle’s requirements phase, so
that software safety requirements can be specified to mitigate hazards associated with the
software’s requirements (this point is also discussed in AT6).

Attribute: AT4 Competent Staff
IEC61508 Part 3 [8] (referencing IEC61508 Part 1 [6]) requires those involved in the Software Safety
Requirements Specification phase to be competent to carry out the activities for which they are
accountable (6.2.1).

The CAP 670 [14] guidance states that analysis techniques should be applied by adequately qualified
and experienced staff (6.2). CAP 670 [14] notes that staff are appropriately qualified and experienced
“if they understand the design notations, and the analysis approach, are experienced in using them
and understand the required software safety requirements attributes and the system context” (6.2).

The achievement of this attribute is affected by the project staff’s competence in the processes,
methods and tools used in the software lifecycle’s requirements phase. Software lifecycle models
do not place constraints on the processes, methods and tools used in individual lifecycle phases
however. The achievement of this attribute, with respect to this competence, is therefore not
affected by software lifecycle model selection.

The achievement of this attribute in the context of IEC61508 Part 3 [8] is also affected by the project
staff’s competence in the use of the selected software lifecycle model. More sophisticated
software lifecycle models, such as the Spiral model, are likely to require more knowledge and
experience to use effectively. The project staff should therefore be competent in the use of the
selected software lifecycle model. In this respect, therefore, the achievement of this attribute is
affected by software lifecycle model selection.
Attribute: AT5 Derived from System-level Safety Requirements
IEC61508 Part 3 [8] states that the software safety requirements shall be derived from the E/E/PES
safety-related system’s safety requirements (7.2.2.2). The E/E/PES safety-related system’s safety
requirements are documented in the E/E/PES safety requirements specification. IEC61508 Part 3 [8]
shows this specification as an input to software safety requirements specification (Figures 4 and 5;
Table 1). It states that the E/E/PES safety requirements specification should be reviewed by the
software developer to ensure that it is adequately specified (7.2.2.4). IEC61508 Part 3 [8] states that
the system architecture (the E/E/PES architecture) is also an input to the software safety
requirements specification (Figure 4; it is not, however, described as an input in Figure 5 and
Table 1).

DO-178B [34] states that the high-level requirements should be derived through analysis of the
system requirements, system architecture and hardware interface (5.0 and 5.1). These inputs are
developed in the system lifecycle (5.1). The safety-related requirements form part of the system
requirements. Safety-related requirements are specified within the system lifecycle’s system
safety assessment process from an analysis of the system design (2.1.1). These safety-related
requirements specify the desired immunity from, and system responses to, system failure
conditions (2.1.1). DO-178B also states that the system functional and interface requirements
allocated to the software should be analysed for ambiguities, inconsistencies and undefined
conditions (5.1.2). Inadequate or incorrect inputs to the software requirements process should be
reported to the source process for clarification or correction (5.2.1). Software high-level
requirements should be specified to satisfy each system requirement allocated to the software
(5.2.1). This should include specifying high-level requirements to satisfy system requirements
allocated to software that are concerned with precluding system hazards (5.2.1, 11.9) and also
system safety-related requirements (11.9). Additionally, partitioning requirements may be allocated
to software (11.9). The high-level requirements should specify how the partitioned software
components interact and the software level of each component (11.9).

CAP 670 [14] states that it assumes that software safety requirements have been derived from a
system-level safety analysis, which involves the allocation of safety requirements to software from
the system level (1.3). CAP 670 [14] provides no further details regarding this, but instead refers
readers to IEC 61508 Part 1 [6], ARP4754 “Certification Considerations for Highly-Integrated or
Complex Aircraft Systems”, and Def Stan 00-56 [29] (1.3). The CAP 670 [14] guidance states that the
software safety requirements should be a valid sub-set of the system-level safety requirements
(6.1).

The Def Stan 00-56 Issue 4 Part 2 [30] guidance states that the safety requirements for complex
electronic elements (which include software) should be derived from the overall system
requirements (15.6).

To achieve this attribute, as discussed in AT3, the software lifecycle’s requirements phase needs