Increasing Security for DoD Systems Through Specific Security Applications

Sarah Pramanik

Advisor: Edward Chow

A Ph.D. dissertation proposal submitted for Security Engineering,
University of Colorado at Colorado Springs

June 17, 2011

Abstract

There are three facets of security that are crucial building blocks to ensuring that security is easily and completely integrated into a system. The first facet is the Systems Engineering Lifecycle. It is crucial that security is included in all phases of the lifecycle. The second facet is System Security Architecting, which should be integral to the Systems Engineering Lifecycle, and not a peripheral "specialty engineering" concept. The third facet is the programmatic effects involved with applying security measures, including cost, schedule and risk. These three pieces are all critical to the development of Department of Defense (DoD) systems and applicable to many commercial ventures as well.

Information Assurance is the art of protecting the information in a system, and must be built into overall system security. Systems engineering should incorporate these principles into the system in the form of a Security Architecture and throughout the development of the system.

There are serious programmatic effects if security is applied incorrectly or too late. These range from extra cost and schedule slips to higher risks. It can also lead to additional vulnerabilities in the system. The proposed research looks at how to efficiently add security to a system to prevent some of the negative programmatic effects through changes to systems engineering and by creating a system security architecture methodology.


Table of Contents

I. Introduction
II. Information Assurance
III. System Engineering Lifecycle Overview
    Requirements Analysis
    STIGs
    Functional Definition
    Physical Definition
    Design Validation
    Stage 1: Concept Development
    Stage 2: Engineering Development
    Stage 3: Post Development
IV. System Security Architecting Methodology Overview
    Architecture Program
    Software Security
    Other Considerations of the Architecture
V. Program Effects
VI. Summary of Path to Dissertation Defense
    Proposed Research Approach
    Plan and potential risks
    Evaluation Plan
    Success Criteria
    Potential contributions
References







Table of Figures

Figure 1: Layers of Information Assurance [98]
Figure 2: SABSA Matrix
Figure 3: System Decomposition [5]
Figure 4: ISSAP Architecture Method [4]
Figure 5: Research Timeline






I. Introduction

There is no perfectly secure system. Threats are evolving and vulnerabilities are continually being exposed. The goal of system security is to reduce the risk to an acceptable level. Security is changing with technology. For every new application or system, new threats, vulnerabilities and risks arise. When there is a threat to a system, it can affect the system if a vulnerability exists. Residual risk is the leftover threat to the system when vulnerabilities can't be mitigated. When there are vulnerabilities in a system, they must be mitigated to the extent possible. The application of security to a system can be seen through the layered views of information assurance.

In the commercial world there are certain information assurance laws and standards, such as Sarbanes-Oxley, that govern the security requirements that must be applied to a system [8, 80]. In the Department of Defense, one of the documents for security is the DoD Instruction 8500.2 [40]. There are multiple other security documents that are used, but the framework from the DoDI 8500.2 is fundamental. There is currently a move [20, 13] from the DoDI 8500.2 to the NIST SP 800-53 [71]. The supplemental guidance provided in [71] is very helpful to the system security engineer. The documentation, rules and requirements that must be followed are as ever changing as technology. In some respects the system security engineer must think like a lawyer in understanding the requirements, laws, regulations and their application to the system. In DoD systems, the contractor providing the system is on contract to meet a certain set of requirements. Proper application of security is not as simple as applying a few security products, but involves a paper trail of documentation and analysis.

The first part of this paper will present what Information Assurance is and its relationship to the overall security discipline. With an understanding of what needs to be accomplished, the research will focus on the systems engineering lifecycle methodology and where security can be better integrated so that there is a reduction in the overall lifecycle cost of security. This goal also addresses the problems that security engineers face in the different stages of the systems engineering lifecycle. The second part of this research will focus on the area of security architecture. Although there are common frameworks, there is not a solidified methodology to help ensure that the security architect has not left anything out. This leads to the need for a tool to help address vulnerability and risk assessments for large, complex systems. The goal of the proposed research will be to provide a tool to help in the creation and maintenance of a security architecture and to allow for a vulnerability and risk assessment to be done against the architecture. When security is correctly implemented, a security architecture is completed as part of the work done for the systems engineering lifecycle. The last part of the proposed research will show how the use of this tool, in conjunction with changes to the overall systems engineering lifecycle, will reduce risk, cost and schedule slips due to unforeseen security issues. This part of the research will also outline the areas of concern and possible risk items that a security engineer needs to address as early in the lifecycle of a program as possible. This research will draw on lessons learned from past experience in working on large DoD programs, as well as current issues that are surfacing in the DoD realm. The conclusion of this research will be a primer for those wanting to embark on becoming system security engineers or security architects in the DoD world, and new ways to look at security for those who have been in the field for a while. Many of the concepts will apply to the commercial world as well.


II. Information Assurance

Information Assurance (IA) is the practice of managing risks related to information. IA primarily deals with the Confidentiality, Integrity, and Availability of information in order to ensure proper business or mission execution. In the commercial world, business execution is critical. In the DoD world, mission execution is paramount. Application of IA in a manner consistent with defense in depth is part of system security.

Confidentiality limits the access of information to authorized personnel. Integrity ensures data is not changed without proper authorization. Availability guarantees the accessibility of information to approved personnel [34]. IA is also concerned with authentication and non-repudiation. Authentication requires that correct identification occurs before an entity is given access to a system. Non-repudiation guarantees that the identification of the sender and recipient is known and cannot be faked, nor can the sender or recipient deny involvement.

Information assurance can be defined as measures that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation. These measures include providing for restoration of information systems by incorporating protection, detection, and reaction capabilities [76].

Including the elements of IA in a system throughout the life cycle is the primary purpose of the system security engineer. The protection of the innermost layers, such as software, starts with an understanding that the system cannot be physically manipulated.



Figure 1: Layers of Information Assurance [98]



Physical security provides this assurance. Physical security as required by the 8500.2 [19] and the newer SP 800-53 [71] should include an automatic fire suppression system, automated fire alarms connected to the dispatching emergency response center, voltage controls, physical barriers, emergency lighting, humidity controls, a master power switch, temperature controls (for both equipment and humans), a plan for uninterrupted power supply, disaster recovery procedures, as well as the requirement that personnel be inspected (through badge inspection) and present a second form of identification before being allowed access.

Next, the boundary of the system from a cyber security purview must be assessed. Boundary defense is a crucial aspect of the security posture. This is the first line of protection for the cyber aspects of a system. Without identifying and protecting the boundary, unnecessary chinks in the armor will appear.

Once boundary defenses are established, IA must dive into the system to protect data in transit. Data that flows within the system and to and from the enclave must be available, and its confidentiality and integrity must be protected.

Data in transit is one of the largest concerns in Information Assurance. It involves confidentiality, availability, integrity, and non-repudiation. Ensuring that data is safely moved from point A to point B and that data integrity remains is one of the priorities.

A potential problem for data in transit is a breach of confidentiality. If the enemy can hijack the information, it must be unusable (although it is best if the hijacking itself can be prevented). Another area of concern is availability. There is a need to prevent bottlenecks or single points of failure within the system.
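To make the confidentiality and integrity concerns concrete, the following is a minimal sketch of why properly protected data in transit is unusable to an interceptor and why tampering is detectable. It uses the Fernet recipe from the third-party Python cryptography package purely for illustration; the key handling and payload are hypothetical, and a fielded DoD system would rely on approved cryptography rather than this example.

# Illustrative only: intercepted ciphertext is unusable without the key, and
# modification in transit is detected on receipt. Not approved DoD cryptography.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                    # shared secret held by sender and receiver
channel = Fernet(key)

plaintext = b"telemetry frame 042: nominal"    # hypothetical payload
ciphertext = channel.encrypt(plaintext)        # provides confidentiality and integrity

# An interceptor without the key sees only opaque bytes.
print(ciphertext[:16], b"...")

# Tampering in transit is rejected by the integrity check.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 0x01])
try:
    channel.decrypt(tampered)
except InvalidToken:
    print("integrity check failed: message rejected")

# The legitimate receiver recovers the data.
assert channel.decrypt(ciphertext) == plaintext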


Each of the aspects of IA must be applied to a system in order to provide a correct security posture. The question then remains: what is the best way to apply these requirements to a system in such a way as to explain them to those that must implement them? It is one thing for a system security engineer to understand the requirements. It is something entirely different to make these requirements accessible to the developers of the system.

One of the goals of this research is to find an effective manner of applying these principles to a system within the system engineering lifecycle, efficiently and as cost effectively as possible.


III. System Engineering Lifecycle Overview

In today's environment of security, it is necessary for the system security engineer to understand all aspects of systems engineering. In the realm of systems engineering, security engineering is considered a specialty that can be added into the design phase, if concurrent engineering is being employed, but it is too often an afterthought: something that is expensive and, to most, seemingly unnecessary. Although the perception about its necessity is changing, the engagement of security engineering hasn't moved far enough along.

Many security engineers understand the security issues of a system, but do not recognize how they fit into the systems engineering development lifecycle. In most cases the systems security engineer shouldn't be considered a specialist, but a core member of the systems engineering team. Every interface in a system, whether a functional flow or a physical flow, should be examined by the system security engineer for vulnerabilities.

Information assurance or Information System Security Engineering should not be cornered into one piece of the system. Systems Engineering typically considers security to be just a specialization that should be done in parallel with the systems lifecycle. This doesn't truly cover the magnitude of effects that security has on a system, especially in the DoD realm. The mindset of security being just an add-on to the system as part of "specialization" can increase risk to the overall project. The idea that security is "specialty engineering" is brought to light in [5] in the author's explanation of concurrent engineering. This view of security needs to be re-examined, as the vulnerabilities and risks to systems increase. The National Institute of Standards and Technology (NIST) has developed some insights into this, in [85], but this does not cover overall system implementation and development.



The author of [86] would agree that security engineers should be system engineers, or at least have an understanding of system engineering. The other issue exposed in [87] and [96] is that "there is no common framework in any current evaluation scheme or criteria that directly supports measuring the combined security effectiveness of products." This needs to be rectified. There are multiple frameworks that are useful for Systems Engineering and some have put together frameworks for security engineering, but these are not enough to fully integrate security into a large DoD program [37, 75, 91, 92, 97, 99].

One of the security architecture frameworks is the Sherwood Applied Business Security Architecture (SABSA) [91], which provides a matrix of questions that the security engineer should be able to answer about the system, as seen in Figure 2. This is a useful tool, but it can still be difficult for the security engineer to ensure that every aspect has been covered at the granular level.



Figure 2: SABSA Matrix



This portion of the research looks to address this issue: how can security be effectively combined with the systems engineering lifecycle? The outcome of this research will be a proposed method for including security in each aspect of the Systems Engineering Lifecycle Methodology, such that it looks at the overall system implementation.

There is a great difference between functional models of a system and physical models of a system. The system security engineer should be involved in each step of the systems engineering lifecycle model. It is necessary that the system security engineer have a full understanding of a system's functional and physical architecture. Without being fully integrated into the systems engineering team, as well as into the integrated product teams (IPTs), it will be virtually impossible for information assurance to be baked in.



"Baked in IA", a term attributed to Dan Wolf, is easily said, but difficult to accomplish. There is a critical balance in employing security mitigations. Enough of the system must be understood to see where the vulnerabilities are and what type of mitigations should be employed, yet the design can't be so far along that it is set in stone and there is no way to add the correct mitigations. This balance seems to be the most difficult part of a system security engineer's job. System security engineers tend to be zealous to protect the system, without understanding the underlying cost and schedule issues associated with certain types of mitigations. An example of this is the use of a cross domain solution.

A cross domain solution is employed when there is a need to bring unclassified data out of a classified environment. It is only one of several mitigations that can be used to separate data, but there is a cost associated with it. It can take a couple of years to certify, and the type of data allowed through is limited. Depending on the type of data, the rule sets employed to allow for the transaction can greatly increase the cost of such a product. The systems engineering perspective would be to look at the cost and schedule impacts of using such a device. The security engineer's perspective typically looks at how well the device provides protection. It is necessary that the two mindsets be united in order to find the best approach.

Following the traditional systems engineering model, the security engineers would provide the technical data to the systems engineers, who would perform a trade study on the types of devices or products that can be used to accomplish a certain task. It is then typically the systems engineer that makes the final decision as to which product should be used, based on cost and schedule as well as on technical merit. There are three concerns that the systems engineer will look at in these trades: advancing technology, specialization, and competition.


Advancing technology can increase capabilities but adds developmental risk. It leads to more innovation and comes in the form of new materials, new devices or new processes. The use of new technology can allow a company to match a competitor's performance. It can also aid a company in keeping their designs from using obsolescent pieces. Advanced technology is one of the reasons to upgrade or implement a new system, but it is one of the biggest risks a program takes on. If advanced technology isn't implemented, then a company risks being behind the competition, or may not be able to meet all the needed capabilities. This requires risk management, one of the primary responsibilities of the system engineer. The use of advanced technology should be carefully considered by the security engineer. New products may have a plethora of unknown vulnerabilities, new materials may inhibit emanations security, and new processes may not include necessary steps to ensure security.

Specialization can be broken down into different types according to the systems engineering lifecycle methodology [5]. The types of specialization are: engineering specialization (safety and security), hardware and software, tools, manufacturing processes, and systems integration. The systems integration specialization has given rise to two relatively new fields: interfaces (physical fit at component boundaries) and interactions (functional compatibility of components). The system engineer must partition the system and manage its interfaces and interactions. Although security is thought of as an engineering specialization, it cannot be put into such a neat little box, as it affects every aspect of system development, or should if it is implemented correctly.

The third concern is competition. Competition is broken into two pieces: system-level trade-off analysis to choose the best solution, and external competition. During development the system engineer must determine which solution is best. External competition can be anything from trying to get the product out to market faster, to continuing to try and win bids based on cost and the ability to meet schedule. External competition between countries is also a consideration for the DoD and for some commercial projects.


Trade-off analysis is the systems engineer's tool to decide between competing factors in solutions. Without a proper trade-off analysis, the program risks spending extra money, choosing the wrong design, and not meeting schedule. In some cases, the security aspects are lost in the trade. Although cost and schedule are critical, it is also important that the security engineer be involved in the trade to ensure that the protection level is adequate.
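As a simple illustration of keeping protection in the trade, the sketch below scores candidate mitigations against weighted Measures of Effectiveness that include a protection criterion alongside cost, schedule and maturity. The candidates, weights and scores are hypothetical placeholders for a program's actual trade study data; the point is only that security is weighted explicitly rather than lost in the trade.

# Hypothetical weighted trade study: protection is scored alongside cost and
# schedule so that the security posture is visible in the final ranking.
# Scores run from 1 (worst) to 5 (best); weights sum to 1.0.
WEIGHTS = {"protection": 0.40, "cost": 0.25, "schedule": 0.20, "maturity": 0.15}

candidates = {
    "cross domain solution":   {"protection": 5, "cost": 1, "schedule": 1, "maturity": 4},
    "one-way transfer device": {"protection": 4, "cost": 3, "schedule": 4, "maturity": 4},
    "manual review and move":  {"protection": 3, "cost": 4, "schedule": 5, "maturity": 5},
}

def weighted_score(scores):
    """Combine the MOE scores for one candidate using the agreed weights."""
    return sum(WEIGHTS[moe] * value for moe, value in scores.items())

ranked = sorted(candidates.items(), key=lambda item: weighted_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:24s} {weighted_score(scores):.2f}")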


In the Department of Defense, approved products such as those tested by the Common Criteria/NIAP or NIST labs are required for use in National Security Systems. Although the products are evaluated against a certain protection profile and provide great security, they may not be a good fit for the system at hand. This becomes a problem for the system security engineer. The security engineers will come to the IPTs and tell them that if a product is being used to protect the system, it must meet a certain Evaluation Assurance Level (EAL). If the product is being used to interface between segments, then not only must it meet the technical needs of the IPT, but the systems engineers must ensure that it will work across the segments. In some cases a specific device from an approved list will work, but what happens when the list doesn't contain a product that can be used? The security engineer must find the solution. This is why it is critical that the security engineer be a systems engineer. It is the systems engineer's responsibility to ensure that the interfaces between segments are correct and that the flow of data will meet the operational need. It is the security engineer's responsibility to protect the data in each of the segments, as well as in the flows from segment to segment, and especially those flows that breach the boundary of the system.

Security should not just be an add-on to a functional flow, or something layered on top of the design work that an IPT is doing, but should be integrated into each decision. The systems engineer should not be driving design, but the security engineer will.

Systems engineering does not delve into the technical depths of individual components, but rather creates a high-level roadmap, providing a structure for the overall development of the system. This is where security engineering differs. Security engineering must be involved not only in the high-level design, but all the way down to the component and, in some cases, the part level of the system. Following the systems engineering methodology will allow the security engineer to interact with the system at the high level, but there are some refinements that need to be made for this methodology to work for security engineers. According to [5] there are four main activities that the system engineer performs: requirements analysis, functional definition, physical definition, and design validation. These are performed in three stages consisting of a total of nine phases. The security engineer must be involved in each one of these activities.

Requirements Analysis


According to [5], in the Systems Engineering Methodology the goal of the requirements analysis phase is to assemble and organize all input conditions, which include requirements, plans, milestones, and models from the previous phase; identify the "whys" of requirements, explaining operational needs, constraints, environment, and other higher-level objectives; clarify the system requirements of what the system must do, how well, and under what constraints; and finally correct inadequacies and quantify the requirements wherever possible.

As mentioned previously, there are specific sets of security requirements that are used in the Department of Defense. For the system security engineer working in the DoD world, it is these requirements that must be broken down into unambiguous, testable requirements. In the realm of DoD, there are several documents that security engineers draw upon for security requirements. Aside from DoD instructions and directives [19, 24], the Defense Information Systems Agency (DISA) provides Security Technical Implementation Guides (STIGs); some of these STIGs are found in [21, 22, 23, 24]. In the commercial world there are also requirement sets, stemming from Sarbanes-Oxley and the Health Insurance Portability and Accountability Act (HIPAA), which provide a high level set of security requirements [11].

The difficulty in the past with security requirements is that on the surface they can be very vague and so can lead to misapplied security. There are also different philosophies on how to apply the requirements, which means that two different security engineers on the same project may have very different ideas about how the requirements need to break down. These are some of the challenges that the security engineer must overcome.


There is a change coming to the DoD, in that there is a move to the NIST SP 800-53, which has more information than the 8500.2 and provides some clarification as to where the requirements should apply. In a way it is reminiscent of the rainbow series [100]. The rainbow series was created by the NSA to provide information on how to create a trusted and secure computing base. The rainbow series is no longer actively used for DoD programs, but it does provide good background information and a history of how the security requirements have developed. These also help the security engineer in understanding how to correctly apply the 8500.2 requirement set.

Both the 8500.2 and the SP 800-53 have various levels of requirements, depending on the type of information or system that is being protected. In the 8500.2 there are three mission assurance categories (MAC): I, II and III, and each one of these is identified with classified, sensitive or public as its confidentiality level. MAC I requires high availability and high integrity. MAC II requires high integrity and medium availability. MAC III requires basic integrity and availability. These associations are based on the CIA triad: Confidentiality, Integrity and Availability, as explained previously in the information assurance section.

Depending on the type of information that the system will be processing, the government will determine what category the system will fall into. Once this is assessed, it will be added as an overarching requirement as part of the program contract. It is then up to the system security engineer to apply these requirements to the system. The requirements should be viewed from a system level, but as the requirements are being decomposed, they will be flowed to the various subsystems.
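A hedged sketch of how the MAC and confidentiality-level pairing stated in the contract could be captured so that it can be turned into a first-cut, system-level requirement emphasis before decomposition begins. The function and data structure are illustrative assumptions; the integrity and availability values simply restate the MAC descriptions above.

# Illustrative capture of the DoDI 8500.2 mission assurance categories described
# above; the contracted MAC/confidentiality pairing drives the system-level emphasis.
MAC_BASELINE = {
    "MAC I":   {"integrity": "high",  "availability": "high"},
    "MAC II":  {"integrity": "high",  "availability": "medium"},
    "MAC III": {"integrity": "basic", "availability": "basic"},
}
CONFIDENTIALITY_LEVELS = ("classified", "sensitive", "public")

def baseline_for(mac, confidentiality):
    """Return the overarching security emphasis for a contracted MAC/CL pairing."""
    if mac not in MAC_BASELINE or confidentiality not in CONFIDENTIALITY_LEVELS:
        raise ValueError("unknown MAC or confidentiality level")
    profile = dict(MAC_BASELINE[mac])
    profile["confidentiality"] = confidentiality
    return profile

# Example: a MAC II / sensitive system, as might be stated in a program contract.
print(baseline_for("MAC II", "sensitive"))
# {'integrity': 'high', 'availability': 'medium', 'confidentiality': 'sensitive'}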


Figure 3: System Decomposition [5]

The systems engineers decompose the requirements. As shown in Figure 3, the systems engineering methodology breaks the system into pieces. As part of the security requirements decomposition, the system security engineer has to determine how these requirements will apply at the system and subsystem level. On some programs the systems engineer breaks the requirements down to the component level, but on other programs the IPTs take over the requirements decomposition and continue the decomposition down to the part level.



This is where security engineering takes a different turn from systems engineering. The security requirements cannot stop at the subsystem level. It is critical that the requirements get correctly flowed, or this can cause rework later in the development lifecycle. It can also be very costly if certain mitigations, such as type-1 cryptography, have to be added in at a later date because the requirements were not flowed correctly. This means that either the system security engineer must work with each IPT to develop the lower level requirements, or each IPT must have their own security expert to help drive the lower level requirements.
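To make the flow-down risk concrete, here is a minimal sketch of tracing system-level security requirements down to components so that requirements that never reach a component can be flagged early, before they become late and costly rework. The requirement text, identifiers and component names are hypothetical.

# Hypothetical flow-down check: every security requirement must be allocated all
# the way down to at least one component; unallocated requirements are flagged.
system_requirements = {
    "SR-001": "All data flows crossing the system boundary shall be encrypted.",
    "SR-002": "User actions shall be attributable to an authenticated identity.",
}

allocations = {                # requirement -> components it was flowed down to
    "SR-001": ["comms.crypto_module", "ground.gateway"],
    "SR-002": [],              # decomposition stopped at the subsystem level
}

def unallocated(requirements, allocs):
    """Return requirement IDs that never reached the component level."""
    return [rid for rid in requirements if not allocs.get(rid)]

for rid in unallocated(system_requirements, allocations):
    print(f"{rid} has no component allocation: {system_requirements[rid]}")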



One of the outcomes of this part of the research will be a tutorial on how to apply the 8500.2 requirements. This will explain what to look for in the system to determine whether or not a requirement will apply. This is part of the research on the security architecture methodology. One other piece of the requirements breakdown in the DoD environment is the use of STIGs.



STIGs


STIGs cannot easily be broken into individual requirements, although there is a requirement to follow them. This is because in some cases the entirety of the STIG will not apply, depending on how the technology is being used. This leads to difficulties for both the IPT and the security engineer. There are tools such as Gold Disk (which is being phased out) and eEye Retina that provide automated ways of seeing if a system is configured to meet the STIG. The application of a STIG is specific to the technology being used. The STIG list is continually updated, which can cause difficulties for a program that needs a baseline in order to test. This can lead to programmatic risk if not handled carefully.
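Gold Disk and Retina are the actual tools in this space; the sketch below only illustrates the general idea of checking a captured host configuration against a frozen baseline of STIG-style rules, with a per-rule applicability test so that rules which do not apply to the technology in use are documented as not applicable rather than failed. The rule identifiers, settings and values are hypothetical and are not drawn from a real STIG.

# Illustrative STIG-style configuration check (not Gold Disk or eEye Retina).
# Each rule carries an "applies" predicate because, as noted above, parts of a
# STIG may not apply depending on how the technology is being used.
RULES = [
    {"id": "EX-0001", "setting": "password_min_length", "expected": 15,
     "applies": lambda cfg: True},
    {"id": "EX-0002", "setting": "ftp_service_enabled", "expected": False,
     "applies": lambda cfg: cfg.get("has_ftp_role", False)},
]

def check(config):
    """Group rule IDs into pass / fail / not_applicable for one host configuration."""
    results = {"pass": [], "fail": [], "not_applicable": []}
    for rule in RULES:
        if not rule["applies"](config):
            results["not_applicable"].append(rule["id"])
        elif config.get(rule["setting"]) == rule["expected"]:
            results["pass"].append(rule["id"])
        else:
            results["fail"].append(rule["id"])
    return results

host_config = {"password_min_length": 8, "has_ftp_role": False}
print(check(host_config))
# {'pass': [], 'fail': ['EX-0001'], 'not_applicable': ['EX-0002']}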


Typically the Designated Approval Authority (DAA) will not accept the risk level of a system that doesn't have the newest STIGs applied. This means that when the system certification and accreditation (C&A) documentation is handed off to the DAA, there is a possibility that the certification may not be approved. This leads to the programmatic risk of not receiving an approval to operate, which means it is critical for the security engineer to ensure that the STIGs are appropriately applied.

During the proposal phase, if the contract is not written appropriately, this can make development unstable when there are high schedule pressures. If a contract is written such that the STIG baseline is frozen until the end of development, it will make development easier, but testing more difficult. Freezing the baseline will allow developers to test their individual pieces without continual changes. However, once the system is integrated and the new STIGs have to be applied, regression testing will have to occur to ensure that any changes do not affect the system adversely. STIG sizes vary, from 10 pages to 50 and more. DISA is in the middle of a revamp of STIGs, which is also causing some difficulty. In short-term contracts this may not be a problem, but in a contract that spans a greater amount of time this must be a serious consideration from the beginning of a program.

After requirements analysis, the next phase is functional definition. This depends on the lifecycle model, however. In the academic world, requirements definition should be completed before functional definition, but in reality this isn't the case. With increasing schedule pressures and clamp-downs on spending, many programs are shifting to a very tight spiral model, where the spirals seem to overlap. This is one area that this research will focus on: how to ensure that the correct security requirements are applied and still make sense even after or during functional definition.


Functional Definition



There are three factors that the system engineer must review during functional definition, according to [5]: significance, singularity and commonality. Significance is defined as each functional element performing a distinct and significant function. Singularity provides that each functional element falls mainly within the technical scope of a single technical discipline. The last factor is commonality, defined such that the function performed by each element can be found in a wide variety of system types.

In the security purview these elements do play a part in how the system security engineer must look at the system. The functional definition is not the same as the physical definition, and so the security engineer must be careful with the amount of weight given to this model of the system.

The Department of Defense Architecture Framework (DoDAF) model is built upon the Zachman framework and is used to create the functional flows for DoD programs [4, 97, 99]. This is one of the areas of research for the security architecture, which is covered in the second part of this paper. The DoDAF is broken into several primary views of the system: operational views, system views, and technical views. The goal of the DoDAF is to provide a framework to bring together all of the information necessary for the architecture of a program.



The operational views are very high level views. They pictorially depict how different pieces of the system should communicate. These flows do not show any component level information. They show the flows between various organizations, specifically activities, rules, states, events and interoperability flows.

The system views start to break down the flows into system level components. The flows show services, interfaces and data exchanges. This is also the view that is supposed to start to break the system down into a physical schema.

These views do not always show actual protocols. This means that if one segment is planning on using TCP/IP for a connection and another is planning on UDP, this will not necessarily be evident in the views. It truly depends on how an organization chooses to use the framework.

This can lead to confusion between segments. Many times these details are left up to the individual designers of each segment. If there is no communication between the designers, then the segments won't communicate. Some of the concerns on this particular topic will be discussed further in the lessons learned section.


The technical views explain the technology standards that should be used in the system, and the emerging standards. These can be difficult to pin down. A system cannot be on contract for an incomplete or unsigned standard. In a system where the development will take several years to complete, it is unknown when the project begins which draft documents will become standards. Typically, this means that if a program lasts for any substantial amount of time, then the system will not be created using the newest standards, just those that the developers are on contract for.

The all views provide summary information and a glossary. These are helpful in defining the language the developers will use to communicate. This can be one of the most useful tools in the framework for developers. The system can only be created if all the designers, developers and architects can communicate clearly. The use of a system-wide dictionary can help facilitate this communication.

In each of the views, it is possible to add security attributes. However, this may not adequately express the security considerations that must be addressed. The security issues can easily be lost in the overall enterprise architecture. This is one of the reasons that, for some systems, it becomes important to break the system security architecture out of the enterprise architecture and bring it into its own architecture. This also must be done carefully, to ensure that the security architecture aligns with the system architecture. The following section considers this concept.




Physical Definition


The physical definition phase in the systems engineering lifecycle is used to create various alternative system components consisting of differing design approaches to implement the functions, focusing on the simplest practical interactions and interfaces among subdivisions. During this phase the systems engineer selects a preferred approach through trade-offs using predefined and prioritized criteria known as Measures of Effectiveness (MOEs). The systems engineer should be looking for the best "balance" among performance, risk, cost, and schedule. The physical definition is somewhat iterative and will continue until the design is at the necessary level of detail.

The physical definition is the most crucial phase for the system security engineer to be involved in. Simple changes in the physical design can either greatly enhance or destroy the security of a system. One of the most common sayings in the DoD security world is "Defense in Depth." This refers to the layering of defenses in a system.


Design Validation



Design validation is the system engineer's consideration of the system before implementation begins. This is the last chance to make major changes to the system. It includes design models of the system environment reflecting significant aspects of the requirements and constraints: logical, mathematical, simulated, and physical. The system engineers should ensure that solutions have been simulated, tested and analyzed against the above models. The system engineer, working with the IPTs, will revise the system or environmental models and system requirements (if too stringent for a viable solution) as necessary, until the design and requirements are fully compatible.

The systems engineering life-cycle model has three stages, broken into nine phases. The first stage is Concept Development, which consists of the Needs Analysis, Concept Exploration and Concept Definition phases. The second stage is Engineering Development, broken into the Advanced Development, Engineering Design and Integration and Evaluation phases. The last stage is Post Development, which includes the Production, Operation and Support, and finally the Phase-out and Disposal phases.


Stage 1: Concept Development


The purpose of concept development is to formulate and define the "best" system concept such that it satisfies a valid need. This requires an analysis of needs. Once a valid need is decided upon, the systems engineer must explore various designs and finally choose the "best" design.

Needs analysis establishes a valid need for the new system that is technically and economically feasible. Needs analysis encompasses operations analysis, functional analysis and feasibility definition, and ends with needs validation. If there is not a valid need, or the requested addition goes outside the bounds of the requirements or intent of the customer, then this can lead to scope creep, which will increase the need for money, lengthen the schedule and perhaps increase risk.

In the concept exploration phase of concept development, the systems engineer should examine and analyze potential system concepts, then formulate (and validate) a set of system performance requirements. In order for this to occur, the systems engineer must perform operations requirements analysis, performance requirements formulation, implementation concept exploration and performance requirements validation. Many of the security requirements fall into performance requirements rather than system requirements. This is another vital aspect of the systems engineering development lifecycle in which the security engineer should be involved.


The last phase of concept development is concept definition. Concept definition involves selecting the "best" system concept, defining functional characteristics, and then developing a detailed plan for engineering, production, and operational deployment of the system. In order for this phase to be complete, the systems engineer must perform requirements analysis, as explained earlier, and functional analysis and formulation; then the final concept must be selected and validated. Throughout this phase and the other phases in this first stage of systems engineering, the expertise of a security engineer should be employed to ensure that security is truly "baked in."


Stage 2: Engineering Development


The purpose of the second stage, engineering development, is to translate the system concept into a validated physical system design. The design must meet operational, cost, and schedule requirements. It is in this stage that engineers develop new technology, create the design and focus on the cost of production and use. This is the most intense stage for the IPT engineers, and this is the stage in which the system materializes. This stage has three phases.

The beginning phase is advanced development, in which the system engineers develop cutting edge technology for the selected system concept and then validate its capabilities to meet requirements. This requires that the system engineer perform requirements analysis, functional analysis and design, prototype development and finally development testing. As explained previously, it is imperative that the security engineer be involved with advanced technology choices. It is also necessary that the security engineer understand the functional analysis and design. Perhaps even more importantly, the security engineer should be involved in the prototype development and testing aspects of this phase. If prototypes are built to show functionality and the security measures are not part of the prototype, results can be skewed and problems can arise later in development. Testing is key as well, to assure the customer, whether DoD or commercial, that the security aspects of the system will be correctly incorporated.

The next phase is engineering design. This phase can include the development of a prototype system satisfying performance, reliability, maintainability, and safety requirements. Even without the creation of a prototype, it is still in this phase that the engineers must show how the system will meet the performance requirements. As stated earlier, many of the security requirements are performance based, and so this phase requires extra consideration on the part of the security engineer. Again, in an iterative fashion, the systems engineer must perform requirements analysis, functional analysis and design, component design and design validation.

The last phase is integration and evaluation. The systems engineer must show the ability for economical production and use, then demonstrate the operational effectiveness and suitability of the system. With respect to the security world, this is where certification and accreditation activities are at their peak.


This last phase also includes test planning and preparation. This will include a system requirements review to accommodate changes in customer requirements, technology or program plans. The systems engineer must pay special attention to all the test items: management oversight of the T&E functions, test resource planning, equipment and facilities, system integration, developmental system testing and then operational test and evaluation. On some programs a separate IPT is created for testing activities, although a systems engineer is included on the team. It is also necessary for a security engineer to assist as well. Test activities allow the security engineer to provide verification that all the security elements have been incorporated correctly.

It is also possible at this stage for a penetration testing team to become involved. This team can attack the system using a black box or white box method. If the penetration team has no previous knowledge of the system, this is considered a black box test and can be useful in providing an understanding of how easy an outside attack would be. White box testing may be even more useful at this stage. The white box team would be given knowledge of the system before the attack, which could allow them to more easily exploit any lingering vulnerabilities. This is a useful tool for risk assessments.


Operational testing for DoD programs is performed either by the government or by a third party; it is typically not done by the contractor or builder of the system. Documentation gathered throughout the life cycle of the system is put into a package. This is currently done through the DIACAP process. This package is given to the government for use in determining if the security controls have been met. If there are any discrepancies, the security engineer must work with the IPTs to rectify the problem.

It is at the end of this stage that the security engineer should be complete with the security mitigation implementation. However, as will be shown below, there are always changes necessary, and security must be incorporated through the entire cradle-to-grave lifecycle.


Stage 3: Post Development


The purpose of the last stage is to produce, deploy, operate, and provide system support throughout the system's useful life and provide for safe system phase-out and disposal. There are three phases: production, operation and phase-out.

During the production phase, the goal is to economically produce, develop, and manufacture the selected system. The chosen system must meet the specifications. It is in this phase that the engineers must begin engineering for production. This necessitates a transition from development to production. Design and production require two very different types of engineering. The production team must know how production is going to operate and they must have a knowledge base on how to produce the system. This is especially critical if the system is going to be mass produced. In the case where only one or two systems will be created, the production operation is going to look quite different from that of the mass produced system. During the production time frame there will be security changes. New vulnerabilities will be found, new attack vectors will be formed and additional threats to the system may arise. While the system is being produced, the security engineer must continue to research and understand the ever evolving security issues. Without the security engineer being involved during production, the security posture of the system will change.


The operational phase is when the system is being used. Typically it is the system administrator that is assigned the security tasks during operation. Instructions and manuals should be provided by the security engineers to the system administrator to ensure that the security posture stays intact. During operation there is a need to continually update the security policies and configurations, and to patch the system for new vulnerabilities. It is discouraged to apply patches to a system without first testing them in a lab to ensure that the new patches will not adversely affect the system.

This also involves supply chain security issues: how does the system administrator know that the patches can be trusted when they are sent from the lab? Or, if downloaded from the same site, how can it be assured that the same versions are used? There are a few techniques, such as digital signatures and hashes, that should be incorporated into this patching lifecycle. One area of research for this project will look at the mechanisms that should be included in the patching life cycle to ensure that the security posture of a system is not degraded over time.
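A minimal sketch of the hash-and-signature idea for the patching lifecycle, under the assumption that the lab publishes a digest and a signature with each patch and the system administrator verifies both before installation. The file contents and key handling are hypothetical; a fielded system would use an approved PKI and key management process rather than keys generated in place.

# Illustrative patch verification: the hash pins the exact patch contents, and the
# digital signature ties that content to the lab that released it.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- at the lab, at release time ---
lab_key = Ed25519PrivateKey.generate()
patch_bytes = b"...patch payload..."                 # hypothetical patch file contents
digest = hashlib.sha256(patch_bytes).hexdigest()     # published alongside the patch
signature = lab_key.sign(patch_bytes)                # published alongside the patch
lab_public_key = lab_key.public_key()                # distributed out of band

# --- at the operational site, before installation ---
def verify_patch(data, published_digest, sig):
    if hashlib.sha256(data).hexdigest() != published_digest:
        return False                                 # wrong version or corrupted download
    try:
        lab_public_key.verify(sig, data)             # confirms the lab produced this patch
    except InvalidSignature:
        return False
    return True

print(verify_patch(patch_bytes, digest, signature))            # True for the genuine patch
print(verify_patch(patch_bytes + b"x", digest, signature))     # False: contents changed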


The last phase is disposal. This can lead to vulnerabilities for other systems. When a system is being disposed of, information can still remain. Whether this information is encrypted or in the clear, it needs to be removed. Also, reverse engineering of old products can provide attackers insight into how to attack similar systems.

The replacement system will most likely be built upon existing technology. This means that if any of the products in the system being disposed of provide insight into the new system, or if the same or similar technology is being used, the new system will have a high risk of being exploited. The security engineer should be involved in the disposal of the system to provide protection for other systems and for the replacement system. This is another area of research that is proposed to be included in the final dissertation. There have been accounts in the news of military equipment ending up on eBay and in the hands of attackers. New methods of disposal or new policy should be researched and recorded to help mitigate this vulnerability.



IV. System Security Architecting Methodology Overview

System security is critical in this new era of cyber war. The war rages, not just for militaries and governments, but also for businesses. The attacks seem endless, and as the complexity of systems grows, so do the vulnerabilities within the system. One of the greatest problems is identifying the vulnerabilities when a system is being created. How does an architect determine what the risks, vulnerabilities and threats are with respect to a system? These questions must be answered, and to some degree quantified, in order to be sure that all aspects of a system are protected.


An early look at what a security architecture should contain is found in [89], which describes a few of the areas that were critical at that time. This was written based on the rainbow series [100]. These areas are still of concern and need to be attended to with even greater assiduity. Some of these areas are communications security, access control, and identification and authentication.

[85] describes what a security architecture should encompass:

Security architectures should be in line with NIST guidelines consisting of security control families outlined in NIST SP 800-53 ([53]) with regard to protecting the confidentiality, integrity, and availability of federal information and information systems. A comprehensive security architecture acknowledges current security services, tools and expertise, outlines forecasted business needs and requirements, and clearly articulates an implementation plan aligned with the agency's culture and strategic plans. Usually, the security architecture is supplemented with an integrated schedule of tasks that identifies expected outcomes (indications and triggers for further review/alignment), establishes project timelines, provides estimates of resource requirements, and identifies key project dependencies.



There are several thoughts on how to approach a security architecture to ensure a correct security posture for a system. The first approach is to add security into a normal enterprise level architecture. The second is to have a separate system security architecture. Another area that must be considered as part of a security architecture is threat modeling.

The DoD now requires that those performing security architecting obtain the Information System Security Architecture Professional (ISSAP) certification from ISC2. In their official book on security architectures [4], they propose the method shown in Figure 4. Although this method could be integrated into the overall systems engineering lifecycle, it still does not dictate a method for the creation of a security architecture, only the pieces that it should contain.

Figure 4: ISSAP Architecture Method [4]


There are many papers written on security architectures and frameworks. These architectures attempt to give a specific architecture to solve a problem, such as transient trust [15], email [69] or public computers with trusted end points [42]. One interesting paper is on architectures for fault tolerant systems [46], because it may be possible to abstract some of the principles and turn them around for systems that cannot tolerate faults. [46] covers several important aspects, such as cryptography and the trust of portions of a system, that should be included in the methodology for the creation of security architectures. [68] discusses security architectures for networks, although it is primarily focused on what should be taught to computer science students for them to better understand security for distributed systems. It also covers some of the attack vectors that could be used to exploit a system. These are also aspects that should be included in a security methodology.

The system security architecture must be totally aligned with the system architecture. The security architecture is a detailed explanation of how the requirements need to fit within the system. This needs to be done in such a way as to ensure that all vulnerabilities are covered, and not just by one layer of defense. There are two pieces to this: the first is to find the vulnerabilities and the other is to protect the system from allowing these vulnerabilities to be exploited.

Identifying vulnerabilities must be done throughout the lifecycle of system development. There are several ideas on how to identify vulnerabilities [82, 84, 94]. During development, the architect needs to design the system to reduce vulnerabilities. The software development team must create software that has minimized vulnerabilities. Once the pieces of the system go into integration, which is done in stages, tests for vulnerabilities must continue.


Many security architecture methods have been suggested [12], [34], [37], [68], [76], [47], [71], [97], [98], [99], [92]. Each of these methods for creating an architecture has a similar goal: build security into the system as a part of the architecture. Many times security is bolted onto the end of system development, or added after the system is completed. There are a few methods specifically geared towards multi-level security [6]. This type of architecture should be woven into the overall security architecture and is just one more facet that the security engineer must consider when building a system.

The biggest complaint made in the system security architecture documents is that security is done in an ad hoc manner [37]. If an overall security picture of the system is not developed in the beginning, technologies and procedures will be thrown at the system in hopes that something will stick. That leads to the hope that the stuff that sticks is good enough. How can you know what is good enough without a strategic understanding of the system, its environment, its operations, its users and the like? Simply, you can't. It is not possible to do a risk assessment of a system if all of the vulnerabilities, threats and mitigations are not known. A security architecture shows these things.

The frameworks provide a means for asking the right questions, but do not provide a methodology for creating a security architecture. How does one get started in creating an architecture? One of the primary focuses of the proposed research is the creation of this methodology. The methodology would work within the systems engineering methodology lifecycle, but will help the security engineer ensure that nothing has been left out. A skeletal outline of some of the issues that the methodology will cover is as follows (a minimal data-model sketch of this outline appears after the list):



1. Define the boundaries
   a. Typically the security engineer looks at the accreditation boundary, but must also understand environmental and cyber boundaries as well.
   b. Physical security is a critical component of this.

2. Define flows
   a. In and out of the system (across the boundaries)
   b. Within the system
      i. Trace final flows across the system
      ii. Define physical flows
      iii. Trace all entry and exit points

3. Determine edge points
   a. Determine which devices/products touch the edge
   b. Know how the devices interact with the edge
      i. Full Input/Output, Limited Input/Output, Total Block

4. Determine how edge products interact with internal products
   a. Full Connectivity, Limited Input/Output, Total Block
   b. HW/SW/FW interactions
   c. Which protocols
   d. Which ports
   e. Which services
   f. Which commands
      i. Determine the command structure
   g. Web services
      i. Browsers
      ii. Languages
      iii. Restricted, open, or controlled access
      iv. Mobile code

5. Define user interaction

6. Secure the software

7. Train personnel within the development organization and in the operational organization
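To make the outline above more concrete, the following is a minimal sketch, written in Java purely for illustration, of how the boundary, flow, and edge-point information gathered in steps 1 through 4 might be captured as a simple data model. Every class, field, and enum name here is an assumption introduced for this sketch; it does not correspond to any existing tool or standard.

import java.util.ArrayList;
import java.util.List;

// Illustrative only: a simple data model for the boundary/flow/edge outline above.
public class SecurityArchitectureModel {

    // Steps 3 and 4: how a device or product may interact across an interface.
    enum InteractionLevel { FULL_IO, LIMITED_IO, TOTAL_BLOCK }

    // Step 1: a boundary (accreditation, environmental, cyber, or physical).
    static class Boundary {
        final String name;
        final String type;
        Boundary(String name, String type) { this.name = name; this.type = type; }
    }

    // Step 2: a flow into, out of, or within the system.
    static class Flow {
        final String source;
        final String destination;
        final String protocol;   // step 4c
        final int port;          // step 4d
        final boolean crossesBoundary;
        Flow(String source, String destination, String protocol, int port, boolean crossesBoundary) {
            this.source = source;
            this.destination = destination;
            this.protocol = protocol;
            this.port = port;
            this.crossesBoundary = crossesBoundary;
        }
    }

    // Step 3: a device or product that touches the edge of the system.
    static class EdgePoint {
        final String device;
        final InteractionLevel edgeInteraction;      // step 3b: interaction with the edge
        final InteractionLevel internalInteraction;  // step 4: interaction with internal products
        EdgePoint(String device, InteractionLevel edge, InteractionLevel internal) {
            this.device = device;
            this.edgeInteraction = edge;
            this.internalInteraction = internal;
        }
    }

    final List<Boundary> boundaries = new ArrayList<>();
    final List<Flow> flows = new ArrayList<>();
    final List<EdgePoint> edgePoints = new ArrayList<>();

    public static void main(String[] args) {
        // Hypothetical example: one cyber boundary, one inbound flow, one edge device.
        SecurityArchitectureModel model = new SecurityArchitectureModel();
        model.boundaries.add(new Boundary("Enclave A", "cyber"));
        model.flows.add(new Flow("external-client", "web-gateway", "HTTPS", 443, true));
        model.edgePoints.add(new EdgePoint("web-gateway",
                InteractionLevel.LIMITED_IO, InteractionLevel.LIMITED_IO));
        System.out.println("Boundaries: " + model.boundaries.size()
                + ", flows: " + model.flows.size()
                + ", edge points: " + model.edgePoints.size());
    }
}

Capturing the information in a structured form such as this is what would later allow a change, for example opening a new port on an edge device, to be traced through the flows and boundaries it touches.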


A full methodology will be the result of the proposed research. In conjunction with the methodology, there needs to be a way to verify changes. An automated way of seeing how one change affects the system is something security engineers and architects need.


Architecture Program


Automating an architect's ability to see how a change will affect the security posture of the system would be a useful tool for those working on large systems of systems.


There are multiple tools that have been created to help model and simulate security in systems or to provide vulnerability analysis [6], [9], [28], [30], [36], [38], [39], [41], [43], [77], [101], [102], [103]. Each of these tools has a particular focus. Some focus on finding vulnerabilities in software, or in a specific product; some are focused on assessing specific types of architectures. These are two aspects of security engineering that need to be combined.




One of the areas of focus for this research will be the creation of a new program to help automate the vulnerability analysis of security architectures. A model of the system will be created through the input of system parameters such as flows, equipment, and boundary identification. This information will be correlated with a vulnerability database and a set of security practices to provide a mitigation plan. In [1], [18], [32], [88] the authors discuss various vulnerability databases and their construction. The vulnerability database idea, in conjunction with some of the other types of analysis tools, will be the basis for the construction of the tool that will be created as part of this research.
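As a rough illustration of the kind of correlation the proposed program would perform, the sketch below (in Java, and entirely hypothetical) matches items captured in the system model, such as products and protocols, against a small in-memory vulnerability database and emits candidate mitigations. A real implementation would draw on the vulnerability databases discussed in [1], [18], [32], [88]; the record contents and matching rule here are assumptions made only for illustration.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: correlate system-model items with a toy vulnerability database.
public class VulnerabilityCorrelator {

    // A single database entry: a known weakness and a suggested mitigation (Java 16+ record).
    record VulnRecord(String affectedItem, String weakness, String mitigation) {}

    // Toy "database" keyed by product or protocol name; a real tool would query CVE/OVAL-style data.
    private final Map<String, List<VulnRecord>> database = new HashMap<>();

    void addRecord(VulnRecord r) {
        database.computeIfAbsent(r.affectedItem(), k -> new ArrayList<>()).add(r);
    }

    // Given the products and protocols captured in the architecture model,
    // return the mitigations that apply: this is the core of the correlation step.
    List<String> mitigationPlan(List<String> modelItems) {
        List<String> plan = new ArrayList<>();
        for (String item : modelItems) {
            for (VulnRecord r : database.getOrDefault(item, List.of())) {
                plan.add(item + ": " + r.weakness() + " -> " + r.mitigation());
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        VulnerabilityCorrelator correlator = new VulnerabilityCorrelator();
        // Hypothetical records, not real advisories.
        correlator.addRecord(new VulnRecord("FTP",
                "credentials sent in cleartext", "replace with a protected transfer service or block at the edge"));
        correlator.addRecord(new VulnRecord("web-gateway",
                "service banner exposes version information", "apply the relevant hardening guidance"));

        List<String> plan = correlator.mitigationPlan(List.of("FTP", "web-gateway", "HTTPS"));
        plan.forEach(System.out::println);
    }
}

The key design point is that the same structured model that describes the architecture also drives the lookup, so re-running the correlation after a design change immediately shows which weaknesses and mitigations have been added or removed.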



System effectiveness models are used in systems engineering to analyze concepts of performance. It would seem, then, that creating a specific effectiveness model for security would be a reasonable way to mitigate risk and show how security mitigations affect the system. By creating a tool that allows key parameters to be entered, together with an algorithm describing how specific mitigations affect the system, a picture of how a security architecture is going to interact with the system could be generated. This could show where more mitigations need to be applied, and where a trade-off analysis should be done to determine whether a less costly mitigation might be possible without lessening the security posture of the system.
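A minimal sketch of such an effectiveness model is given below, again in Java and only as an illustration. It assumes a very simple scoring scheme in which each identified vulnerability carries a severity weight and each applied mitigation reduces that weight by an assumed effectiveness factor; the weights, factors, and interpretation are placeholders, not validated values, and the model produced by this research would need far richer parameters.

import java.util.List;

// Illustrative sketch: a toy security-effectiveness score over a set of vulnerabilities.
public class EffectivenessModel {

    // A vulnerability with an assumed severity weight (0-10) and the assumed
    // effectiveness (0.0-1.0) of the mitigation currently applied to it.
    record WeightedVuln(String name, double severity, double mitigationEffectiveness) {}

    // Residual risk = severity remaining after each mitigation is credited.
    static double residualRisk(List<WeightedVuln> vulns) {
        return vulns.stream()
                .mapToDouble(v -> v.severity() * (1.0 - v.mitigationEffectiveness()))
                .sum();
    }

    public static void main(String[] args) {
        List<WeightedVuln> vulns = List.of(
                new WeightedVuln("open management port", 7.0, 0.9),   // largely mitigated
                new WeightedVuln("weak session handling", 5.0, 0.5),  // partially mitigated
                new WeightedVuln("unencrypted backup link", 6.0, 0.0) // no mitigation yet
        );
        System.out.printf("Residual risk score: %.1f%n", residualRisk(vulns));
        // A trade-off analysis could compare this score before and after substituting a
        // cheaper mitigation, to see whether the security posture is actually lessened.
    }
}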


Once an effectiveness model is created, it should be compared to the actual security implementation. This could aid in the development of threat modeling and risk assessment against a system. Instead of looking only at "best practices," this approach would provide a more concrete and quantifiable way of showing how well protected a system is. It could also serve as an effective communication tool between developer and customer as to where security is being applied and why. In the DoD arena, it could be used as a means of proof that the security posture will protect the system at the appropriate level.


There are other papers on vulnerability assessments of specific types of systems, such as power systems and control systems [2], [3], [9], [14], [31], [35], [43]. Looking at the commonality between these assessments and the tools mentioned previously, it seems possible to create a next generation of tool that combines multiple functions. The first function would be the generation of a simulated system security architecture; the second would take the simulated architecture and perform a vulnerability assessment. This would allow an assessment to be done on an architecture, and flaws to be found, before the system is completed, which would allow for changes. This would help address one of the most critical problems in security engineering.


Software Security


Software security is a critical element in the security architecture. Software security and security software are not the same. Security software can be insecure if the software is not built securely. Writing code that is secure is vital to producing code that is more resistant to vulnerabilities. There is increasing ability for attackers to take advantage of code flaws, with the advent of point-and-click attack tools. Previously, only elite attackers had the knowledge and skill set to exploit seemingly obscure code vulnerabilities. In today's culture, though, script kiddies can use a list of tools to attack without understanding the underlying software construction. This leads to the need for software that is built to withstand the barrage of attacks that it will encounter. So then, what is secure software? "Secure software is software that is able to resist most attacks, tolerate the majority of attacks it cannot resist, and recover quickly with a minimum of damage from the very few attacks it cannot tolerate" [40]. That is to say, when attacked, the code will still behave in the way it was intended. Not only should the code be correct, reliable, and fault tolerant, but it should still do these things under duress. There are several areas in the software lifecycle that must be modified to produce secure code.


There are language-specific issues as well as general practices that will increase the security of code. On large programs, multiple languages are sometimes used. This means that the security engineer needs to be able to work with the software team to reduce software vulnerabilities. C and C++ are popular languages for embedded systems, such as those used for flight control. There are both compiler protections and language precautions, as found in [48-67]. Java also has protections that must be looked at, such as those in [16], [81], [87]. There are more general methods as well, such as security patterns [10], [44], [90].
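As one small example of the kind of language-level precaution such guidelines describe (written here in Java in the spirit of [81], though not taken from it), the sketch below validates untrusted input before accepting it and makes a defensive copy of a mutable argument so that a caller cannot alter internal state later; the class and method names are illustrative only.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative secure-coding sketch: validate untrusted input and copy mutable arguments.
public final class CommandQueue {

    private final List<String> commands;

    public CommandQueue(List<String> initialCommands) {
        // Defensive copy: the caller keeps no reference that can mutate our internal list.
        this.commands = new ArrayList<>(initialCommands);
    }

    public void add(String command) {
        // Validate untrusted input before it enters the system.
        if (command == null || command.isBlank()) {
            throw new IllegalArgumentException("command must be non-empty");
        }
        if (command.length() > 256) {
            throw new IllegalArgumentException("command exceeds maximum length");
        }
        commands.add(command);
    }

    public List<String> snapshot() {
        // Expose internal state read-only rather than handing out the mutable list.
        return Collections.unmodifiableList(new ArrayList<>(commands));
    }
}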


One effective tool is static code analysis, such as Fortify 360, if used appropriately as part of an overall lifecycle approach. Another area of proposed research is software security and its incorporation not only into the software development lifecycle, but into the overall systems engineering lifecycle.



Other Considerations of the Architecture


Aside from requirements, methodology, and architecture, the use of approved products in the system is important. There are types of products, such as intrusion detection, firewalls, and cryptography, that systems engineers go to for protection. These products are always changing, with new research [17]. Access control is one of the biggest considerations, both for physical access and cyber access [79]. In the cyber realm, one of the protections is cryptography, although cryptography is used in other ways as well.


Cryptography is used primarily for confidentiality, but is also used for non-repudiation, access control, and integrity. There are different certification authorities depending on where the cryptography is being used. The National Institute of Standards and Technology (NIST) and the National Security Agency (NSA) are the two certifying authorities for cryptographic products and provide specific requirements for the use and implementation of cryptography [25], [26], [27], [72], [74].
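Purely as an illustration of reaching an approved algorithm through a standard library, and emphatically not as a substitute for a certified cryptographic product, the sketch below encrypts a buffer with AES-GCM via the standard Java javax.crypto API. Key sizes, nonce handling, and key management in a real DoD system would be dictated by the applicable NIST and NSA guidance [25], [26], [27], [72], [74].

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Illustrative sketch: AES-GCM encryption using the standard Java crypto API.
public class AesGcmExample {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key; a real system would obtain keys from a managed key store.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // GCM requires a unique nonce (IV) for every encryption under the same key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("example payload".getBytes(StandardCharsets.UTF_8));

        System.out.println("Ciphertext length: " + ciphertext.length + " bytes");
    }
}

For classified information, an approved product (for example, a certified Type-1 device or an NSA Suite B compliant solution [72]) would be required rather than application-level code; the sketch only shows that approved algorithms are readily reachable from standard libraries.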




The creation of a key management plan, as well as the use of cryptographic products in a system, must be considered early in the lifecycle of a program. Cryptographic products can take two years or more to be certified, and have to be "blessed" in order for them to be used in DoD systems. This is a long process and can have a great impact on a system. If the system can be protected without the use of cryptography, in some cases this might be the preferred choice, as cryptography can be costly.


For example, classified material in a system must be either protected or sanitized. If the choice is cryptographic protection, it must be done using a Type-1 product, which can cost a million dollars or more. If instead the memory can be sanitized, perhaps by using volatile memory instead of non-volatile memory, then the cost can be reduced to thousands of dollars instead of millions.


In DoD systems, TEMPEST is typically a requirement [73]. This provides protection between signals that are processing information at two different levels of classification. This is also known as red-black separation. Formal TEMPEST requirements would not be used in the commercial world, but similar principles may apply. This is another area of consideration that must be at the forefront of the security engineer's thoughts when designing security into the system.


The last major consideration is protection against reverse engineering. This applies to both commercial and DoD systems. In DoD systems, the military must ensure that other countries cannot reverse engineer the system and gain a similar advantage. In the commercial world, if a system can be easily reverse engineered, the company can lose revenue and can be destroyed if its products can be developed more cheaply and easily by another company. This can have a large effect on programs.


V. Program Effects


Cost, schedule, and risk can make or break a program. Part of this research will look at how best to incorporate the following into the program schedule to reduce risk and cost: security mechanism choice and certification, cryptographic certification, cross domain solutions and their certifications, and system certification and accreditation. The effects of these can greatly increase cost, induce schedule delays, and increase risk. It is important that a security engineer be involved to help mitigate unwanted effects.




The other part that can affect the program is the division of labor. In most large DoD programs, the teams are divided such that there is a systems integration team and other Integrated Product Teams (IPTs). The security engineers are typically a part of the systems integration team. One of the areas of research being proposed as part of this dissertation is to see the effects of changing how the security team is integrated into a program. Although it is necessary for the security engineers to be fully integrated into the systems engineering team, it is also necessary that there are security engineers involved with each IPT. This research will answer the question: what is the best construction of teams such that security is fully integrated?



VI. Summary of Path to Dissertation Defense


Proposed Research Approach


The inclusion of security in a system is a difficult and long process for a large system, and it covers many aspects. There does not seem to be a cohesive process or body of literature for the entire process; there are papers on one aspect or another, but nothing that details, from beginning to end, how security should be integrated. Once a system is identified (meaning the engineers are told what they are bidding to build), security should start to influence the system. Everything from IA requirements at a high level, to their decomposition into functional and implementable requirements, all the way to designing, implementing, and testing the requirements, must be included. Security cannot stand alone; it must rely on many of the capabilities a system already possesses. With this in mind, there are several areas of research I would be interested in pursuing in order to create a cohesive body of literature that will help other security engineers build large secure systems.




1. The integration of security into the systems engineering lifecycle methodology. Answer how security should best be integrated into each phase of the lifecycle.

2. Creation of a system security architecture methodology.

3. Creation of a Security Architecture Vulnerability Analysis Program. This program will incorporate the findings of some of the research presented herein, as well as information gathered throughout the course of the dissertation research process. This program will be used to help the security engineer architect the system, understand where changes affect the system, and provide a way to perform a vulnerability analysis on a model of the system.





4. Secure coding standards as part of the software security architecture. There are many static code analysis tools that look at the security of code, but how do you get the engineers to use these best practices in the first place? Even in CMMI Level 5 environments, what is safe to use in code and what is not does not seem to be widely understood. This is an area that I am currently researching to help the software engineers on my team. It is fundamental to protecting a system: if the code can be easily manipulated, then the best outside defenses do little good. How can this be accomplished? What steps must be taken? I am working with engineers in five languages; how does that affect code security? Can the intermingling of languages cause a problem for security (i.e., can it allow one area of the code to manipulate another because of the languages it is written in)?

5. There are multiple protection mechanisms that require correct insertion into the overall schedule of a program. There is a process for certification, but these processes are not included in most literature on security architecture. I would like to investigate how best to incorporate these processes into the overall systems engineering lifecycle, as part of adding security into the systems engineering lifecycle methodology.



Plan and potential risks


I have begun researching the systems engineering lifecycle, security architectures, and software security. In the Spring of 2011, I would like to try to publish some of the research on security architectures and methodologies, as well as software security. I have published some of the research in newsletters and a software symposium, but these are not peer-reviewed journals. My goal for the current semester is to publish at least one paper, and one more the following semester. The research schedule is shown in Figure 5.


One of the potential risks is that the DoD and the Navy may have problems with publication, not because the material is sensitive, but because of my involvement with a DoD program. Any potential publication will have to go through both my company and the Navy program office, which could lead to complications for publication. I have alerted my program office to the fact that I will be publishing, so that I can begin a working relationship with them to smooth the way for publication.




Figure 5: Research Timeline (chart not reproduced here; the timeline spans August 2009 to April 2012 and covers Software Security Research, Software Security Application, Security Architecture Methodology, Vulnerability Analysis and Architecture Tool Development, Experimentation and Fine Tuning of the Tool, Application of the Tool to the Real System, Changes to the System Due to Tool Results, Analysis of Data from Penetration Testing, and the Enhanced Systems Engineering Lifecycle Methodology).

Evaluation Plan


Through each phase of the research, the results will be applied to both a toy system and my current system. I will be using the research to further the security of my current program. However, in order to ensure that there is no cross-over, a mock system will be created to show how the methodologies apply and to show the validity of the security architecture vulnerability program. At the end of 2011 and the beginning of 2012, we will be performing a penetration test on the system under development. This will validate the security methodology, but those results cannot be disclosed. Therefore, I propose to create a mock system, apply the system security architecture methodology as part of the systems engineering lifecycle methodology, and implement the resulting security mitigations. Once the mock system is set up, I will perform penetration testing on it. Part of the evaluation may be to give the mock system to a group of students to penetration test, so that it is a fair test. Also, as part of the publications, I will be working with security architects on other programs and providing them the research. I will have them assess the information and see whether it is applicable to their programs as well.


Success Criteria


1. Demonstrate a working program that can assess a security architecture for vulnerabilities.

2. Provide a system security architecture methodology that covers all aspects of creating a useful security architecture, and show that it can be used in other programs and on a variety of complex systems.




3. Provide a process and method for incorporating security into the overall systems engineering methodology, and show that it can be used in other programs and on a variety of complex systems.


Potential contributions


1. Provide a new methodology for the creation of security architectures for large complex systems.

2. Provide a new methodology for the incorporation of security into the overall systems engineering lifecycle.

3. Provide a new tool to assess the vulnerabilities in a system, based on a model, before the system is built.




References:

[1] Adrian Arnold, Bret Hyla, Neil Rowe, "Automatically Building an Information-Security Vulnerability Database," Proc. of the 2006 IEEE Workshop on Information Assurance, US Military Academy, West Point, NY, 2006 © IEEE.
[2] Ahmed M. A. Haidar et al., "Vulnerability Assessment and Control of Large Scale Interconnected Power Systems Using Neural Networks and Neuro-Fuzzy Techniques," 2008 Australasian Universities Power Engineering Conference.
[3] Ahmed M. A. Haidar, Azah Mohamed, and Aini Hussain, "Vulnerability Assessment of Power System Using Various Vulnerability Indices," 4th Student Conf. on Research and Development, Malaysia, 27-28 2006 © IEEE.
[4] Alex Golod, et al., Official (ISC)2 Guide to the ISSAP CBK, Auerbach Publications, Taylor & Francis Group, Boca Raton, FL, 2011 © Taylor and Francis Group, LLC.
[5] Alexander Kossiakoff and William N. Sweet, Systems Engineering Principles and Practice, 2003 © John Wiley and Sons Inc.
[6] Asa Elkins, Jeffery Wilson, Denis Gracanin, "Security Issues in High Level Architecture Based Distributed Simulation," Proceedings of the 2001 Winter Simulation Conference, 2001 © WSC'01.
[7] Behrouz A. Forouzan, Introduction to Cryptography and Network Security, McGraw-Hill, New York, NY, 2008.
[8] Birgit Pfitzmann, "Multi-layer Audit of Access Rights," W. Jonker and M. Petković (Eds.): SDM 2007, LNCS 4721, pp. 18-32, 2007.
[9] Chee-Wooi Ten, Chen-Ching Liu, Manimaran Govindarasu, "Vulnerability Assessment of Cybersecurity for SCADA Systems Using Attack Trees," 2007 © IEEE.
[10] Chess, McGraw, Tsipenyuk, "Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors," Fortify Software.
[11] Chien-Ding Lee, Wei-Bin Lee, "A Cryptographic Key Management Solution for HIPAA Privacy/Security Regulations," IEEE Transactions on Information Technology in Biomedicine, Vol. 12, No. 1, January 2008.
[12] Clive Blackwell, "A Multi-layered Security Architecture for Modelling Complex Systems," CSIIRW'08, May 12-14, 2008 © ACM.
[13] Committee on National Security Systems Instruction No. 1253, 2009 October, "Security Categorization and Control Selection for National Security Systems, Version 1," Committee on National Security Systems.
[14] Cuijiao Fu, "Research of Security Vulnerability in the Computer Network," 2010 © IEEE.
[15] Cynthia Irvine et al., "A Security Architecture for Transient Trust," CSAW'08, October 31, 2008 © ACM.
[16] Dan Wallach et al., "Extensible Security Architecture for Java," 1997 © ACM.
[17] David Brumley et al., "Towards Automatic Generation of Vulnerability-Based Signatures," Proc. 2006 IEEE Symposium on Security and Privacy, 2006 © IEEE.
[18] David McKinney, "Vulnerability Bazaar," IEEE Computer Society, 2007 © IEEE.
[19] Department of Defense Instruction 8500.2, February 6, 2003.
[20] Department of the Navy (DoN), "Security Control Mapping," SECNAV DON CIO, 1000 Navy Pentagon, Washington, DC.
[21] DISA, 2008 July 24, "Application Security and Development STIG Version 2, Release 1," DISA Field Security Operations.
[22] DISA, 2006 January 17, "Application Services STIG Version 1 Release 1," DISA Field Security Operations.
[23] DISA, 2007 September 19, "Database STIG Version 8 Release 1," DISA Field Security Operations.
[24] DISA, 2008 March 28, "DoDI 8500-2 IA Control Checklist, MAC 1 Classified, Version 1 Release 1.4," DISA Field Security Operations.
[25] Elaine Barker, Don Johnson, and Miles Smid, "Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography," NIST SP 800-56A, March 2007.
[26] Elaine Barker, et al., "Recommendation for Key Management Part 2: Best Practices for Key Management Organization," NIST Special Publication 800-57, March 2007.
[27] Elaine Barker, et al., "Recommendation for Key Management, Part 3: Application Specific Key Management Guidance," NIST Special Publication 800-57, August 2008.
[28] Fanglu Guo, Yang Yu, Tzi-cker Chiueh, "Automated and Safe Vulnerability Assessment," Proc. 21st Annual Computer Security Applications Conference, 2005 © IEEE.
[29] Foundstone®, Inc. and CORE Security Technologies, 2000, "Security in the Microsoft® .NET Framework," Available: http://www.foundstone.com/us/resources/whitepapers/dotnet-security-framework.pdf
[30] Gang Zhao, Xiao-hui Kuang and Weimin Zheng, "An Emulation Environment for Vulnerability Analysis of Large-scale Distributed System," 2009 8th Int. Conf. on Grid and Cooperative Computing, 2009 © IEEE.
[31] Geraldine Vache, "Vulnerability Analysis for a Quantitative Security Evaluation," 3rd Int. Symposium on Empirical Software Engineering and Measurement, 2009 © IEEE.
[32] Gu Yun-hua, Li Pei, "Design and Research on Vulnerability Database," 2010 3rd Int. Conf. on Information and Computing, 2010 © IEEE.
[33] Hermann Hartig, "Security Architectures Revisited," EW 10: Proc. 10th Workshop on ACM SIGOPS European Workshop, 2002 © ACM.
[34] Heru Susanto, Fahad bin Muhaya, "Multimedia Information Security Architecture Framework," 2010 © IEEE.
[35] Hu Rao, Chen Chao Tian, "Information Security Vulnerability Analysis System Based on Dynamic Cooperation Mechanism," World Congress on Software Engineering, 2009 © IEEE.
[36] HyunChul Joh, Jinyoo Kim, and Yashwant K. Malaiya, "Fast Abstract: Vulnerability Discovery Modeling Using Weibull Distribution," 19th Int. Symposium on Software Reliability Engineering, 2008 © IEEE.
[37] John Sherwood, "SALSA: A Method for Developing the Enterprise Security Architecture and Strategy," 1996 © Elsevier Science Ltd. doi:10.1016/S0167-4048(97)83124-0.
[38] Ju An Wang et al., "Measuring Similarity for Security Vulnerabilities," Proc. 43rd Hawaii Int. Conf. on System Sciences, 2010 © IEEE.
[39] Jun Yoon and Wontae Sim, "Implementation of the Automated Network Vulnerability Assessment Framework," 2008 © IEEE.
[40] Karen Goertzel, "Software Security Assurance: A State of the Art Report," IATAC, Defense Technical Information Center.
[41] Liu Rui, "Optimization of Hierarchical Vulnerability Assessment Method," Proceedings of IC-BNMT2009, 2009 © IEEE.
[42] Lu Zhuang, Chen Li and Xing Zhang, "A Novel Architecture for Trusted Computing on Public Endpoints," 2010 2nd Int. Conf. on Networks Security, Wireless Communications and Trusted Computing, 2010 © IEEE.
[43] Lutz Lowis, Rafael Accorsi, "Vulnerability Analysis in SOA-based Business Processes," IEEE Transactions on Services Computing, 2010 © IEEE.
[44] M. A. Hadavi, et al., "Software Security; A Vulnerability-Activity Revisit," 3rd International Conf. on Availability, Reliability and Security, 2008 © IEEE.
[45] McGraw, Gary, "Twelve Rules for Developing More Secure Java Code," JavaWorld, Dec. 1, 1998.
[46] Michael Reiter, Kenneth Birman, Robbert Van Renesse, "A Security Architecture for Fault-Tolerant Systems," ACM Transactions on Computer Systems, Vol. 12, No. 4, November 1994, pp. 340-371.
[47] Michelle Oda, Huirong Fu, and Ye Zhu, "Enterprise Information Security Architecture: A Review of Frameworks, Methodology and Case Studies," 2009 © IEEE.
[48] Microsoft Corporation © 2010, "Checked (C# Reference)," Available: http://msdn.microsoft.com/en-us/library/74b4xzyw.aspx
[49] Microsoft Corporation © 2010, "Dangerous Permissions and Policy Administration," Available: http://msdn.microsoft.com/en-us/library/wybyf7a0.aspx
[50] Microsoft Corporation © 2010, "Detecting and Correcting Managed Code Defects," Available: http://msdn.microsoft.com/en-us/library/y8hcsad3.aspx
[51] Microsoft Corporation © 2010, "How to: Run Partially Trusted Code in a Sandbox," Available: http://msdn.microsoft.com/en-us/library/bb763046.aspx
[52] Microsoft Corporation © 2010, "Permission Requests," Available: http://msdn.microsoft.com/en-us/library/d17fa5e4.aspx
[53] Microsoft Corporation © 2010, "Securing Method Access," Available: http://msdn.microsoft.com/en-us/library/c09d4x9t.aspx
[54] Microsoft Corporation © 2010, "Security Best Practices for C++," Available: http://msdn.microsoft.com/en-us/library/ee480151.aspx
[55] Microsoft Corporation © 2010, "Secure Coding Guidelines," Available: http://msdn.microsoft.com/en-us/library/8a3x2b7f.aspx
[56] Microsoft Corporation © 2010, "Securing State Data," Available: http://msdn.microsoft.com/en-us/library/39ww3547.aspx
[57] Microsoft Corporation © 2010, "Securing Wrapper Code," Available: http://msdn.microsoft.com/en-us/library/6f5fa4y4.aspx
[58] Microsoft Corporation © 2010, "Security (C# Programming Guide)," Available: http://msdn.microsoft.com/en-us/library/ms173195.aspx
[59] Microsoft Corporation © 2010, "Security and On-the-Fly Code Generation," Available: http://msdn.microsoft.com/en-us/library/x222e4ce.aspx
[60] Microsoft Corporation © 2010, "Security and Public Read-only Array Fields," Available: http://msdn.microsoft.com/en-us/library/ms172409.aspx
[61] Microsoft Corporation © 2010, "Security and Race Conditions," Available: http://msdn.microsoft.com/en-us/library/1az4z7cb.aspx
[62] Microsoft Corporation © 2010, "Securing Exception Handling," Available: http://msdn.microsoft.com/en-us/library/8cd7yaws.aspx
[63] Microsoft Corporation © 2010, "Security and Remoting Considerations," Available: http://msdn.microsoft.com/en-us/library/82wf1hcz.aspx
[64] Microsoft Corporation © 2010, "Security and Serialization," Available: http://msdn.microsoft.com/en-us/library/ek7af9ck.aspx
[65] Microsoft Corporation © 2010, "Security and Setup Issues," Available: http://msdn.microsoft.com/en-us/library/sw5swefy.aspx
[66] Microsoft Corporation © 2010, "Security and User Input," Available: http://msdn.microsoft.com/en-us/library/sbfk95yb.aspx
[67] Microsoft Corporation © 2010, "Unsafe Code and Pointers (C# Programming Guide)," Available: http://msdn.microsoft.com/en-us/library/t2yzs44b.aspx
[68] Mohamad Neilforoshan, "Network Security Architecture," 2004 © Consortium for Computing Sciences in Colleges.
[69] Munawar Hafiz, "Security Patterns and Evolution of MTA Architecture," OOPSLA'05, 2005 © ACM.
[70] Murdoch, "Security Measurement," PSM Safety & Security TWG, 13 January 2006.
[71] National Institute of Standards and Technology Special Publication 800-53 Revision 3, August 2009 (includes updates as of 05-01-2010), "Recommended Security Controls for Federal Information Systems and Organizations," NIST, Gaithersburg, MD.
[72] National Security Agency, 2009 November 2, "NSA Suite B Cryptography," Available: http://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml
[73] National Security Agency, "NSTISSAM TEMPEST/1-92, Compromising Emanations Laboratory Test Standard, Electromagnetics," 15 December 1992.
[74] National Security Agency, "Suite B Implementer's Guide to NIST SP 800-56A," July 28, 2009.
[75] National Security Agency Information Assurance Solutions Technical Directors, "Information Assurance Technical Framework," Release 3.0, September 2000.
[76] "National Information Assurance Glossary," CNSS Instruction No. 4009, 26 April 2010.
[77] Nian Liu, Jianhua Zhang, "Vulnerability Assessment for Communication Network of Substation Automation Systems to Cyber Attack," 2009 © IEEE.
[78] North American Electric Reliability Council, "Control System - Business Network Electronic Connectivity," NERC, May 3, 2005.
[79] North American Electric Reliability Council, 2002 June 14, "Cyber-Access Control, Version 1.0," NERC.
[80] North American Electric Reliability Council, 2004, "NERC Cyber Security Activities," Available: http://www.nerc.com
[81] Oracle, "Secure Coding Guidelines for the Java Programming Language, Version 3.0," Available: http://java.sun.com/security/seccodeguide.html
[82] Peter Mell, Sasha Romanosky, Karen Scarfone, "Common Vulnerability Scoring System," IEEE Computer Society, 2006 © IEEE.
[83] Public Safety Wireless Networking Program, "Key Management Plan Template for Public Safety Land Mobile Radio Systems," February 2002.
[84] Ratsameetip Wita, Nattanatch Jiamnapanon, and Yunyong Teng-amnuay, "An Ontology for Vulnerability Lifecycle," 3rd Int. Symposium on Intelligent Information Technology and Security Informatics, 2010 © IEEE.
[85] Richard Kissel et al., National Institute of Standards and Technology Special Publication 800-64, Revision 2, 2008 October, "Security Considerations in the System Development Life Cycle," NIST, Gaithersburg, MD.
[86] Richard McAllister, "Information Systems Security Engineering, The Need for Education," Workshop for Application of Engineering Principles to System Security Design, 2002. Available: http://www.acsac.org/waepssd/papers/05-mcallister.pdf
[87] Richard S. Hall, 2005 May 13, "Oscar Security," Release version 1.0.5, Available: http://oscar.ow2.org/security.html
[88] Robert Martin, "Integrating Your Information Security Vulnerability Management Capabilities Through Industry Standards (CVE&OVAL)," 2003 © IEEE.
[89] Robert Shirey, "Defense Data Network Security Architecture," MITRE Corporation.
[90] Sabah Al-Fedaghi, "System-Based Approach to Software Vulnerability," IEEE Int. Conf. on Social Computing / IEEE Int. Conf. on Privacy, Security, Risk and Trust, 2010 © IEEE.
[91] SABSA Ltd. © 2010, "The SABSA Matrix," Available: http://www.sabsa.org/the-sabsa-method/the-sabsa-matrix.aspx
[92] SABSA Ltd. © 2010, "The SABSA Method," Available: http://www.sabsa-institute.org/
[93] Secretary of the Air Force, "Air Force Instruction 33-203, Vol. 1," October 31, 2005.
[94] Shyuan Jin, "A Review of Classification Methods for Network Vulnerability," Proc. of the 2009 IEEE Int. Conf. on Systems, Man and Cybernetics, San Antonio, TX, October 2009 © IEEE.
[95] Suhair Hafez Amer and John A. Hamilton, Jr., "Understanding Security Architecture," SpringSim, 2008.
[96] Timothy Levin et al., "Analysis of Three Multilevel Security Architectures," CSAW'07, 2007 © ACM.
[97] Wikipedia, 2010 December 14, "Department of Defense Architecture Framework," Available: http://en.wikipedia.org/wiki/Department_of_Defense_Architecture_Framework
[98] Wikipedia, 2010 December 10, "Information Assurance," Available: http://en.wikipedia.org/wiki/Information_assurance
[99] Wikipedia, 2010 December 10, "Zachman Framework," Available: http://en.wikipedia.org/wiki/Zachman_Framework
[100] Wikipedia, 2010 October 3, "Rainbow Series," Available: http://en.wikipedia.org/wiki/Rainbow_Series
[101] You-Chun Zhang et al., "Research on the Architecture of Vulnerability Discovery Technology," Proc. 9th Int. Conf. on Machine Learning and Cybernetics, Qingdao, 11-14 July 2010 © IEEE.
[102] Yu Shun, Shuang Kai, Yang Fang Chun, "An Attack Based IMS Vulnerability Validated Platform," 2010 Int. Conf. on Communications and Mobile Computing, 2010 © IEEE.
[103] Zhi-Yong Li et al., "A Network Security Analysis Method Using Vulnerability Correlation," 2009 5th Int. Conf. on Natural Computation, 2009 © IEEE.