THE ENDURING COMMON OPERATING PICTURE

4/16/2013
Plans & Strategy Document

Table of Contents

DOCUMENT HISTORY
HISTORICAL PERSPECTIVE
BACKGROUND DOCUMENTS
INTRODUCTION
COMMON OPERATING PICTURE SUPPORT DECISIONS
    Contracted Support
    Organic Support
    Recommendation
COMMON OPERATING PICTURE
    UL/UC2
    WebEOC Mapper Professional 3.0
    Air Force Incident Manager (AFIM)
    Installation Incident Manager (Proposed)
    Recommendation
OTHER CONSIDERATIONS
    External Data Access
    Mobile Data Collection
    Data Standards
    Data Backups
    Redundancy
    More Considerations
DEPLOYMENT TIMELINE
INTERFACE EXAMPLES
    Security Forces Incident C2
    Fire Department Incident C2
ADDITIONAL AREAS OF INTEREST
PROPOSED INFRASTRUCTURE LAYOUT
    VMWare
    SharePoint Dashboard
    Data Redundancy
    SIPR Deployment
CONCLUSION





DOCUMENT HISTORY

Version | Date | Description | Author | Approved By
1.0 | April 16, 2013 | Initial Concept & Plan | | Pending Approval




HISTORICAL PERSPECTIVE

The Common Operating Picture (COP) is the primary source of spatial information when responding to any incident on an installation. While the name has changed over the years, the commander's goal for any COP has remained the same: to obtain the best picture possible.

In September of 2009, most installations' COPs were found deficient in meeting the intent of the Emergency Communication Center Enabling Concept memo published by Col Curt A. Van De Walle, then Chief of Readiness and Emergency Management Division, DCS/Logistics Installations & Mission Support, and Col Tracey M. Heck, then Deputy Chief of Force Protection and Operations Division, DCS/Logistics Installations & Mission Support. This memo directed Civil Engineer (CE) Unit and Security Forces (SFS) commanders to establish virtual or co-located Joint Emergency Communication Centers (ECC) where Medical, Fire, and Security Forces personnel operated hand in hand. Many installations lacked the space, the funding, or both to meet the requirements outlined in Phase 2 within that fiscal year or for several to come, but Phase 1's requirements were to leverage existing information systems technologies to establish a virtual ECC within 90 days of the memo being published. The attachment to the memo, the ECC Enabling Concept white paper, directed the improvement of communication flow during an emergency by ensuring installation leadership was provided a single point of information: a dashboard that included a single COP shared among commanders.

In the past, emergency command centers such as the Base Defense Operations Center (BDOC) would run several different applications to meet their requirements, none of which were compatible with one another nor met all the demands of the using agency. Several applications were studied during the 90-day countdown to meet initial operating capabilities (IOC), including TBMCS, UL/UC2, WebEOC, SRC-Lite, and others. The largest drawback of any one system was predominately that it required new information systems to be installed, required contracted support, did not meet the demands of all stakeholders for emergencies and wartime footings simultaneously, and could not be accomplished within the 90-day period. A ray of hope resided within the ECC Enabling Concept white paper, however: GeoBase was included in a listing of capabilities essential for successfully enabling the concept.

Most commanders took this literally and directed their GeoBase support offices to begin preparations to provide the COP for the emerging ECCs. Installations within ACC were split between software solutions such as ArcGIS, which required a tremendous amount of training but was immensely flexible to whatever situation presented itself. PACAF installations were split between TBMCS, UL/UC2, ArcGIS, and a newly developed capability called the Air Force Incident Manager (AFIM), which rode upon the existing GeoBase internet map services.

In 2010, two articles discussing competing COP ideas were published by AFCE Magazine. In volume 18, issue number 3, articles published by HQ USAF/A7CXR and PACAF installations pointed out several common approaches and concepts. However, one of the articles failed to cover the basic problem for most installations: the capability to reconfigure the system on the fly by properly trained personnel to handle any situation, be it a peacetime natural disaster that took an unexpected turn or a wartime scenario where asymmetric warfare was being taken to a whole new level never anticipated by the COP designers. This, in our opinion, should be the single most important point, aside from compatibility with other systems, used to decide which system reigns superior and essential for a proper COP.

The remainder of this document seeks to find a balance between a flexible system that meets today's emergency response and war fighting demands, while remaining cost effective for the foreseeable future.




BACKGROUND DOCUMENTS

Document | Date Published | Source | File
ECC Enabling Memo w/ Attachments | September 2009 | HQ USAF/A7 | ECC_Enabling_Concept_22%20Sep%2009%20(2).pdf
AFCE Magazine Vol 18 Issue 3 | 2010 | AFCESA | AFD-101230-015.pdf
Regional Incident Management Status Report | March 2010 | 374 CES | RIM%20Status%20Report.pdf
Yokota C2 Application FY10 | October 2010 | 374 CES | Yokota%20C2%20Application%202010(small).pptx
MacDill BDOC Training | December 2009 | 6 SFS | BDOC_TRAINING_DRAFTv8.pptx
AFGSC GeoBase Standup - IOC | May 2010 | AFGSC/A7RT | AFGSC%20GeoBase%20Standup%20-%20IOC.pptx





INTRODUCTION

In the following sections of this document, we will lay out many of the discussions that should occur when choosing the future common operating picture for the installation. Each discussion contains a bit of background, the options available, and a recommendation. The aim of the writers is to provide a balanced approach that sacrifices neither capability nor treasure, especially in these fiscally austere times.

COMMON OPERATING PICTURE SUPPORT DECISIONS

Before choosing which type of common operating picture to pick for the installation, it is important to decide what level of support you will need to ensure that you can meet all the demands necessary for today and tomorrow. Organic support is often the most cost effective way of getting what you need; however, with today's demands in terms of information security, additional duties, and normal flows of operations, this may not be a feasible solution. Contracted support is another option, but the drawback there is that it often comes with strings attached, to include hidden agendas that can cost the installation more than it bargained for.

Contracted Support

Contracted support can come in one of two flavors. Much like the GeoBase program, the installation could choose to have contracted support from the inception of the new COP program through its completion and beyond, or choose to only request contract support to further develop and maintain the system once it is in place. Each method will usually yield a great product, but with the ever declining funds in our budgets, the duration of this support will be limited to the interest level of the installation. As contractors are well aware of this, they have a tendency to suggest installation of systems and/or extra capabilities that make it difficult, if not impossible, to cancel the contract without the system being mothballed, as was the case with TBMCS and UL/UC2, essentially the same software package with a newer facade and some minor performance improvements.

At present, while GeoBase does have contracted positions, they will more than likely not be authorized to work on forked systems such as the one this document is intended for.

Support Type | Estimated Cost
Contract to develop/implement and sustain COP system. | $180K initial / $120K annually thereafter
Contracted to sustain implemented system; system developed/implemented by 379 ECES. | $140K annually


Organic Support

Organic support can manifest itself in many different ways. Military personnel are typically the best type of organic support as they are capable of picking up new roles and responsibilities easily; however, in this case time and training are major considerations. As an expeditionary installation in transition to enduring, a GS-11 or GS-12 GeoBase-specific position should be offered to ensure continuity and allow for someone to be trained specifically to handle the care and upkeep of the proposed COP. This position would not only reduce contractor support costs, but allow for someone who is trained in proper spatial database management to come in and consistently maintain the installation's data as it needs to be, rather than a new military lead arriving every 6 months who, depending on experience level and/or interest, may or may not be able to perform the same tasks. As a core aspect of this position's duties is to maintain the COP, it shall be made clear to whoever holds the position that the system is to be modified and maintained as military needs evolve.

At present, much of the training necessary for military and GS personnel to manage the system effectively is available at low cost and thus would not require a large investment of capital. Training would also be infrequent, occurring only when software upgrades become available, an expansion of capabilities is required, and/or when personnel rotate out of positions. While this does look incredibly attractive in our fiscally constrained environment, it should be understood that although these professionals may take longer to implement changes to the end product than a contracted support member, the result will be entirely more sustainable even in these fiscally constrained times.

Should this option be chosen, it is our recommendation to have 1 military member and 1 GS employee trained, ideally. 1 military member would ensure that, regardless of time of day, there is one qualified person on base who could respond to the emergency operations center (EOC) whenever the need arises. The 1 GS employee should be charged with maintaining the system, monitoring the extra training for assigned military personnel, and seeking sustainment funding.

Support Type | Estimated Cost
Organic develop/implement and sustain system. | $0 in additional salaried costs; $7K in training (annually); $10K in TDY funding for conferences, etc.
Total | $17K annually

Recommendation

It is our recommendation to leverage our organic support, trained appropriately, for the foreseeable future to develop, sustain, and maintain the Common Operating Picture for this installation. While the startup of such a program may have more bumps in the road as the personnel will be learning as they go, the cost basis and quality of the end product will be dramatically superior to that of a contracted product, as the personnel chosen will have a vested interest in ensuring its success for the installation that far exceeds the possible monetary value associated with a contract. It is important to remember, however, that training will be required and military support will be essential for its future success. Without those two variables, we will be ensuring the need for more expensive means of maintenance and support.

Support Type | Estimated Cost
Organic develop/implement and sustain system. | $0 in additional salaried costs; $7K in training (annually); $10K in TDY funding for conferences, etc.
Total | $17K annually


COMMON OPERATING PICTURE

As the COP will be the first stop for all command centers on an installation, as well as quite possibly anyone responding to any situation, it is important to take into consideration the many possibilities available from Commercial Off The Shelf (COTS) or Government Off The Shelf (GOTS) software solutions. Whatever choice is made, the limitations associated with that choice must be fully comprehended and, most importantly, accepted at all levels.

UL/UC2

While it has been recently rebranded as UL/UC2, this system is essentially the same core system once dubbed the Theater Battle Management Core Systems (TBMCS). The primary difference between the previous system and the newly named one is a new map interface, the main component of interest for our discussion. The biggest drawback, however, is that while the graphical user interface may be updated and more modern, the nuts and bolts of the application remain the same and thus are susceptible to many of the same problems. Strides have been made, though, in areas centering around deployment on non-classified networks, making it an attractive solution for installations that work with outside agencies on a regular basis.

Despite this improvement, the Air Force has finally realized the main problem with UL/UC2: it is cost prohibitive. Each workstation where the software is installed must have a license, a license that is not owned by the Air Force. For each installation where UL/UC2 is deployed, support must be obtained to ensure all servers and workstations supporting the overall COP are operational at all times. Training is essential, as the system is not designed to be as intuitive as it could be. However, it should be noted that nearly all of the legwork necessary for approval of the system from the A6 perspective has been accomplished; thus approval should be relatively easy to obtain, slashing our initial operating capability timeline from 18 months to 3 to 6 months.

The drawback to this, however, is that there is a certification for the system that already has predefined limitations in place in terms of security and capabilities.

It should also be noted that while this system is quite robust in its C2 capabilities, it was developed from a Cold War era thought process and remains heavily focused on symmetric warfare.

Data sharing outside of UL/UC2 interfaces is not possible due to the Certificate to Operate (CtO).

Description | Estimated Cost
Servers (NIPR & SIPR) | $100K
Workstation licenses x 100 & upgrade mx | $10K
Support staff of 2 techs and 1 trainer (contracted) | $210K
Total initial | $320K
Total annual sustainment | $230K
Total 5-year sustainment cost | $1.2M

*Special Note: Servers, PCs, and network equipment have a recommended replacement time frame of 5 years at most.

WebEOC Mapper Professional 3.0

WebEOC's Mapper Professional software uses ESRI's ArcGIS Server to host and display data collected within its systems. WebEOC has been the choice of a great number of cities and states, and has been talked about quite extensively within the Air Force emergency management community as one of the possible systems we could turn to for the Common Operating Picture. The biggest drawback we have seen is that the software is primarily set up to support civilian agencies. In other words, there are no wartime capabilities built into the system from the onset of deployment. This means that the software must either be modified by the installation or by purchasing support services from ESi Acquisitions each time the software is upgraded. This should also be taken into account each time ESRI upgrades its software, which is generally every one to two years based on past performance. If WebEOC does not maintain its support of newer versions, there are potential security liabilities that a future COP system deployed on our installation could assume due to running older software.

WebEOC is a proven quantity for emergency management and has been successfully used in a wide variety of situations. It is one of the highest rated tools available on the local market and has a professional staff who understand the needs of their potential clients and can support their needs easily.

At present, the backbone software required to run Mapper Professional 3.0, ArcGIS Server, is possibly free for our use in unlimited quantities according to a new end user license agreement between the Air Force and ESRI. However, should that change in the future, the costs for deployment would be significant. We will include these costs below as a worst case scenario to ensure a full picture is available for the best possible decision.

Data sharing with outside agencies is 100% possible as the infrastructure leverages standard sharing capabilities. In order to take advantage of sharing our data and accepting outside data sources, the requirement would have to be outlined within our Certification and Accreditation package, and some sort of token-based access methodology developed to ensure that only authorized consumers and data contributors were accessing our systems.
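As a minimal sketch of what that token-based methodology could look like, the example below uses ArcGIS Server's built-in token endpoint; the host name, account, and service path are hypothetical placeholders, not an existing deployment.

```python
import requests  # third-party HTTP client

# Request a short-lived token from ArcGIS Server's token service
# (host, account, and service names below are hypothetical).
resp = requests.post(
    "https://cop.example.mil/arcgis/tokens/generateToken",
    data={"username": "partner_agency", "password": "********",
          "client": "requestip", "f": "json"},
    timeout=30,
)
token = resp.json()["token"]

# An authorized consumer then appends the token to each REST request.
features = requests.get(
    "https://cop.example.mil/arcgis/rest/services/COP/MapServer/0/query",
    params={"where": "1=1", "outFields": "*", "f": "json", "token": token},
    timeout=30,
).json().get("features", [])
```

Unauthorized requests simply receive no token, which keeps the sharing surface limited to the accounts enumerated in the C&A package.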

Description | Estimated Cost
Servers NIPR | $50K
ESRI ArcGIS Server Enterprise 4 cores w/ additional 4 cores | $66K
WebEOC (licenses w/ training) | $100K
Mapper Professional 3.0 | $80K
Contracted support modifications for wartime support | $180K
Total initial | $476K
Total annual sustainment | $140K
Total 5-year sustainment cost | $700K

*Special Note: Servers, PCs, and network equipment have a recommended replacement time frame of 5 years at most.

Air Force Incident Manager (AFIM)

This system is the present solution deployed at our installation. It is a software extension that rides upon the existing GeoBase internet mapping infrastructure, custom built by PACAF/A7RT and modified for installation use. AFIM requires GeoCortex Internet Mapping Framework (discontinued software) and ESRI's ArcIMS (discontinued software) or ArcGIS Server to operate. At present this software suite resides on several package servers hosted as business systems and is limited by what those systems were designed to handle.

Momentum is gaining for a general GeoBase server consolidation effort, to culminate in FY14 with all AF GeoBase assets being centrally stored and managed at DISA. The biggest disadvantage from our perspective would be that none of the servers would remain at our installation, the contract support associated with the maintenance of those servers would leave, and support for the product would end as it was a GOTS solution. At this time, there is no guarantee that map services being rendered at our installation will be ported to the DISA implementation. Additionally, purely from a C2 standpoint, information systems outside of our network are susceptible to outages should circumstances be right, as in earthquakes and volcanic eruptions, where land-based cabling could be easily disabled and atmospheric interference from volcanic ash and/or sand could interfere with our satellite capabilities.

The cost estimates below are in relation to keeping things at status quo, with the understanding that there will be a sunset to what we have and a solution must be found prior to that date.

Data sharing with outside agencies is 100% possible as the infrastructure leverages standard sharing capabilities. In order to take advantage of sharing our data and accepting outside data sources, the requirement would have to be modified within AFCENT's Certification and Accreditation package, and some sort of token-based access methodology developed to ensure that only authorized consumers and data contributors were accessing our systems. This will become harder as time progresses, though, as HAF/A7 is presently consolidating all GeoBase C&A packages into one AF-wide package, and any deviation would need to be approved at the HAF/A7 level.

Description | Estimated Cost
Server NIPR | $0K (AFCENT funded)
ESRI ArcGIS Server Enterprise w/ additional cores | $0K (AFCENT funded)
Contract support | $0K (AFCENT funded)
Total initial | $0K (AFCENT funded)
Total annual sustainment | $0K (AFCENT funded)
Total 5-year sustainment cost | $0K (AFCENT funded)

*Special Note: Servers, PCs, and network equipment have a recommended replacement time frame of 5 years at most.

Installation Incident Manager (Proposed)

After the first emergency management exercise, operational readiness exercise, and response to a real world event, it was clear what the installation needs to have installed in order to ensure its success for today's events and beyond. The system in mind needs to be responsive to changing environments, quickly modified by EOC personnel to meet unanticipated events, intuitive, capable of consuming raw data from external agencies, capable of providing a Google Earth-like experience with a PowerPoint edge, and most importantly cost effective for the near and long term. The system would reside on a single server hosting 2 to 4 virtualized servers, depending on the situation, providing a web-based map service customized to the unit it supports, with a single database server to hold installation data and events as they happen.

Any C&A package submitted for such a system should take into account its need to consume and provide data outside of the installation's firewalls (VPN access probable), be able to easily integrate with existing and future mobile technologies such as Android (DoD's future mobile operating system of choice), and should include considerations for users outside of our DoD support. While the installation should keep outside access closed at all times, exercises and actual emergencies should be considered within the C&A to ensure that access can be turned on at a moment's notice without requiring additional review or security changes that half-step potential liabilities. While contracted support would be ideal to develop, implement, and maintain such a system, it is our recommendation that organic resources be fully utilized to limit our future financial liabilities and keep additional resources and funds available for the development of the system to help it reach its full potential.

At present, the backbone software required for the proposed solution, ArcGIS Server, is possibly free for our use in unlimited quantities according to a new end user license agreement between the Air Force and ESRI. However, should that change in the future, the costs for deployment would be less than $80K, still a significant savings over other potential solutions.

Description | Estimated Cost
Server NIPR | $50K
ESRI ArcGIS Server Enterprise 4 cores w/ additional 4 cores | $66K
VMWare vSphere software (included in Dell server purchase) | $0K
Organic develop/implement and sustain system. | $30K
Total initial | $146K
Total annual sustainment | $30K
Total 5-year sustainment cost | $150K

*Special Note: Servers, PCs, and network equipment have a recommended replacement time frame of 5 years at most.

Recommendation

It is our recommendation to develop our own Installation Incident Manager COP system while using our own organic capabilities to establish, develop, and maintain the system. While this opinion does stray from the current AF interpretation of the capabilities within certain career fields, it should be noted that, given the chance to expand, the target career field often does so in ways never anticipated. While development will be time consuming during the initial 12 to 18 months due to the requirements of building the system and obtaining initial certification and accreditation, once the system is deployed, regular maintenance work would take less than 8 hours weekly at most. Expansion of the system to include additional capabilities is where the time factor usually comes in, but by following the recommendations provided here to leverage COTS solutions as intended as much as possible, modifications should be quick and simple to make, as the interface does most of the work for us.

Description | Estimated Cost
Server NIPR | $50K
ESRI ArcGIS Server Enterprise 4 cores w/ additional 4 cores | $66K
VMWare vSphere software (included in Dell server purchase) | $0K
Organic develop/implement and sustain system. | $30K
Total initial | $146K
Total annual sustainment | $30K
Total 5-year sustainment cost | $150K

*Special Note: Servers, PCs, and network equipment have a recommended replacement time frame of 5 years at most.

OTHER CONSIDERATIONS

The choice of COP is but one of many choices and considerations that need to be made during the C&A submittal process. The choice of Mission Assurance Category is paramount, as it helps ensure rapid response from the appropriate Com agencies to solve a potential outage that extends beyond our own expertise. Their expertise doesn't include our software suites of choice, but they can help troubleshoot issues associated with authentication to the system, group policies, and network infrastructure.


MAC I: Systems handling information that is determined to be vital to the operational readiness or mission effectiveness of deployed and contingency forces in terms of both content and timeliness. The consequences of loss of integrity or availability of a MAC I system are unacceptable and could include the immediate and sustained loss of mission effectiveness. Mission Assurance Category I systems require the most stringent protection measures.

MAC II: Systems handling information that is important to the support of deployed and contingency forces. The consequences of loss of integrity are unacceptable. Loss of availability is difficult to deal with and can only be tolerated for a short time. The consequences could include delay or degradation in providing important support services or commodities that may seriously impact mission effectiveness or operational readiness. Mission Assurance Category II systems require additional safeguards beyond best practices to ensure assurance.

MAC III: Systems handling information that is necessary for the conduct of day-to-day business, but does not materially affect support to deployed or contingency forces in the short term. The consequences of loss of integrity or availability can be tolerated or overcome without significant impacts on mission effectiveness or operational readiness. The consequences could include the delay or degradation of services or commodities enabling routine activities. Mission Assurance Category III systems require protective measures, techniques, or procedures generally commensurate with commercial best practices.


Additional areas of concern would be: external data access, mobile capabilities, data standards, data and
software backups, redundancy, and more.

External Data Access

As part of our C&A submittal, we should include our intention to open ports within the firewall to allow our server to consume external spatial data provided by outside agencies, as well as provide specific pieces of information we wish to share with them from our own systems. Typically the best method to do this would be through VPN access; however, as that is typically a token-based authentication method (i.e., CAC card), this may prove problematic, as we would want this external tap to be as automated as possible. Our server should be capable of supporting an additional virtualized server that can be made accessible when needed. This server would house only the data that needs to be shared and would not have any other data stored within it at any point. As this server would be fully virtualized, it could be moved to other servers easily or isolated from the network if compromised, without damaging our ability to operate the COP at any given point.
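To illustrate the kind of exchange this enables, the sketch below consumes a partner agency's map service over the standard ArcGIS REST interface; the partner URL, service name, and layer index are hypothetical.

```python
import requests  # third-party HTTP client

# Query a partner agency's feature layer through the firewall port opened
# in the C&A package (URL, service, and layer index are hypothetical).
url = ("https://partner.example.gov/arcgis/rest/services/"
       "RegionalIncidents/MapServer/0/query")
params = {"where": "1=1", "outFields": "*",
          "returnGeometry": "true", "f": "json"}
data = requests.get(url, params=params, timeout=30).json()

# Each feature carries attributes plus geometry ready to plot on the COP.
for feature in data.get("features", []):
    print(feature["attributes"], feature.get("geometry"))
```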

Mobile Data Collection

As we have chosen to operate a stock version of ESRI's ArcGIS Server, there are free mobile applications written by ESRI for iOS and Android devices that can be harnessed by our personnel and their dependents after an emergency. These mobile sensors could provide us detailed information about what is happening in a matter of minutes versus the hours it would take for us to discover all the potential issues around base. If direct access to our servers is unwise, we can leverage free services from Google that would allow similar capabilities while we consume the data directly from Google's servers. The Kansas City Fire Department experimented with a similar system to great success.

Data Standards

While data standards play somewhat fast and loose in the civilian community, we should maintain the same standards as HQ AF/A7 at all times. This will ensure that we can continue to consistently consume data from our GeoBase personnel while at the same time providing a template that has been accepted for use by most Federal agencies. While it will not fix incompatibilities between us and our State counterparts, it will go a long way toward ensuring a consistent standard is followed and maintained.

Data Backups

It is not known at this time how Com

currently backs up its servers within its Network Operations Center, but
we need to ensure that we are backing up our data daily if not every 8 to 12 hours. In addition to backing
up our data, the choice of a virtualized server enables us to take snapsho
ts of that server and its data
throughout the day. We should balance those snapshots with our performance needs vs. our level of tolerance
for performance degradation during a crisis. This ensures our systems are continually backed up at a rate
that is a
cceptable to our potential loss.

Spatial data could be further backed up by our EOC and UCC representatives through desktop clients.
While it is a bit cumbersome, it is
something that can be made as part of their operating instructions.
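As one possible shape for the scheduled data backup (assuming the Oracle database proposed later in this document), the sketch below drives Oracle's Data Pump export utility from a nightly job; the schema, connect string, and directory object names are hypothetical.

```python
import datetime
import subprocess

# Nightly export of the COP spatial schema via Oracle Data Pump (expdp),
# run from cron or Task Scheduler. GEO_COP, COPDB, and DPUMP_DIR are
# hypothetical; the Oracle directory object must already exist.
stamp = datetime.date.today().strftime("%Y%m%d")
subprocess.check_call([
    "expdp", "backup_user/********@COPDB",
    "schemas=GEO_COP",
    "directory=DPUMP_DIR",
    "dumpfile=geo_cop_%s.dmp" % stamp,
    "logfile=geo_cop_%s.log" % stamp,
])
```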

Redundancy

While the quotes above only take into consideration the purchase of a single server to handle the needs of the installation, it would be prudent to purchase 2 servers to ensure adequate redundancy exists and can continue to support the installation throughout any event. The databases between the two servers could be constantly updating, ensuring that no data would be lost save for a few minutes of activity that occurred between snapshots.

There are network related complications that would need to be supported by Com to ensure that when a failover event does occur, the DNS entries at the installation shift appropriately, ensuring that a mapping interface remains available at all times. The end user would only notice a momentary lack of refresh on their screen or the need to reopen their mapping interface to see the latest and greatest COP.

The cost of a second server would double the projected costs on the previous page, but it provides the greatest level of security possible during an emergency.
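Because the DNS switch itself rests with Com, the most the COP staff can automate locally is detection; a minimal watchdog sketch follows, with the host name as a hypothetical placeholder.

```python
import time
import requests  # third-party HTTP client

# Hypothetical primary map service endpoint; a JSON ping of the service root.
PRIMARY = ("https://cop-primary.example.mil/arcgis/rest/services/"
           "COP/MapServer?f=json")

# Poll the primary COP server once a minute; on failure, alert the operators
# so Com can shift the DNS entry to the hot standby.
while True:
    try:
        requests.get(PRIMARY, timeout=10).raise_for_status()
    except requests.RequestException:
        print("Primary COP server unreachable -- request DNS failover from Com")
        # e-mail or pager notification hook would go here
    time.sleep(60)
```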

More Considerations

The formation of a working group to pull in all the requirements from CE and all other base agencies would be critical to ensuring the successful deployment of any COP solution. These agencies may have other capabilities that can be contributed to the COP, such as weather data, inventories of facilities, and more, that could be crucial to commanders making fast and correct decisions using the COP. Additionally, there are more tools provided by the software manufacturer that could support the installation in terms of performing spatial analysis and/or replaying events through temporal tool sets. These tools could help predict looming disasters before they occur during an incident.




DEPLOYMENT TIMELINE

The following grid is the anticipated timeline associated with deploying the Installation's Incident Manager COP. Other solutions, if chosen, would take a similar development track. Deviation occurs after fielding, where customization may take longer via contracted support.

Phase | Activity | Start / End Time
1 | Development of requirements package. This includes discussion of capabilities needed to be fielded to meet installation needs. | Month 0 to Month 1.5
1 | Begin development of certification & accreditation packages | Month 1 to Month 1.5
2 | Develop System Identification Profile to include security and controls package for Implementation Plan | Month 1.5 to Month 4
2 | Submit Form 9's for servers, software, and contracted support (if required). Request 6 months to award, allowing for verification in triplicate as the C&A package develops. | Month 1 to Month 7

7.5 Months Elapsed

3 | Purchased equipment arrives | Month 9
3 | DIACAP Implementation Plan completed | Month 8
3 | iCTO received; begin installation | Month 10 (at earliest)
4 | Begin DIACAP validation | Month 11 (at earliest)
4 | Approval to Operate decision | Month 13 (at earliest)
5 | COP development | Month 13 to Month 18

18 Months Elapsed

If all goes according to plan and we started today, the Installation's COP will be online NLT the middle of FY14.

INTERFACE EXAMPLES

While there are many different types of systems available, we have chosen to provide interface examples from a locally developed system within the Air Force that has been in operation since 2009. The system itself rides upon a COTS solution, but has been extensively expanded to provide point and click functionality to its users. The system was first brought online to meet the demands of the virtual ECC environment, and thus was originally designed to handle more Air Force Incident Management focused items, but it quickly evolved to allow for wartime and even Public Works related capabilities to be available at a moment's notice.

The lion's share of development was not done via contractor, but by a military member who learned how to manipulate the interface by reading the manuals and using the software developer's support message boards. Once the system was brought online, it required little to no weekly support unless there was a problem between Com and the server itself.

Most installations that have deployed similar systems continue to use them to this day, to include Korea and Japan. While Korea continues to use UL/UC2 as its primary C2 system, it also leverages AFIM as a backup and more flexible solution when it needs to respond to incidents such as typhoons.



Security Forces Incident C2

The following screenshot is from a solution deployed at Yokota Air Base, Japan.

At present, the above interface depicts a fire at a facility. FD was able to plot a cordon around the facility, which automatically generated suggested optimal traffic control points. SFS was able to establish its ECP, plot the locations of its responders, and view anything FD was plotting, to include where its trucks would be stationed in response.





Fire Department Incident C2

This is a combined data view containing data from SFS and FD, shared with each other. The FD is viewing the status of one of its hydrants for potential use, while simultaneously being able to see where traffic control points have been established in order to relay that information to the on-scene incident commander. Additionally, should the incident evolve, FD could potentially review the HAZMAT contents of each facility in order to provide optimal protection to all emergency responders.




ADDITIONAL AREAS OF INTEREST

While the installation's deployed C2 solution has been UL/UC2 for quite some time, the same problems of its predecessor remain. The system does not conform to the conventions of the Spatial Data Standard for Facilities, Infrastructure, and Environment (SDSFIE), which consistently makes UL/UC2 laborious to update, even more so as it expands to new versions. Additionally, unlike a GeoBase-type system, UL/UC2 cannot consume ACES tabular data and update its own database on an autonomous basis.

The proposed method, diagrammed above, has GeoBase autonomously update its tabular real property data during periods of lower activity. This ensures GeoBase-hosted data, or in this case the COP tabular data, is accurate to Real Property records every 24 hours.
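A minimal sketch of that nightly synchronization follows, assuming the ACES export arrives as a CSV file and the COP layer lives in a file geodatabase; the paths, field names, and availability of ESRI's arcpy package are all assumptions.

```python
import csv
import arcpy  # ESRI's Python site package, assumed present on the GeoBase host

# Load the nightly ACES real property export (hypothetical path and fields).
aces = {}
with open(r"\\fileshare\exports\aces_real_property.csv") as f:
    for row in csv.DictReader(f):
        aces[row["FACILITY_NO"]] = row

# Push the tabular attributes onto the COP's building features so the map
# matches Real Property records every 24 hours.
fields = ["FACILITY_NO", "FACILITY_NAME", "REAL_PROP_USE"]
with arcpy.da.UpdateCursor(r"D:\data\cop.gdb\Buildings", fields) as cursor:
    for record in cursor:
        update = aces.get(record[0])
        if update:
            record[1] = update["FACILITY_NAME"]
            record[2] = update["REAL_PROP_USE"]
            cursor.updateRow(record)
```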




PROPOSED INFRASTRUCTURE LAYOUT


VMWare

Using VMWare vSphere allows us to operate multiple servers within a single box, thus reducing our infrastructure costs. Traditionally, especially with high bandwidth applications such as SharePoint and COPs, we would find them residing on separate systems in two different boxes. When one system crashed, the other would remain unaffected; however, the failed system would remain down until it could be restored. Within a VMWare environment, each server is a virtualized component, and thus when it crashes, the last known good configuration can be immediately brought online with minimal disruption in services. This approach not only allows us to provide redundancy on many levels, it allows us to move servers and capabilities wherever they are needed so long as we have a bootable PC.
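Capturing that last known good configuration can itself be scripted; the sketch below takes a snapshot of a guest through VMware's pyVmomi Python SDK, with the vCenter host, service account, and VM name as hypothetical placeholders.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to vCenter (hypothetical host and service account).
si = SmartConnect(host="vcenter.example.mil", user="svc_cop", pwd="********")
content = si.RetrieveContent()

# Locate the COP map server VM by name (hypothetical name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "COP-MAP-01")

# Snapshot the known-good configuration; quiesce the guest, skip memory state.
task = vm.CreateSnapshot_Task(name="known-good",
                              description="Pre-shift known good configuration",
                              memory=False, quiesce=True)
WaitForTask(task)
Disconnect(si)
```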

SharePoint Dashboard

With the introduction of SharePoint as the missing component of the UL/UC2-like interface, the base can host its own dashboard providing the same capabilities in a more flexible environment that can change as the situation demands. The current UL/UC2 system is inflexible, as it relies heavily on a custom-made application code base, and thus when changes are needed, the company must be contracted to make them. In our scenario, we can change the system to meet our needs at any time without permission or additional contracted costs.

Data Redundancy

While the use of a SAN would be preferable in most situations to allow for optimal data redundancy, the costs of deploying a SAN are prohibitive at best. Our proposal leverages the VMWare solution to take snapshots in time of the servers when they are performing optimally. Should a crash occur, a server could be easily restored as previously stated. However, we go a step further to ensure maximum redundancy. Leveraging a capability built into Oracle, our database solution of choice for storing spatial databases such as those needed for a COP, we choose to employ a method called Synchronous Change Data Capture, which essentially means that as information changes in one database, it gets updated in another. The drawback associated with this method is typically higher overhead for the server processing transactions, such as adding incidents to the map. However, by properly specifying the frequency at which changes are applied, for example every five minutes or so, resources can be saved, ensuring that optimal database efficiency is available at the right times.
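Oracle's synchronous Change Data Capture is normally configured through the DBMS_CDC_PUBLISH package; as a simplified, hand-rolled illustration of the underlying pattern, the sketch below uses a trigger feeding a change table that the standby polls, with all table, column, and connection names hypothetical.

```python
import cx_Oracle  # Oracle client library for Python

# Connect to the primary COP database (hypothetical connect string).
conn = cx_Oracle.connect("geo_cop/********@COP-PRIMARY")
cur = conn.cursor()

# Change table mirroring the structure of the incidents table.
cur.execute("""
    CREATE TABLE incident_changes AS
    SELECT * FROM incidents WHERE 1 = 0
""")

# Trigger that records every insert/update as it happens; the standby
# polls incident_changes (e.g., every five minutes) and applies the rows.
cur.execute("""
    CREATE OR REPLACE TRIGGER trg_incident_cdc
    AFTER INSERT OR UPDATE ON incidents
    FOR EACH ROW
    BEGIN
        INSERT INTO incident_changes (incident_id, status, updated_at)
        VALUES (:NEW.incident_id, :NEW.status, :NEW.updated_at);
    END;
""")
```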

The deployment of two server solutions meets the MAC I systems environment; deploying under a MAC II environment would not require the added hot standby server, but would cut our capability to provide data redundancy entirely.

SIPR Deployment

The same system architecture described above can be deployed within the SIPR environment easily.

CONCLUSION

The biggest issue we will run into while attempting to stand up a COP for the installation will be the constantly shifting rules that govern Com. They present unexpected delays, and those making decisions are not at our installation and thus cannot be counted on to make hasty decisions that may or may not work in our favor. Much of each submittal is sent back for clarification, especially with GeoBase systems, as they are not as straightforward as the typical systems deployed in the Com community. However, with patience and openness to the process, it can be achieved.

Thank you for your consideration of this proposal; should you have any questions, please contact us at your earliest convenience.