Using A Low Resolution Entity Level Modeling Approach


Major Donovan Phillips

Mr. Jack Jackson

US Army Training and Doctrine Command Analysis Center

PO Box 8695

Monterey, CA 93943-0695



Abstract: Current entity-level simulations are becoming increasingly complex. In fact, the rate at which complexity is growing in these simulations often outpaces gains in computing power. While this level of complexity is necessary for many applications, we contend that a lower resolution approach to entity-level simulation is also necessary, providing a more robust modeling, simulation, and analysis toolkit. Our rationale is that analysis for concept exploration and studies often involves examining a very large parameter space. Time constraints frequently limit the number of high-resolution simulation runs that can be completed, and in turn the number of parameters and parameter settings that can be investigated. Low resolution screening tools can help identify parameters and parameter settings of interest. Dynamic Allocation of Fires and Sensors (DAFS), a low-resolution, constructive entity-level simulation framework designed for combat analysis at the brigade level and below, is one such tool. Because of its low-resolution approach, DAFS runs fast and is relatively easy to set up. In addition, DAFS is designed to use selected results of high-resolution models as input, enabling the analyst to trace DAFS inputs back to accepted data and models.


I. INTRODUCTION

Today's entity-level force-on-force combat simulations are quite complex. Requirements for analyses of the complex components of the Army's Future Force are driving these simulations to ever-increasing levels of complexity. For example, CASTFOREM (Combined Arms and Support Task Force Evaluation Model) has been the Army's standard constructive (non-human-in-the-loop) simulation for brigade and below analyses since the mid-1980s. As originally designed, CASTFOREM was intended to model no more than 60 minutes of combat [Mackey 2001], but by 2000, modeled combat time had grown to 33 hours. This increased even further to 44 hours by 2003 [Mackey 2004]. Figure 1 provides a snapshot of the startling complexity growth in CASTFOREM and the scenarios modeled with it.

Figure 1. CASTFOREM Scenario Growth [Mackey 2004]

| CASTFOREM Complexity Increase+     | 2000: Balkans Stryker Bde* | 2003: Caspian UA**  |
|------------------------------------|----------------------------|---------------------|
| Scenario development time          | 2.5 mos x 3 personnel      | 6 mos x 8 personnel |
| Number of blue soldiers/systems    | 1,887                      | 3,694               |
| Number of threat soldiers/systems  | 922                        | 5,221               |
| Number of decision tables          | 2,224                      | 13,479              |
| Number of combat orders            | 9,537                      | 62,464              |
| Mission time                       | 33 hrs                     | 44 hrs              |
| Computer run time (1 rep)          | 4.5 hrs                    | 60 hrs              |
| Output file size                   | 3 GB                       | 158 GB              |
| Machine speed                      | 1 GHz                      | 3 GHz               |
| Virtual memory required            | 128 MB                     | 3 GB                |

* BBS 21.0, Stryker Bde Scenario, developed 2000
** Caspian 20.0, FCS UA Scenario, developed 2003
+ Source: Mackey, Doug. "Overview of Network Models," presented at the CASTFOREM Users Group Meeting, 13 July 2004, White Sands Missile Range, NM.


Figure 1 shows a three-fold increase in the number of entities represented (which results in an eleven-fold increase in the number of possible shooter-target pairs to be adjudicated); a six-fold increase in the number of decision tables, which control the actions of battlefield entities; a 52-fold increase in the amount of output data; and a 24-fold increase in the amount of virtual memory required. All of these factors, coupled with algorithmic additions to CASTFOREM (e.g., fusion algorithms, urban target acquisition algorithms), contribute to the overall complexity of the model. Using computer run time as a reasonable surrogate for overall model complexity, we see that the time to complete one replication increased from 4.5 hours in 2000 to 60 hours in 2003. Adjusting for mission time (or modeled combat time) and processor speed differences, this represents roughly a 30-fold increase in complexity over this three-year period! This level of model complexity, while desirable in many cases, does not permit a timely, thorough exploration of the parameter space for most problems of interest to the Army.


II. TRADEOFFS


This is not intended to be a criticism of CASTFOREM or other high resolution models; indeed, this dramatic increase in model and scenario complexity resulted because there was (and continues to be) demand for greater and greater fidelity and detail in our analytical simulations. Modeling many hours of joint, network enabled operations in a system of systems framework is orders of magnitude more complex than modeling an hour of combined arms operations. In response to the question regarding how the analyst ought to use the exponential gains in computing power predicted by Moore's Law, Lucas [2003] suggests two alternatives: examine more cases with the same model, or examine a similar number of cases with increased model resolution. Figure 1 suggests the existence of a third alternative: examine fewer cases than before with greatly increased model resolution. The fact that the 30-fold increase in model complexity far outpaced the three-fold increase in computing power, as measured by processor speed, makes it evident that this was the alternative chosen.


TRADOC Analysis Center (TRAC) leaders did not choose this alternative blindly. The fact that such a choice was made highlights one of many tradeoffs faced by analysts and the decision-makers they support: the tradeoff between simulation complexity and the number of cases (and replications for each case) that can be examined. When time is constrained, one is gained at the expense of the other.



The Precision Munitions Mix Analysis (PMMA), a current TRADOC study, is a good example of the effects of this tradeoff. Restricted to scenarios of sufficient complexity to effectively portray the depth and breadth of factors necessary for comparing the effects of various mixes of Army and Joint precision munitions, the study leaders were limited to fewer high-resolution simulation runs (in CASTFOREM) than desired. To their credit, though, the study leaders included a screening analysis, using a linear programming approach, to narrow down the set of factors to be examined in CASTFOREM [TRAC 2004].


III. SOME STATISTICAL REMEDIES


Advanced statistical methodologies have been applied to ameliorate the tradeoff between simulation complexity and the number of times a simulation can be executed in a given time period. The two quantities that determine the number of simulation executions are the number of cases to be examined and the number of replications to be executed for each case. An approach that reduces the number of cases that need to be examined involves more efficient experimental designs. The most robust experimental design would be a full factorial design, which considers every possible combination of levels and factors. A full factorial design for an experiment with four factors, each having two levels, would require 2^4 = 16 simulation runs. In the case of the PMMA discussed in the previous paragraph, there are 30 munitions being considered, each having two "levels" (in the mix or not in the mix). A full factorial experiment in this case would require 2^30 (more than one billion) runs per warfighting scenario used in the study, quite unrealistic for a simulation that takes 60 hours per replication! The problem is compounded further in the case where multiple levels (representing quantities of munitions in the mix) are desired in the experiment. Recent research into nearly orthogonal Latin hypercube designs [Cioppa 2003] has demonstrated that much more efficient designs can achieve nearly the same granularity as a full factorial design with far fewer runs (in some cases, as few as the number of factors plus one).
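
To make the run-count contrast concrete, here is a minimal sketch that builds a simple random Latin hypercube (not the nearly orthogonal construction of [Cioppa 2003], which takes additional care to reduce correlation between columns) and compares its size to the full factorial for 30 factors. The function name and seed are illustrative only.

```python
import random

def latin_hypercube(num_factors, num_runs, seed=1):
    """Build a simple random Latin hypercube design: each column is an
    independent permutation of the run indices, so every factor is
    sampled exactly once at each of num_runs evenly spaced levels."""
    rng = random.Random(seed)
    columns = []
    for _ in range(num_factors):
        levels = list(range(num_runs))
        rng.shuffle(levels)
        columns.append(levels)
    # Transpose: one row per run, one factor setting per column.
    return [[col[run] for col in columns] for run in range(num_runs)]

num_factors = 30                  # e.g., the 30 PMMA candidate munitions
full_factorial = 2 ** num_factors
design = latin_hypercube(num_factors, num_runs=num_factors + 1)
print(f"full factorial: {full_factorial:,} runs; Latin hypercube: {len(design)} runs")
# full factorial: 1,073,741,824 runs; Latin hypercube: 31 runs
```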



Replications of a particular case are all run with the same levels for each factor and are performed in sufficient quantity so as to ensure statistically meaningful results. Of course, multiple replications per case are only necessary for stochastic simulations. TRAC normally attempts to achieve 21 replications per CASTFOREM run, but recent research has demonstrated the utility of as few as five replications when a technique known as bootstrapping is applied to the results of each replication [TRAC-Monterey 2004]. Bootstrapping calls for the resampling of elements from a small set of replications to produce a larger data set, making it possible to obtain statistically significant results.
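
The sketch below illustrates the bootstrapping idea with a generic percentile bootstrap on the mean of a single measure of effectiveness; the five MOE values are hypothetical, and this is not claimed to be the specific procedure documented in [TRAC-Monterey 2004].

```python
import random
import statistics

def bootstrap_mean_ci(replications, num_resamples=1000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the mean of a small
    set of replication results."""
    rng = random.Random(seed)
    means = []
    for _ in range(num_resamples):
        # Resample with replacement from the handful of replications.
        sample = [rng.choice(replications) for _ in replications]
        means.append(statistics.mean(sample))
    means.sort()
    lo = means[int(num_resamples * alpha / 2)]
    hi = means[int(num_resamples * (1 - alpha / 2)) - 1]
    return statistics.mean(replications), (lo, hi)

moe = [0.62, 0.58, 0.71, 0.65, 0.60]   # hypothetical MOE from five replications
mean, (lo, hi) = bootstrap_mean_ci(moe)
print(f"mean {mean:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```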



Even after applying these statistical tools to reduce the total number of replications required to obtain useful analytical results, the amount of simulation time required may still be unacceptable. For example, assume that we were able to reduce the number of runs required for the PMMA study from 2^30 (for the full factorial design) to 35 through efficient experimental design. Furthermore, assume that bootstrapping allows using only five replications per run, resulting in a requirement of 175 total replications for the study. If we are using CASTFOREM at 60 hours per replication, this equates to 1.2 years of simulation time along with the associated effort required to analyze each of the 175 replications.
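
For reference, the arithmetic behind the 1.2-year figure:

$$
35 \text{ runs} \times 5 \text{ replications} = 175 \text{ replications}; \qquad
\frac{175 \times 60\ \text{hours}}{8760\ \text{hours/year}} \approx 1.2\ \text{years}.
$$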


IV. A LOW RESOLUTION ENTITY LEVEL MODELING APPROACH


We contend that a low resolution approach to entity-level simulation can complement, but not replace, existing high-resolution simulations. Low-resolution modeling approaches are part of the rich tradition of combat modeling and have been featured in aggregate and entity level models for decades. One ubiquitous example in entity level simulation is the use of a probability of hit based on weapon type, range, and amount of target presented, coupled with a probability of kill based on munitions type, target, aspect angle, etc. This approach is widely accepted because it is based on authoritative data derived from field testing and engineering level models. Furthermore, this approach is still necessary to obtain reasonable run times in high-resolution entity level combat simulations.
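
A minimal sketch of this two-stage hit/kill adjudication follows. The lookup tables and category names are hypothetical placeholders; in practice the probabilities would come from authoritative field-test and engineering-level data.

```python
import random

# Hypothetical tables: probability of hit by (weapon, range band) and
# probability of kill by (munition, target type).
P_HIT = {("cannon", "short"): 0.80, ("cannon", "long"): 0.45}
P_KILL = {("HE", "light_vehicle"): 0.60, ("HE", "tank"): 0.15}

def adjudicate_shot(weapon, range_band, munition, target, rng=random):
    """Draw against the hit probability first; only on a hit, draw
    against the kill probability."""
    if rng.random() >= P_HIT[(weapon, range_band)]:
        return "miss"
    if rng.random() >= P_KILL[(munition, target)]:
        return "hit, no kill"
    return "kill"

print(adjudicate_shot("cannon", "short", "HE", "light_vehicle"))
```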


Similar probabilistic approaches have been used for other computationally intensive aspects of military operations that are explicitly represented in high-resolution combat models like CASTFOREM today. One example is line of sight. Explicit representation of high-resolution terrain details necessitates computationally expensive line of sight calculations. In a low-resolution modeling approach we may substitute a probability of line of sight for these calculations. Data to support this approach are derived from experimentation using high-resolution models. If explicit representation of line of sight is necessary even in the low resolution simulation to address the issues at hand, then a lower resolution terrain representation would still allow for a less computationally expensive line of sight calculation. This highlights a critical advantage of a low-resolution simulation approach: the ability to select the appropriate level of resolution necessary for the issues under analysis.


Low-resolution entity level models can be implemented rapidly and constructed to focus directly on the analysis questions of interest. Extensive experience over the past six years at the Naval Postgraduate School has demonstrated that graduate students can quickly produce useful low-resolution entity level combat simulations to investigate a wide variety of issues of interest to combat analysts. These models attempt to represent the salient features of combat necessary to study the phenomenon of interest. Large, high-resolution combat models are, by their very nature, general purpose tools. Recognizing that the validity of student models suffered from a lack of authoritative data, TRAC has labored recently to ensure that students conducting sponsored thesis research for the Army use algorithms and data taken directly from current combat models or derived directly from those models.


V. AGENT-BASED MODELS: ONE LOW RESOLUTION APPROACH


Agent-based models, a class of low resolution applications, are simulations wherein each entity, or "agent," behaves autonomously, making decisions based on information gathered by organic sensors or received via communications links. Although agent-based models were originally intended to model complex adaptive systems, Brown et al. [2004] argue that they are also well suited to serve as tools for conducting screening analysis in support of high-resolution simulations. Agent-based models tend to represent scenarios with far less complexity than a typical high-resolution simulation and lend themselves to rapid construction of new scenarios, making it possible to run many more replications than would be possible using a high-resolution simulation. This enables the analyst to adequately explore the entire factor space, making it possible to then select interesting sub-spaces for deeper exploration with the limited high-resolution simulation runs available. In this way, we see the low-resolution simulation complementing the high-resolution simulation, together forming a more robust analysis approach.
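
As a sketch of the pattern (not of any particular fielded agent-based model), the minimal agent below senses nearby entities with an organic sensor, decides, and acts once per time step; all parameters are illustrative.

```python
import math
import random

class Agent:
    """Skeletal autonomous agent: sense, decide, act each time step."""

    def __init__(self, x, y, sensor_range, speed=1.0):
        self.x, self.y = x, y
        self.sensor_range = sensor_range
        self.speed = speed

    def sense(self, others):
        # Organic sensing only: return agents within sensor range.
        return [o for o in others
                if math.hypot(o.x - self.x, o.y - self.y) <= self.sensor_range]

    def step(self, others):
        contacts = self.sense(others)
        if contacts:
            # Decide: close with the nearest detected contact.
            tgt = min(contacts,
                      key=lambda o: math.hypot(o.x - self.x, o.y - self.y))
            dist = math.hypot(tgt.x - self.x, tgt.y - self.y) or 1.0
            self.x += self.speed * (tgt.x - self.x) / dist
            self.y += self.speed * (tgt.y - self.y) / dist
        else:
            # Otherwise wander randomly.
            self.x += random.uniform(-1, 1)
            self.y += random.uniform(-1, 1)
```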


Agent-based models are not without their critics. While any new technology will attract doubters, serious criticism seems to center on two factors: 1) the "black box-like" penalty functions that determine the outcomes of decisions, and 2) the inability of the analyst to trace results back to certified system performance data. Both of these speak to the model's validity. Furthermore, most agent based models are oriented toward exploring the effects of human factors like leadership and morale on combat outcomes. This limits their utility for exploring many issues of interest in a typical study.


VI. DYNAMIC ALLOCATION OF FIRES AND SENSORS (DAFS)


DAFS is a low-resolution, constructive entity-level simulation framework being developed by a partnership between TRAC-Monterey and the MOVES Institute of the Naval Postgraduate School. Like agent-based models, DAFS is designed to enable rapid scenario construction and to run fast, and it is likewise suited for use as a screening tool in support of high-resolution simulations. This capability was demonstrated recently when a prototype version of DAFS was used to support a non-line-of-sight (NLOS) weapons mix study conducted by the Depth and Simultaneous Attack Battle Lab at Fort Sill. The results produced by DAFS proved beneficial in informing decisions on appropriate ratios and quantities of mortars, NLOS cannons, and NLOS Launch Systems for the Future Force Unit of Action (UA).



DAFS' major component algorithms are all relatively inexpensive computationally. For example, terrain is not represented explicitly, but its effects, in terms of line of sight, are represented probabilistically. Instead of the traditional, computationally expensive line of sight (LOS) algorithms found in most high-resolution simulations, DAFS adjudicates LOS between entities using an experimentally derived probability that LOS exists, or PLOS. The actual LOS adjudication is a random draw based on this probability, saving much computational time.
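
A minimal sketch of PLOS adjudication, with hypothetical PLOS values standing in for the experimentally derived data described above:

```python
import random

# Hypothetical PLOS by terrain class; DAFS derives such values from
# experimentation with high-resolution models.
PLOS = {"open": 0.90, "rolling": 0.55, "urban": 0.20}

def los_exists(terrain_class, rng=random):
    """Adjudicate LOS with a single random draw against PLOS, in place
    of an explicit terrain-based line of sight trace."""
    return rng.random() < PLOS[terrain_class]

print(los_exists("rolling"))
```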



Target acquisition is also modeled probabilistically in DAFS, currently with two possible levels of resolution. At the lowest level of resolution, sensors are modeled as "cookie cutters" with probability of detection equal to one within a given radius. A somewhat higher resolution algorithm models detections as occurring according to a gamma distribution with experimentally derived parameters. In both cases, the data required by the algorithm, be it a detection radius or a detection rate, are obtained through direct manipulation of certified data (as in the first case) or through experimentation in high-resolution simulation (as in the second case).
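
The two resolution levels might look like the following sketch; the parameter values are hypothetical, and this is not DAFS source code.

```python
import random

def cookie_cutter_detect(distance, radius):
    """Lowest resolution: detection is certain inside the sensor radius
    and impossible outside it."""
    return distance <= radius

def gamma_detection_time(shape, scale, rng=random):
    """Higher resolution: draw a time to detection from a gamma
    distribution with experimentally derived parameters."""
    return rng.gammavariate(shape, scale)

print(cookie_cutter_detect(distance=3.5, radius=5.0))  # True
print(gamma_detection_time(shape=2.0, scale=30.0))     # e.g., minutes to detect
```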



Finally, munitions effects are modeled with a commonly accepted probability of kill (Pk) approach, though at a lower level of resolution than one would find in, say, CASTFOREM. For example, factors like weather, obscurants, aspect angle, and damage levels (mobility kill, firepower kill, etc.) are not currently modeled in DAFS.



Like agent-based models, DAFS is well suited for conducting screening analysis in support of high-resolution simulations. The PMMA study, for example, is an ideal application for a screening tool like DAFS. Because it enables quick setup and running of scenarios, DAFS could have been used in Phase One of the study (instead of the linear programming approach chosen) to narrow down the field of candidate munitions to a set small enough to enable further, more detailed analysis using a high resolution simulation. Unlike agent-based models, though, all data to support the algorithms in DAFS are either provided directly from certified sources or derived experimentally using high-resolution simulations, which themselves use data from certified sources. This provides the ability to trace results directly back to inputs, something not possible with agent-based models.


VII. SUMMARY


In this paper we have identified the analytical challenges associated with analyzing a large parameter space using high-resolution entity level combat simulations. We reviewed statistical techniques to improve experimental designs and reduce the number of simulation runs required to explore the space, and concluded that these techniques are necessary, but not sufficient, to address the problem. We recounted how lower resolution models, including agent based models, have been used to explore a large parameter space and thereby focus the high resolution model on parameters and settings of particular interest. We also noted that many agent based models have limited validity and focus on human factors in combat, limiting their utility as screening tools for many analytical questions. We proposed and outlined the use of low-resolution entity level combat models as companions to high-resolution entity level simulation. Central to this approach is modeling in the low resolution simulation the salient aspects of combat required to answer specific analytical questions of interest, and deriving scenarios, algorithms and data from the high resolution model and other authoritative sources. Finally, we provided an example of the successful application of this approach with DAFS.




REFERENCES


[1] Brown, Lloyd P., Cioppa, Thomas M., and Lucas, Thomas W. "Agent Based Simulations Supporting Military Analysis." PHALANX, Vol. 37, No. 3, September 2004, p. 29.

[2] Cioppa, Thomas M. "Advanced Experimental Designs for Military Simulations." Technical Document TRAC-M-TR-03-11, US Army TRADOC Analysis Center, Monterey, California, February 2003.

[3] Lucas, Thomas W. and McGunnigle, John E. "When is Model Complexity Too Much? Illustrating the Benefits of Simple Models with Hughes' Salvo Equations." Naval Research Logistics, Vol. 50, 2003, pp. 197-217.

[4] Mackey, Douglas, Dixon, David, and Loncarish, Thomas. "Combined Arms and Support Task Force Evaluation Model (CASTFOREM) Update: Methodologies." Technical Document TRAC-WSMR-TD-01-012, US Army TRADOC Analysis Center, White Sands Missile Range, NM, February 2001.

[5] Mackey, Douglas. "Overview of Network Models in CASTFOREM." Presentation to CASTFOREM Users' Group, White Sands Missile Range, NM, 13 July 2004.

[6] TRAC-Monterey. "Reducing Simulation Replications for Future Combat System Analysis." Presentation slides. US Army TRADOC Analysis Center, Monterey, CA, 5 May 2004.

[7] TRAC-WSMR. "Precision Munitions Mix Analysis (PMMA)." Study plan coordinating draft. US Army TRADOC Analysis Center, White Sands Missile Range, NM, 30 August 2004.