GEWEX Aerosol Panel


GEWEX Aerosol Panel:
A critical review of the efficacy of commonly used aerosol optical depth retrievals



GEWEX Panel:
Sundar Christopher, Richard Ferrare,
Paul Ginoux, Stefan Kinne, Gregory G. Leptoukh,
Jeffrey Reid, Paul Stackhouse


Programmatic support:
Hal Maring, Charles Ichoku,
Bill Rossow

Comments on this presentation: jeffrey.reid@nrlmry.navy.mil


bottom line up front


Since our last briefing (Aug 2010), the GEWEX Aerosol Assessment Panel (GAAP) has nearly completed its first report.


By late October we expect to send sections to the instrument
science teams for “fact checking.”


The report will lay out the nature of the aerosol problem, with a synopsis of the literature and commentary on verification methods and findings.


Phase 2, a detailed independent evaluation, will not start until MODIS Collection 6 and MISR v23 are officially released.


the aerosol problem


The aerosol field has grown rapidly in recent years, with dozens of products and applications.


But most products sit in a twilight zone between “research,” “development,” and “production.”


This is reinforced by the funding situation: money for product development, maintenance, and verification is limited, so developers spend more time “using” than “supporting” their products.


By the time the wider community figures out how a
product is doing, a new version is released.


Situation: Confusion and some rancor in the community
as to the actual efficacy and appropriate application of
these data sets


Response: Reform the GEWEX Aerosol Panel (GAP)

the GAP mandate


NASA HQ requested the development of a comprehensive evaluation of the current state of the science, performed within the GEWEX framework.


The team is to be small and well rounded; team size is to be no larger than necessary (i.e., no large committees).



Members are to be ‘jurors drawn from accomplished
peers’ to examine the 7 most common global
products: AVHRR (GACP and NOAA), MISR, MODIS
(Standard & Deep Blue), OMI, and POLDER.


the GAP mandate


Phase 1: Perform a comprehensive literature review and evaluation. The deliverable will be a report on the state of the science, the application of satellite aerosol data, the identification of shortcomings, and broad recommendations to the field for future development and verification needs.


Phase 2: Based on Phase 1, examine in detail specific issues in the generation of retrieval and gridded products.


Peer review of the peer review: Findings are given to the teams for comment before release. After an iteration, team rebuttals can be made part of the public record.


Where are we now? Nearing the end of Phase 1. The report is 100+ pages and growing…

customers and issues


Often aerosol products are thought of as climate
products.


However, all of the world’s major Numerical Weather Prediction (NWP) centers have aerosol assimilation programs, and aerosol data have worked their way into numerous applications.


There are aerosol observability issues:


reliable and timely delivery


bias (contextual, sampling) understanding / removal


error characterization (essential for assimilations)



opinion: data product evaluation, validation, and verification


Reliable and timely delivery of satellite aerosol and fire products is only half the challenge. If products are to be integrated, then biases need to be removed through careful product evaluation and verification. Contextual & sampling biases need to be understood. Because of possible degradation in model performance through data assimilation, aerosol product error characterization has been emphasized more in the operations community than in the climate community. Indeed, despite popular misconceptions, operational data characterization requirements are often more strict than what is commonly used in the climate research community.


Reid, J. S., Benedetti, A., Colarco, P. R., and Hansen, J. A., 2011: International operational aerosol observability workshop. Bulletin of the American Meteorological Society, 92(6), ES21–ES24, doi:10.1175/2010BAMS3183.1.
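
The assimilation point can be made concrete with a toy calculation (not from the report): in a scalar, Kalman-style update, the assumed observation error variance sets how hard a satellite AOD retrieval pulls on the model background, so an unrecognized bias or a mis-specified variance flows straight into the analysis. The function and numbers below are illustrative only.

```python
# Minimal, hypothetical sketch of why observation error characterization
# matters for aerosol data assimilation: a scalar, Kalman-style update.
def analysis_aod(background, obs, var_background, var_obs, obs_bias=0.0):
    """Blend a model background AOD with one satellite AOD retrieval.

    var_obs is the assumed observation error variance; obs_bias is any
    known retrieval bias to remove before the update.
    """
    innovation = (obs - obs_bias) - background
    gain = var_background / (var_background + var_obs)  # weight on the obs
    return background + gain * innovation

# An unremoved +0.05 AOD retrieval bias is partly absorbed into the analysis.
print(analysis_aod(0.15, 0.25, var_background=0.02**2, var_obs=0.04**2))
print(analysis_aod(0.15, 0.25, var_background=0.02**2, var_obs=0.04**2,
                   obs_bias=0.05))
```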

panel members

HQ: Hal Maring and Charles Ichoku


Sundar Christopher (UAH): chair, algorithm development, multi-sensor products


Richard Ferrare (NASA LaRC): lidar, field work, multi-sensor products


Paul Ginoux (NOAA GFDL): global modeling and aerosol sources


Stefan Kinne (Max Planck): GEWEX Cloud, AEROCOM


Gregory Leptoukh (NASA GSFC): Level 3 product development and distribution


Jeffrey Reid (NRL): co-chair, aerosol observability, field work, verification, operational development


Paul Stackhouse: GEWEX radiation, atmospheric radiation and energetics

report outline

1. Introduction

2. Nature of the Problem

3. Overview of Assessed Satellite Products

4. Evaluation of Verification and Intercomparison Studies

5. Phase 1 Synopsis and Recommendations

relative levels of efficacy required


(Approximate and not meant to offend…)


Imagery / contextual: “advantage of the human eye.”

Parametric modeling and lower-order process studies: correlations de-emphasize bias.

Trend climatology: need to de-trend biases in retrieval and in sampling.

Higher-order process studies: push multi-product and satellite data.

Seasonal climatology: basically want to know where stuff is; can do one-up corrections.

Model applications, V&V, and inventory studies: have stronger time constraints and need spatial bias elimination.

Data assimilation: quantify bias & uncertainty everywhere and correct where you can.


V&V statistics must speak to these applications!


Hence, there is no “one size fits all” error parameter. Sorry….
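
As a concrete illustration (made-up numbers, not from the report), the sketch below computes three common verification statistics from the same matched satellite/AERONET AOD pairs: a correlation-minded user sees a near-perfect product, while a trend or assimilation user cares about the constant bias that the correlation hides.

```python
import numpy as np

# Hypothetical matched AOD pairs: satellite retrievals vs. AERONET.
aeronet = np.array([0.05, 0.10, 0.15, 0.30, 0.50, 0.80])
satellite = aeronet + 0.04  # a constant +0.04 AOD offset, for illustration

def verification_stats(truth, retrieved):
    """Three statistics that serve different applications differently."""
    return {
        "correlation": np.corrcoef(truth, retrieved)[0, 1],  # process studies
        "mean_bias": np.mean(retrieved - truth),              # trends, assimilation
        "rmse": np.sqrt(np.mean((retrieved - truth) ** 2)),   # per-retrieval error bar
    }

print(verification_stats(aeronet, satellite))
# correlation is 1.0 even though every retrieval is 0.04 too high.
```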

bias examples (1)


Global average AOD time series over water: global AOD differences between sensors (Mishchenko et al., 2007). Differences are a mix of radiometric bias, cloud bias, microphysical bias, sampling differences, and contextual bias.


Satellite retrievals tend to overestimate AOD (at low AOD) over oceans, especially MISR (Zhang and Reid, 2010).
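
Part of the inter-sensor spread above comes from sampling alone: two sensors that retrieve AOD perfectly can still disagree in a “global mean over water” if they do not see the same scenes. A hypothetical sketch of that effect (the sampling rule for sensor B is invented purely for illustration):

```python
import numpy as np

# Hypothetical illustration of sampling bias: both sensors retrieve the
# true AOD exactly, but sensor B misses more of the hazier (cloudier)
# scenes, so its "global mean over water" comes out low.
rng = np.random.default_rng(0)
true_aod = rng.gamma(shape=2.0, scale=0.08, size=100_000)

seen_by_a = rng.random(true_aod.size) < 0.7                      # uniform coverage
seen_by_b = rng.random(true_aod.size) < np.exp(-3.0 * true_aod)  # avoids high AOD

print(f"all cells      : {true_aod.mean():.3f}")
print(f"sensor A sample: {true_aod[seen_by_a].mean():.3f}")
print(f"sensor B sample: {true_aod[seen_by_b].mean():.3f}")
print(f"collocated only: {true_aod[seen_by_a & seen_by_b].mean():.3f}")
```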

bias examples (2): more


Consideration of “what the satellite actually sees” is often overlooked.


Basic matchup between sensors is not trivial.


Core retrieval biases related to clouds, the lower boundary condition, and microphysics are non-random and spatially / temporally correlated.


(Figures: ASO clear-sky bias, Zhang and Reid, 2009; MODIS vs. AERONET regression slope.)
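
On the matchup point, even a basic space/time collocation involves choices (window sizes, averaging) that shape the comparison. A hypothetical collocation routine is sketched below; the 30-minute and 25 km windows and all names are assumptions for illustration, not any product’s documented procedure.

```python
import numpy as np

def collocate(sat_time, sat_lat, sat_lon, site_time, site_lat, site_lon,
              max_minutes=30.0, max_km=25.0):
    """Return indices of satellite retrievals within a space/time window
    of a ground site; a simple flat-Earth distance is good enough here."""
    dt_ok = np.abs(sat_time - site_time) <= max_minutes * 60.0
    km_per_deg = 111.0
    dx = (sat_lon - site_lon) * km_per_deg * np.cos(np.radians(site_lat))
    dy = (sat_lat - site_lat) * km_per_deg
    dist_ok = np.hypot(dx, dy) <= max_km
    return np.where(dt_ok & dist_ok)[0]

# Toy example: three retrievals, one site; only the first two fall inside
# both the time window and the distance window.
idx = collocate(sat_time=np.array([0.0, 600.0, 7200.0]),
                sat_lat=np.array([38.0, 38.1, 38.0]),
                sat_lon=np.array([-76.8, -76.9, -80.0]),
                site_time=0.0, site_lat=38.0, site_lon=-76.84)
print(idx)  # -> [0 1]
```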

diagnostic versus prognostic error models

MODIS over-ocean example, from Shi et al., 2010, ACP.

(Figures: RMSE(MODIS, AERONET) as a function of AERONET AOD and as a function of MODIS AOD, annotated where each does better or worse.)


If we knew AOD, then we would not need MODIS. All we have is MODIS’s own estimate of AOD….
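
This is the motivation for prognostic error models: at retrieval time, only the satellite’s own AOD is available to condition on. The sketch below bins the RMSE against AERONET by the retrieved value rather than the truth; it is a toy illustration in the spirit of such models, not the actual Shi et al. (2010) formulation.

```python
import numpy as np

def prognostic_rmse(retrieved, truth, bins):
    """RMSE of (retrieved - truth) binned by the RETRIEVED AOD, so the
    resulting error estimate can be attached to a new retrieval without
    knowing the truth."""
    which = np.digitize(retrieved, bins)
    rmse = np.full(len(bins) + 1, np.nan)
    for b in np.unique(which):
        err = retrieved[which == b] - truth[which == b]
        rmse[b] = np.sqrt(np.mean(err ** 2))
    return rmse

# Toy matched data: error grows with AOD on top of a small noise floor.
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 0.1, size=5000)
retrieved = truth + rng.normal(0.0, 0.02 + 0.15 * truth)

bins = np.array([0.1, 0.2, 0.4, 0.8])
print(prognostic_rmse(retrieved, truth, bins))  # RMSE per retrieved-AOD bin
```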

verification (1)


Since you can measure AOD, all aerosol science projects drive to validate against AOD, whether it is appropriate or not.


There is no shortage of validation studies. But they tend to be direct regression-based, have important details missing, and are conducted over limited periods of time and/or locations. Hence, they tend to be of limited utility (a typical face-value workup is sketched below).


While there are many cases of satellite cal-val components from field missions, analyses are usually not repeated for new product versions.


Even well designed third party studies are generally
not utilized or cited by the production teams.
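
For reference, the typical regression-based workup mentioned above reduces to something like the sketch below: slope, intercept, correlation, and the fraction of matchups inside an assumed expected-error envelope. The ±(0.05 + 0.15·AOD) envelope and the synthetic data are examples only, not a claim about any particular product’s specification.

```python
import numpy as np

def regression_validation(aeronet, satellite):
    """Typical face-value validation statistics for matched AOD pairs."""
    slope, intercept = np.polyfit(aeronet, satellite, 1)
    r = np.corrcoef(aeronet, satellite)[0, 1]
    # Fraction of matchups inside an assumed expected-error envelope
    # (+/- (0.05 + 0.15 * AOD) is used here only as an example).
    envelope = 0.05 + 0.15 * aeronet
    frac_in_ee = np.mean(np.abs(satellite - aeronet) <= envelope)
    return slope, intercept, r, frac_in_ee

# Synthetic matchups with a slight low slope, small offset, and noise.
rng = np.random.default_rng(2)
aeronet = rng.gamma(2.0, 0.1, size=2000)
satellite = 0.95 * aeronet + 0.02 + rng.normal(0.0, 0.03, size=2000)
print(regression_validation(aeronet, satellite))
```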

verification (2)


Over ocean, there tends to be remarkable
consistency both in AOD and in correlated bias
across sensors. Cloud masking is still a problem.


Over land, there are strong regionally and temporally correlated biases across both algorithms and sensors, largely due to the lower boundary condition.


Radiance calibration is a significant problem and
pops up in indirect ways. NASA is working on it.


Demonstration of diversity in aerosol products has little bearing on relative product efficacy.


bottom line: The difference between “face value statistics” and an error bar for an individual retrieval is vast. How does this affect you? It depends on what you do with the data.

key recommendations (1)


Algorithms need better documentation. The ATBDs
are a good start, but they need to be kept current
and perhaps even expanded.


Better strategies for “level 3 products” need to be
devised and supported. One size fits none…..


One-size-fits-all verification does not work either. But there is a total lack of agreement on key verification metrics. The USER community needs to agree on what it thinks is important.


It should be a programmatic requirement of the
science teams to develop prognostic error models as
part of any mass produced and distributed product.
Program offices need to fund this.

key recommendations (2)


AERONET and MPLNET are clearly backbone networks for verification, and we strongly endorse their financial support as a critical community resource. Similarly, targeted aircraft observations should be encouraged.


Developers and outside entities should work together more in verification studies.


Field work needs to be better utilized. After a first round of verification studies, next-generation algorithms typically do not make use of older studies. Field work should focus more on verifying higher-level products.