"Formulate" Scope Description (ASD) - QI-Bench


QI-Bench Formulate ASD, Rev 0.2, BBMSC

QI-Bench Formulate Scope Description

December 2011
Rev 0.2

Required Approvals:

Author of this Revision: Andrew J. Buckler
Project Manager: Andrew J. Buckler

(Print Name / Signature / Date)

Document Revisions:

Revision  Revised By  Reason for Update          Date
0.1       AJ Buckler  Initial version            June 2011
0.2       AJ Buckler  Updated purpose and scope  December 2011
































Table of Contents

1. Executive Summary
   1.1. Application Purpose
   1.2. Application Scope
   1.3. The Reason Why the Application Is Necessary
   1.4. Terms Used in This Document
2. Profiles
   2.1. Information Profiles
   2.2. Functional Profiles
      2.2.1. Example "Task" Definition: Predictive Biomarkers for Response Assessment
      2.2.2. Approach to Assess Clinical Utility of Biomarkers
   2.3. Behavioral Profiles
3. Conformance Assertions
4. References

QI
-
Bench
Formulate

ASD


Rev
0.2



BBMSC

3

of
18

1. Executive Summary

There is a large and growing body of knowledge at the cellular level, driving increased utilization of computational methods in drug discovery and development. Likewise, there is a large and growing body of knowledge at the organism level, enabling applications such as computer-aided detection, diagnosis, and targeted therapies [1-5]. However, there is comparatively little available technology to bridge these two bodies of knowledge, compromising the effectiveness of both. Technology linking these scales through quantitative analytic processing of acquired imaging and non-imaging data in translational research [6], coupled with the interpretation of the data through multi-scale models, offers a means to comprehend disease processes pre-symptomatically and after clinical manifestations at multiple levels of abstraction [7-10]. For example, changes in activities of the receptor tyrosine kinase EGFR can be visualized and quantified through optical imaging of reconstituted luciferase [8]. Positron Emission Tomography (PET) enables detection and quantification of molecular processes such as glucose metabolism, angiogenesis, apoptosis, and necrosis. Radiolabelled Annexin V uptake by apoptotic and necrotic cells has been developed to measure apoptosis, necrosis, and other disease processes using PET [11, 12]. Chelated gadolinium attached to small peptides recognizes cell receptors and quantifies receptor activities using magnetic imaging techniques. Similarly, microbubbles and nanobubbles attached to antibodies such as anti-P-selectin may be used to image targeted molecules associated with inflammation, angiogenesis, intravascular thrombus, and tumors [5].

Currently, the application of quantitative imaging techniques suffers from the lack of a standardized representation of image features and content [13-16]. The concept of "image biobanking" as an analogue to tissue biobanking has great promise [17, 18]. Tools have begun to be available for handling the complexity of genotype [19-23], and similar advancements are needed to appreciate phenotype, especially as derived from imaging [15, 24-31]. Publicly accessible resources that support large image archives provide little more than file sharing and have not yet merged into a framework that supports the collaborative work needed to meet the potential of quantitative imaging analysis. With the availability of tools for automatic ontology-based annotation of datasets with terms from biomedical ontologies, coupled with image archives and means for batch selection and processing of image and clinical data, we believe that imaging will go through an increase in capability analogous to what advanced sequencing techniques have brought to molecular biology.

Imaging biomarkers are developed for use in the clinical care of patients and in the conduct of clinical trials of therapy. In clinical practice, imaging biomarkers are intended to (a) detect and characterize disease, before, during, or after a course of therapy, and (b) predict the course of disease, with or without therapy. In clinical research, imaging biomarkers are intended to be used in defining endpoints of clinical trials. A precondition for the adoption of a biomarker for use in either setting is the demonstration of the ability to standardize the biomarker across imaging devices and clinical centers, and the assessment of the biomarker's safety and efficacy. Currently, qualitative imaging biomarkers are extensively used by the medical community. Enabled by major improvements in clinical imaging, the possibility of developing quantitative biomarkers is emerging. In this document, "biomarker" refers to the measurement derived from an imaging method, and "device" or "test" refers to the hardware/software used to generate the image and extract the measurement.

Regulatory approval for clinical use^1 and regulatory qualification for research use depend on demonstrating proof of performance relative to the intended application of the biomarker:

- In a defined patient population,
- For a specific biological phenomenon associated with a known disease state,
- With evidence in large patient populations, and
- Externally validated.

The use of imaging biomarkers occurs at a time of great pressure on the cost of medical services. To allow for maximum speed and economy in the validation process, this strategy is proposed as a methodological framework by which stakeholders may work together.

1.1. Application Purpose

Driving biological questions addressed by this proposed work include questions associated with formal imaging biomarker qualification, such as "Is CT volumetry better than unidimensional measures for RECIST assessment of cancer response?" or "Is FDG-PET a surrogate marker for survival?" Biological research questions include: "Can we find a method to compare preclinical and clinical imaging?" "Can we find quantitative imaging links between mouse and man?" "Can we optimize the frequency of serial imaging?" "What is the influence of location on tumor development?" "Can we build a preclinical in vivo imaging database with translational value, such as supporting the Mouse Models of Human Cancers Consortium (MMHCC)?" "How well can we compare across imaging modalities?" Each of these problem areas ultimately depends on precise identification of the concepts in each question and an ability to bring data resources to bear that can help answer the question.

QI-Bench is created to aggregate large-scale datasets of evidence relevant to characterizing and optimizing imaging biomarkers. We are developing resources that enable many parties to better utilize available data, and a neutral broker resource that will provide developers and regulators with unbiased and objective data demonstrating imaging biomarker efficacy. QI-Bench is composed of five linked applications: (1) Specify (detailed description of the context for use and assay specifics), (2) Formulate (gather relevant datasets), (3) Execute (perform batch image analyses), (4) Analyze (statistical analysis of image analyses), and (5) Package (compile evidence for regulatory filing) downstream (Fig. 1). The vision is to deploy the software both as a web-accessible resource for collaborative community effort and as instances that may be used within individual organizations for their own purposes.





Figure 1. Left: Access to QI-Bench, which is composed of five linked applications, is at www.qi-bench.org. Downstream applications include Execute, Analyze, and Package. Right: The current prototype of Specify includes a question-answer paradigm driven capability (right hand side of the screen) using BioPortal to create a triple store (shown in left window on screen). The prototype also incorporates the AIM Template Builder.

Specify creates RDF triples (subject-predicate-object units of knowledge) stored in a database, using the QIBO to guide a question-answer paradigm for naturally interacting with domain experts, creating an early version of what this proposal would turn into W3C-compliant SPARQL endpoints (query interfaces) [32]. Specifically, Formulate is developed to allow users to:

- Assemble applicable reference data sets.
- Include both imaging and non-imaging clinical data.
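The subject-predicate-object idea can be sketched in a few lines of Python. The terms below, the in-memory list, and the `find` helper are illustrative stand-ins only, not QIBO terms or an actual SPARQL endpoint:

```python
# Minimal sketch of a subject-predicate-object ("triple") store with a
# SPARQL-like pattern query. All terms are hypothetical examples.
triples = [
    ("CT_volumetry", "is_a", "quantitative_imaging_biomarker"),
    ("CT_volumetry", "measures", "tumor_volume"),
    ("tumor_volume", "indicator_of", "tumor_burden"),
    ("FDG_PET_SUV", "is_a", "quantitative_imaging_biomarker"),
]

def find(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard,
    analogous to a variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which subjects are quantitative imaging biomarkers?"
markers = [s for s, _, _ in find(p="is_a", o="quantitative_imaging_biomarker")]
# markers == ['CT_volumetry', 'FDG_PET_SUV']
```

A real deployment would hold the triples in a persistent store and expose the pattern query through a standard SPARQL interface rather than a Python helper.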


There are four innovative aspects of the proposed work:

1. We make precise semantic specification uncomplicated for diverse groups of experts who are not proficient in knowledge engineering tools. The key is to bring a level of rigor to the problem space in such a way as to enable cross-disciplinary teams to function without requiring individuals to be experts in the representation of knowledge, inferencing mechanisms, or the computer engineering associated with grid computing or database query design.

2. We map both medical and technical domain expertise into representations well suited to emerging capabilities of the semantic web. The proposed project captures functionality improved from currently available infrastructures such as caGrid, and data integration approaches supported by these infrastructures such as caB2B/caIntegrator of caBIG. It explores how a linked data interface can be created from an object-oriented data interface based on a UML model, annotated with CDEs according to the ISO-11179 metadata registry metamodel standard. The experience could illuminate best practices for combining a semantic web approach on the data interface layer with a model-driven approach for software development, especially since Common Data Elements are widely used to annotate Case Report Form (CRF) templates for clinical research.

3. We address the problem of efficient use of resources to assess limits of generalizability. Determining the biological relevance of a quantitative imaging read-out is a difficult problem. For example, having direct tumor volumetry data in the lung and the pancreas, do these results extend to the liver? First, it is important to establish to what extent an intermediate marker is in the causal pathway to a true clinical endpoint. Second, given the combinatorial complexity that arises with multiple contexts for use, multiple imaging protocols, etc., a logical and mathematical framework is needed to establish how extant study data may be used to establish performance in clinical contexts that were not explicitly part of the original studies. Existing tools rarely if ever relate the logical world of ontology with the biostatistical analyses that characterize diagnostic or prognostic performance. Existing tools do not permit the extrapolation of statistical validation results to semantically related situations. Despite decades of using statistical validation approaches, there is no methodology to formally represent the generalizability of a validation study.

4. We provide this capability in a manner that is accessible to varying levels of collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access.

1.2. Application Scope

Formulate refers to the part of the project most closely associated with caBIG tools, comprising such capabilities as caB2B, caIntegrator, the PODS data elements, and the NBIA connector into Execute. Our ideas associated with the "Linked Data Archive" live here, including the extension to clinical data.

Formulate would be packaged in two forms: 1) as a web service linking to the databases on the project server dev.bbmsc.com; and 2) as a local installation/instance of the functionality for more sophisticated users.

1.3. The Reason Why the Application Is Necessary

Clinical performance assessment of a quantitative imaging biomarker starts with an imaging test that has undergone technical validation and has been approved for use under some initial claim. The initial intended use need not indicate a mechanism of action, and generally does not make a "strong" claim of clinical relevance, pending the accumulation of clinical data.

Imaging tests are sometimes referred to as imaging assays to emphasize a similarity with their non-imaging counterparts, and they may be utilized in different ways. "Integral assays refer to tests that must be performed for the trial to proceed, whereas integrated assays include assays that will be performed on all samples or cases (for imaging studies) but are not required for the trial to proceed and will not inform treatment decisions or actions within the current trial."^2 Integral assays are typically associated with inclusion criteria or primary endpoints, while integrated assays are typically associated with secondary endpoints or exploratory analyses. The requirements for integrated assays are generally less restrictive, but they should also be handled as rigorously as practical, since they may, in fact, become integral at some future point.

We utilize concepts and language from the current FDA process for the qualification of biomarkers^3,4,5 to make clear the specific steps necessary for a sponsoring collaborative to use it for the qualification of putative quantitative imaging biomarkers.

Biomarker reproducibility in the clinical context is assessed using scans from patients who were imaged with the particular modality repeatedly, over an appropriately short period of time, without intervening therapy. The statistical approaches include standard analyses using intraclass correlation and Bland-Altman plots for the assessment of agreement between measurements.^6,7 However, more detailed determinations are also of interest for individual markers. For example, it may be useful to determine the magnitude of observed change in a marker that would support a conclusion of change in the true measurement for an individual patient. It may also be of interest to determine if two modalities measuring the same quantity can be used interchangeably.^8 The diagnostic accuracy of biomarkers (that is, the accuracy in detecting and characterizing the disease) is assessed using methods suitable to the nature of the detection task, such as ROC, FROC, and LROC. In settings where the truth can be effectively considered as binary and the task is one of detection without reference to localization, the broad array of ROC methods will be appropriate.^9,10 Since the majority of imaging biomarkers produce measurements on a continuous scale, methods for estimating and comparing ROC curves from continuous data are needed. In settings where a binary truth is still possible but localization is important, methods from free-response ROC analysis are appropriate.^11,12,13
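As a sketch of the agreement analysis described above, the following computes the Bland-Altman bias and 95% limits of agreement from hypothetical repeat measurements (a full analysis would also report the intraclass correlation and plot difference against mean):

```python
import statistics

def bland_altman_limits(a, b):
    """Given paired measurements a and b (e.g., test vs. retest of the same
    lesions), return the mean difference (bias) and the 95% limits of
    agreement, bias +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical repeat volumetry measurements (mL) on the same five lesions
scan1 = [10.2, 15.1, 8.9, 22.4, 13.0]
scan2 = [10.0, 15.8, 8.5, 21.9, 13.4]
bias, (lo, hi) = bland_altman_limits(scan1, scan2)
```

If repeat differences for a given patient fall within (lo, hi), they are consistent with measurement variability rather than true change, which is the kind of determination discussed above.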


1.4. Terms Used in This Document

The following commonly used terms may be of assistance to the reader.

AAS    Application Architecture Specification
ASD    Application Scope Description
BAM    Business Architecture Model
BRIDG  Biomedical Research Integrated Domain Group
caBIG  Cancer Biomedical Informatics Grid
caDSR  Cancer Data Standards Registry and Repository
CAT    Composite Architecture Team
CBIIT  Center for Biomedical Informatics and Information Technology
CFSS   Conceptual Functional Service Specification
CIM    Computational Independent Model
DAM    Domain Analysis Model
EAS    Enterprise Architecture Specification
ECCF   Enterprise Conformance and Compliance Framework
EOS    End of Support
ERB    Enterprise Review Board
EUC    Enterprise Use-case
IMS    Issue Management System (Jira)
KC     Knowledge Center
NCI    National Cancer Institute
NIH    National Institutes of Health
PIM    Platform Independent Model
PSM    Platform Specific Model
PMO    Project Management Office
PMP    Project Management Plan
QA     Quality Assurance
QSR    FDA's Quality System Regulation
SAIF   Service Aware Interoperability Framework
SDD    Software Design Document
SIG    Service Implementation Guide
SUC    System Level Use-case
SME    Subject Matter Expert
SOA    Service Oriented Architecture
SOW    Statement of Work
UML    Unified Modeling Language
UMLS   Unified Medical Language System
VCDE   Vocabularies & Common Data Elements

When using the template, extend with specific terms related to the particular EUC being documented.

2. Profiles

A profile is a named set of cohesive capabilities. A profile enables an application to be used at different levels and allows implementers to provide different levels of capabilities in differing contexts. Whereas interoperability is the metric for services, applications focus on usability (from a user's perspective) and reusability (from an implementer's).

Include the following three components in each profile:




- Information Profile: identification of a named set of information descriptions (e.g., semantic signifiers) that are supported by one or more operations.
- Functional Profile: a named list of a subset of the operations defined as dependencies within this specification which must be supported in order to claim conformance to the profile.
- Behavioral Profile: the business workflow context (choreography) that fulfills one or more business purposes for this application. This may optionally include additional constraints where relevant.

Fully define the profiles being defined by this version of the application.

When appropriate, a minimum profile should be defined. For example, if an application provides access to several business workflows, then one or more should be deemed essential to the purpose of the application.

Each functional profile must identify which interfaces are required and, when relevant, where specific data groupings, etc., are covered.

When profiling, consider the use of your application in:

- Differing business contexts
- Different localizations
- Different information models
- Partner-to-Partner interoperability contexts
- Product packaging and offerings

Profiles themselves are optional components of application specifications, not necessarily defining dependencies as they do when defining usage with services. Nevertheless, profiles may be an effective means of creating groupings of components that make sense within the larger application concept.

2.1. Information Profiles

- Identify a named set of information descriptions (e.g., semantic signifiers) that are supported by one or more operations.

2.2. Functional Profiles

The validation of a marker for use in defining an endpoint for a clinical trial of therapy requires information that goes beyond the predictive strength of the marker at the individual patient level. Most researchers would agree that trial-level data are needed to support the claim that an endpoint based on a particular biomarker can provide a valid measure of treatment effect. Thus, a meta-analysis of data from trials in which the biomarker was assessed would be the approach to follow.^14,15,16,17 It is unlikely that an adequate number of studies will be available to perform such a meta-analysis for any particular biomarker undergoing evaluation in the process envisioned by this proposal during the award period. However, consideration will be given to performing such meta-analyses for classes of similar biomarkers.

The table below is an adaptation of work by Chakravarty that identifies multiple levels of biomarker claims (Table 1):

| Type of Relationship with Exposure / Intervention | Value of the Biomarker | Example |
| Unreliable interaction between biomarker and the treatment intervention | No value as a surrogate endpoint (SEP) | PSA is a useful biomarker for prostate cancer, but unreliable as an indicator of treatment response |
| Intervention affects the marker favorably but the well-state and disease unfavorably | Little practical use as a SEP, but may have utility in exploratory studies | PVCs as a biomarker of fatal arrhythmias following MI (CAST trials) |
| Intervention affects the endpoint and marker independently | Has value as a SEP, but explains only part of the treatment effect | Most established SEPs (development of OI with HIV antivirals, and mortality) |
| The full effect of the intervention is observed through the biomarker | Ideal SEP | None known at present |

Table 1: Depending on the relationship between an exposure or intervention and a putative biomarker, claims as to its value may be assessed.

2.2.1. Example "Task" Definition: Predictive Biomarkers for Response Assessment

The drug development industry is faced with increasing costs and decreasing success rates. New approaches to our understanding of biology create a proliferation of data; increasing interest in personalized treatments for smaller patient segments in turn requires new capabilities to rapidly assess treatment responses. While advances in imaging technology over the last decade may present opportunities to meet these needs, deployment of qualified imaging biomarkers lags the apparent technology capabilities allowed by these advances. "Qualification" of drug development tools such as imaging biomarkers has been recently defined in the draft FDA Guidance document^18 as "...a conclusion that within a stated context of use, the results of assessment with a [biomarker or] tool can be relied upon to have a specific interpretation in drug development and regulatory review." As is often the case for other diagnostic biomarkers, imaging biomarkers are measured using particular devices or instruments that will usually have been reviewed by FDA if commercially marketed for the management of patients in clinical practice. However, FDA clearance of the imaging device does not imply that the imaging biomarker has been demonstrated to have a qualified use in drug development and evaluation. Consensus methods and qualification evidence needed from large-scale multicenter trials are often lacking; the standardization that allows them is widely acknowledged to be the limiting factor. The current fragmentation in imaging vendor offerings, coupled with the independent activities of individual biopharmaceutical companies and their contract research organizations (CROs), may stand in the way of the greater opportunity to draw these efforts together. A more productive overall structure for the collective industries may be provided by an integrative, collaborative approach to the methodology and activity of qualifying mature candidate imaging biomarkers. This could encourage innovative development of new imaging biomarkers with the promise of qualification as they mature, while fostering innovative development of therapies that can rely on cost-effective imaging biomarkers that have been established as qualified.

The Response Evaluation Criteria in Solid Tumors (RECIST)^19 metric has become a default standard for quantifying treatment-induced changes in disease status. RECIST is functional and its benefit is its simplicity.^20 However, significant concerns have been raised about the precision, accuracy, and sensitivity of using electronic calipers to measure a single line-length, the longest diameter (LD), as the basis for RECIST.^21,22 Semi-automated image analysis algorithms could address some of these issues. The RECIST 1.1 Work Group alluded to a future state in which the variability of measurement could be decreased by "software tools that calculate the maximal diameter for a perimeter of a tumor".^23 In theory, demarcating the boundary of a mass on every slice on which it is visible, and then interrogating every slice to find the greatest distance between any two in-plane pixels, could eliminate some of the subjectivity in selecting the slice for measurement, decrease the judgment associated with how to draw the line, and reduce the subjectivity associated with the use of electronic calipers. However, questions would still remain about how well any single line can be used as a proxy for tumor burden, particularly when the geometries of masses become complex.
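The boundary-interrogation idea can be sketched as follows. The coordinates below are invented for illustration; a production implementation would work from segmentation masks and would use convex-hull / rotating-calipers methods rather than brute force:

```python
import math

def longest_in_plane_diameter(slices):
    """Given, per slice, a list of (x, y) boundary-point coordinates of a
    segmented mass, return the slice index and length of the greatest
    distance between any two in-plane boundary points (brute force)."""
    best = (None, 0.0)
    for i, boundary in enumerate(slices):
        for j, a in enumerate(boundary):
            for b in boundary[j + 1:]:
                d = math.dist(a, b)
                if d > best[1]:
                    best = (i, d)
    return best

# Hypothetical boundary points (mm) on two adjacent slices
slices = [
    [(0, 0), (4, 0), (4, 3), (0, 3)],   # slice 0: 4 x 3 rectangle, diagonal 5
    [(0, 0), (6, 0), (3, 2)],           # slice 1: wider, flatter triangle
]
idx, ld = longest_in_plane_diameter(slices)
# the longest diameter (6 mm) is found on slice 1, not slice 0
```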

This project facilitates statistically valid and clinically meaningful assessment of new biomarkers for both development and testing purposes. This will facilitate the extension to larger analyses as needed to assess the value of the markers, and provide efficient means to pursue commercial product offerings that are properly characterized in terms of their performance (Fig. 9).

Figure 1: Key questions the informatics services would address for putative biomarkers and tests that measure them.

2.2.2. Approach to Assess Clinical Utility of Biomarkers

The accuracy in predicting patient outcome will be of interest for imaging biomarkers. Patient outcomes can be categorically assessed as events at specific time points, such as the type of response at the end of a course of therapy, or whether the patient is alive at 1 year. Alternatively, patient outcomes can be defined as time-to-event, such as progression-free survival (PFS) or overall survival (OS). The prediction problem will be approached from two complementary but distinct perspectives. They lead to two types of information, both of which are important in the evaluation of imaging as a predictor.

(a) The first perspective is the evaluation of the positive and negative predictive value of a test. In such an evaluation, biomarker values are used to classify patients as "responders" or "non-responders" by imaging, and rates of response or time-to-event data (e.g., PFS, OS) are compared between these groups of patients. Discrete data methods are used for categorical outcomes, and methods from survival analysis (log-rank test, Cox regression) are used in the analysis of time-to-event data, such as patient survival or time-to-recurrence.^24 In studies involving longitudinal follow-up for endpoints such as tumor recurrence, interval-censored data may often result. Appropriate methods will be utilized for analyzing data of this type.^25 If a consensus threshold for defining response by imaging is not available, regression analysis is used, in which the test result is entered as a predictor variable.
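The group comparison in perspective (a) can be illustrated with a bare-bones Kaplan-Meier estimator, dichotomizing patients by a baseline marker; the follow-up times below are invented, and a real analysis would add the log-rank test and Cox regression cited above:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times: follow-up times; events:
    1 = event observed (e.g., progression), 0 = censored. At each event
    time, survival is multiplied by (at_risk - 1) / at_risk; censored
    subjects only leave the risk set. Events are processed before
    censorings at tied times, per the usual convention."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e == 1:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# Hypothetical months to progression, dichotomized by a baseline marker
high = kaplan_meier([2, 4, 4, 7, 10], [1, 1, 0, 1, 0])   # "high" marker group
low = kaplan_meier([5, 8, 12, 14, 20], [1, 0, 1, 0, 0])  # "low" marker group
```

Comparing the two curves (and formally testing the difference) is what the log-rank test does; the hazard ratio from Cox regression then quantifies "by how much" the groups differ.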

(b) The second perspective is an adaptation of ROC analysis, but with a reference standard defined on the basis of a future event rather than the contemporaneous truth, as is typically the case in ROC analysis. For example, if the goal of the biomarker is to predict 1-year survival, the test result will be the value of the biomarker at an early time point, such as the change from baseline to a specific point during therapy, and the reference standard will be defined by the (binary) vital status of the patient at 1 year. This setting gives rise to "time-dependent ROC" analysis methods, which address the technical challenge resulting from the fact that the reference standard information may not be available due to censoring. The latter can be due to patient drop-out or to reaching the end of the follow-up period of the study before the 1-year mark.^26,27 Time-dependent ROC analysis can consider multiple future time points, thus making it possible to assess the longitudinal pattern of the predictive performance of the biomarker.
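For the standard (fully observed truth) case, the empirical ROC area for a continuous marker can be sketched via its Mann-Whitney interpretation. The marker values and outcomes below are invented; a real time-dependent analysis would additionally need censoring-aware estimators:

```python
def empirical_auc(marker, truth):
    """Empirical ROC area for a continuous marker against a binary
    reference standard, via the Mann-Whitney statistic: the probability
    that a randomly chosen positive case scores higher than a randomly
    chosen negative case (ties count 1/2)."""
    pos = [m for m, t in zip(marker, truth) if t == 1]
    neg = [m for m, t in zip(marker, truth) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical: early marker value vs. 1-year vital status (1 = alive)
scores = [0.9, 0.8, 0.7, 0.55, 0.4, 0.3]
status = [1, 1, 0, 1, 0, 0]
auc = empirical_auc(scores, status)  # 8 of 9 positive/negative pairs ordered correctly
```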

The clinical questions addressed by the two perspectives are different. For example, if prediction of survival is the task, the first approach compares survival among patients with "high" or "low" values of the marker or, more broadly, estimates the hazard ratio corresponding to the biomarker. Stated otherwise, the first approach addresses the question "Do patients with 'high' values of the biomarker live longer, and by how much?" The second approach addresses the following question: "Among all patients who will survive past time t, what proportion can be correctly identified by the biomarker at baseline?" Such a question would be important if a change in therapy is contemplated on the basis of the biomarker information.


A tiered approach to the validation of a biomarker for clinical use will begin with studies of reproducibility as described above. The next steps would then be to conduct studies of diagnostic or predictive accuracy, depending on the intended clinical use of the biomarker. In the specific context of this proposal, biomarker developers will utilize the development set of cases to develop and fine-tune the algorithms involved. The resources described in Aim 2 will be used in this phase. After the development is complete, the biomarker will then proceed to the testing phase, utilizing the resources described in Aim 3. Cross-validation and bootstrap approaches can be used to frame the clinical development and testing of the biomarker.

We need to collect as much information on the provenance of the data as possible (at least its origin, and ideally the transformations at all stages of processing). A reader of the meta-analysis should be able to put together an independent set of data with the same characteristics and end up with the same answer.

To circumvent bias and also overcome some of the concerns that industry might have related to releasing proprietary information:

- Define the hypotheses that we would seek to explore using the dataset, and thereafter design the experiments that would allow these explorations.
- Establish rules for the fraction of cases that are submitted, e.g., the first 70% enrolled in chronological order.
- Utilize a masked index of all the subjects in all arms (active intervention arm(s) and control arm) of multiple trials from multiple companies that meet the selection criteria (imaging modality, anatomic area of concern, tumor type, etc.), but without any of the image sets, outcomes data, or any other data. There would be no need to know the specific trials being contributed, the interventions applied, or whether the interventions had been successful. Using a random number scheme, a predetermined number of subjects from the entire donated set could be selected (perhaps with mechanisms to ensure that no single trial or vendor dominated the collection). After the random selection using the masked index, a formal request for the image and outcome data could be issued by a trusted broker (RSNA, perhaps) to the donors, but only for the randomly selected subjects. The data would be de-identified as to corporate source, trial, and of course subject ID.

By choosing subjects at random from both successf
ul and unsuccessful trials, we
should have eliminated many of the potential sources of bias. Since the data would be
pooled and de
-
identified but not attributable, any anxiety related to releasing proprietary
information should be lessened.
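The masked-index selection described above can be sketched in a few lines. The index format (opaque subject IDs paired with blinded trial tags), the per-trial cap, and all identifiers are hypothetical assumptions, not a specified QI-Bench interface:

```python
import random
from collections import defaultdict

def select_subjects(masked_index, n_select, max_per_trial, seed=0):
    """Randomly pick subjects from a masked index, capping any one trial's share."""
    rng = random.Random(seed)
    shuffled = masked_index[:]
    rng.shuffle(shuffled)            # random number scheme over the whole index
    counts = defaultdict(int)
    chosen = []
    for subject_id, trial_tag in shuffled:
        if counts[trial_tag] < max_per_trial and len(chosen) < n_select:
            counts[trial_tag] += 1
            chosen.append(subject_id)
    return chosen  # the trusted broker requests data for these IDs only

# Masked index: opaque subject IDs paired with blinded trial tags.
index = [("S%03d" % i, "T%d" % (i % 4)) for i in range(40)]
picked = select_subjects(index, n_select=10, max_per_trial=3)
```

The cap is one concrete form of the "no single trial or vendor dominates" mechanism; the output list is the only artifact that would leave the broker, so no image or outcome data is exposed during selection.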

2.3. Behavioral Profiles

- The business workflow context (choreography) that fulfills one or more business purposes for this application. This may optionally include additional constraints where relevant.


3. Conformance Assertions

Conformance Assertions are testable, verifiable statements made in the context of a single RM-ODP Viewpoint (ISO Standard Reference Model for Open Distributed Processing, ISO/IEC IS 10746 | ITU-T X.900). They may be made in four of the five RM-ODP Viewpoints, i.e. Enterprise, Information, Computational, and/or Engineering. The Technology Viewpoint specifies a particular implementation/technology binding that is run within a 'test harness' to establish the degree to which the implementation is conformant with a given set of Conformance Assertions made in the other RM-ODP Viewpoints. Conformance Assertions are conceptually non-hierarchical. However, Conformance Assertions may have hierarchical relationships to other Conformance Assertions within the same Viewpoint (i.e. be increasingly specific). They are not, however, hierarchically related across Viewpoints.
4. References

1. Jaffer FA, Weissleder R. Molecular imaging in the clinical arena. JAMA. 2005;293(7):855-62.

2. Quon A, Gambhir SS. FDG-PET and beyond: molecular breast cancer imaging. J Clin Oncol. 2005;23(8):1664-73.

3. Smith JJ, Sorensen AG, Thrall JH. Biomarkers in imaging: realizing radiology's future. Radiology. 2003;227(3):633-8.

4. Li W, et al. Noninvasive imaging and quantification of epidermal growth factor receptor kinase activation in vivo. Cancer Res. 2008;68(13):4990-7.

5. Klibanov AL. Ligand-carrying gas-filled microbubbles: ultrasound contrast agents for targeted molecular imaging. Bioconjug Chem. 2005;16(1):9-17.

6. Hehenberger M. Information Based Medicine: From Biobanks to Biomarkers. IBM Healthcare & Life Sciences: High Tech Connections (HTC) Forum, 2007.

7. Sadot A, et al. Toward verified biological models. IEEE/ACM Trans Comput Biol Bioinform. 2008;5(2):223-34.

8. Erson EZ, Cavusoglu MC. A Software Framework for Multiscale and Multilevel Physiological Model Integration and Simulation. In: 30th Annual International IEEE EMBS Conference, 2008; Vancouver, British Columbia, Canada.

9. Feng D. Molecular Imaging and Biomedical Process Modeling. In: 2nd Asia-Pacific Bioinformatics Conference (APBC2004), 2004.

10. Chen J, et al. How Will Bioinformatics Impact Signal Processing Research? IEEE Signal Processing Magazine. 2003:16-26.

11. Toretsky J, et al. Preparation of F-18 labeled annexin V: a potential PET radiopharmaceutical for imaging cell death. Nucl Med Biol. 2004;31(6):747-52.

12. Zijlstra S, Gunawan J, Burchert W. Synthesis and evaluation of a 18F-labelled recombinant annexin-V derivative, for identification and quantification of apoptotic cells with PET. Appl Radiat Isot. 2003;58(2):201-7.

13. Biomarkers Definitions Working Group. Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther. 2001;69(3):89-95.

14. Zhao B, et al. Evaluating variability in tumor measurements from same-day repeat CT scans of patients with non-small cell lung cancer. Radiology. 2009;252(1):263-72.

15. Sheikh HR, Sabir MF, Bovik AC. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans Image Process. 2006;15(11):3440-51.

16. Buckler AJ, Boellaard R. Standardization of quantitative imaging: the time is right, and 18F-FDG PET/CT is a good place to start. J Nucl Med. 2011;52(2):171-2.

17. Wong D. Liaison Committee Discusses Possible Radiotracer Sharing Clearinghouse. In: American College of Neuropsychopharmacology, 2006.

18. Wong DF. Imaging in drug discovery, preclinical, and early clinical development. J Nucl Med. 2008;49(6):26N-28N.

19. Creating the gene ontology resource: design and implementation. Genome Res. 2001;11(8):1425-33.

20. Ashburner M, et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet. 2000;25(1):25-9.

21. Romero-Zaliz RC, et al. A Multiobjective Evolutionary Conceptual Clustering Methodology for Gene Annotation Within Structural Databases: A Case of Study on the Gene Ontology Database. IEEE Transactions on Evolutionary Computation. 2008;12(6):679-701.

22. Brazma A, et al. Minimum information about a microarray experiment (MIAME) - toward standards for microarray data. Nat Genet. 2001;29(4):365-71.

23. Sirota M, et al. Discovery and preclinical validation of drug indications using compendia of public gene expression data. Sci Transl Med. 2011;3(96):96ra77.

24. Brown MS, et al. Database design and implementation for quantitative image analysis research. IEEE Trans Inf Technol Biomed. 2005;9(1):99-108.

25. Maier D, et al. Knowledge management for systems biology: a general and visually driven framework applied to translational medicine. BMC Syst Biol. 2011;5:38.

26. Toyohara J, et al. Evaluation of 4'-[methyl-14C]thiothymidine for in vivo DNA synthesis imaging. J Nucl Med. 2006;47(10):1717-22.

27. Yuk SH, et al. Glycol chitosan/heparin immobilized iron oxide nanoparticles with a tumor-targeting characteristic for magnetic resonance imaging. Biomacromolecules. 2011;12(6):2335-43.

28. Veenendaal LM, et al. In vitro and in vivo studies of a VEGF121/rGelonin chimeric fusion toxin targeting the neovasculature of solid tumors. Proc Natl Acad Sci U S A. 2002;99(12):7866-71.

29. Wen X, et al. Biodistribution, pharmacokinetics, and nuclear imaging studies of 111In-labeled rGel/BLyS fusion toxin in SCID mice bearing B cell lymphoma. Mol Imaging Biol. 2011;13(4):721-9.

30. Wang HH, et al. Durable mesenchymal stem cell labelling by using polyhedral superparamagnetic iron oxide nanoparticles. Chemistry. 2009;15(45):12417-25.

31. Molecular Imaging and Contrast Agent Database (MICAD). 2011. http://www.ncbi.nlm.nih.gov/books/NBK5330/

32. SPARQL, a query language and protocol for RDF access released by the W3C RDF Data Access Working Group. Available from: http://www.w3.org/wiki/SparqlImplementations, accessed 27 November 2011.






1. http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=820&showFR=1, accessed 28 February 2010.

2. Jacobson JW, http://cancerdiagnosis.nci.nih.gov/pdf/PACCT_Assay_Standards_Document.pdf, accessed 7 February 2010.

3. Goodsaid F and Frueh F, Process map proposal for the validation of genomic biomarkers, Pharmacogenomics (2006) 7(5), 773-782.

4. Goodsaid FM and Frueh FW, Questions and answers about the Pilot Process for Biomarker Qualification at the FDA, Drug Discovery Today, Vol 4, No. 1, 2007.

5. Goodsaid FM et al., Strategic paths for biomarker qualification, Toxicology 245 (2008) 219-223.

6. Fleiss J, The Design and Analysis of Clinical Experiments, Wiley, New York, 1986.

7. Bland M and Altman D, Measuring agreement in method comparison studies, Stat Methods Med Res 1999; 8: 135.

8. Barnhart H and Barboriak D, Applications of the Repeatability of Quantitative Imaging Biomarkers: A Review of Statistical Analysis of Repeat Data Sets, Translational Oncology (2009) 2, 231-235.

9. Pepe MS, The Statistical Evaluation of Medical Tests for Classification and Prediction, Oxford University Press, New York, NY, 2003.

10. Zhou Z, Obuchowski N, and McClish D, Statistical Methods in Diagnostic Medicine, Wiley, New York, 2002.

11. Chakraborty DP and Berbaum KS, Observer studies involving detection and localization: Modeling, analysis and validation, Medical Physics 31(8), 2313-2330, 2004.

12. Edwards DC, Kupinski MA, Metz CE, and Nishikawa RM, Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model, Medical Physics 29(12), 2861-2870, 2002.

13. Bandos A, Rockette H, Song T, and Gur D, Area under the Free-Response ROC Curve (FROC) and a Related Summary Index, Biometrics 2009; 65, 247-256.

14. Sargent D, Rubinstein L, Schwartz L, Dancey J, Gatsonis C, Dodd L, and Shankar L, Validation of novel imaging methodologies for use as cancer clinical trial end-points, European Journal of Cancer 2009;45(2):290-9.

15. Korn EL, Albert PS, and McShane LM, Assessing surrogates as trial endpoints using mixed models, Statistics in Medicine 24, 163-182, 2005.

16. Dodd LE and Korn EL, Lack of generalizability of sensitivity and specificity with treatment effects, Stat Med 2008 May 10;27(10):1734-44.

17. Burzykowski T, Molenberghs G, and Buyse M, Editors, The Evaluation of Surrogate Endpoints, Springer, New York, 2005.

18. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM230597.pdf, accessed 29 December 2010.

19. Eisenhauer EA, Therasse P, Bogaerts J, et al., New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1), Eur J Cancer 2009;45:228-247.

20. Prasad SR, Jhaveri KS, Saini S, Hahn PF, Halpern EF, and Sumner JE, CT tumor measurement for therapeutic response assessment: comparison of unidimensional, bidimensional, and volumetric techniques - initial observations, Radiology 2002;225(2):416-419.

21. Schwartz L, Curran S, Trocola R, et al., Volumetric 3D CT analysis - an early predictor of response to therapy, J Clin Oncology 2007; ASCO Annual Meeting Proceedings Part I, Vol 25, No. 18S (June 20 Supplement), 4576.

22. Suzuki C, Jacobsson H, and Hatschek T, Radiologic measurements of tumor response to treatment: Practical approaches and limitations, RadioGraphics 2008; 28:329-344.

23. Op cit., Appendix II, "Measurement of lesions", page 243.

24. Kalbfleisch J and Prentice R, The Statistical Analysis of Failure Time Data, 2nd Ed., Wiley, New York, 2002.

25. Lindsey J and Ryan L, Methods for interval-censored data, Statistics in Medicine 17, 219-238, 1998.

26. Heagerty PJ, Lumley T, and Pepe MS, Time-dependent ROC curves for censored survival data and a diagnostic marker, Biometrics 56(2), 337-344, 2000.

27. Heagerty PJ and Zheng Y, Survival model predictive accuracy and ROC curves, Biometrics 61(1), 92-105, 2005.