“Execute” Architecture Specification - QI-Bench


QI-Bench “Execute” AAS, Rev 0.2, BBMSC

QI-Bench “Execute” Architecture Specification

July 2011
Rev 0.2

Required Approvals:

Author of this Revision:  Andrew J. Buckler
System Engineer:          Andrew J. Buckler

(Print Name / Signature / Date)


Document Revisions:

Revision  Revised By  Reason for Update    Date
0.1       AJ Buckler  Initial version      June 2011
0.2       AJ Buckler  Fleshed out content  July 2011
































Table of Contents

1. Executive Summary
1.1. Purpose and Scope
1.2. Terms Used in This Document
2. Structure of the Application
2.1. Datastores
2.1.1. Reference Data Set Manager
2.1.2. Batch Analysis Scripts
2.1.3. Assessment DB
2.2. Activities
2.3. Assumptions
2.4. Dependencies
3. System-level Requirements
3.1. Functionality Staged for First Development Iteration
3.2. Performance
3.3. Quality Control
3.4. “Super User” and Service Support
3.5. Upgrade / Transition
3.6. Security
3.7. Requirements for Subsequent Iterations
4. Deployment Model(s)
5. Implementation Considerations and Recommendations for Technical Realization


1. Executive Summary

The purpose of the QI-Bench project is to aggregate evidence relevant to the process of implementing imaging biomarkers, ensuring that data of sufficient quality and quantity are generated to support the responsible use of these new tools in clinical settings. The efficiencies that follow from this approach could translate into defined processes that can be sustained to develop and refine imaging diagnostic and monitoring tools for the healthcare marketplace, enabling sustained progress in improving healthcare outcomes.

1.1. Purpose and Scope

Specifically, the “Execute” app is developed to allow users to:

- Compose and iterate batch analyses on reference data.
- Accumulate quantitative read-outs for analysis.

From a technology point of view, Execute refers to the part of the project most closely associated with Kitware, comprising MIDAS, BatchMake, and the Condor grid. Our ideas associated with the Reference Data Set Manager and the Batch Analysis Service live here. It results in annotation and image mark-up across large data sets.

Most literally, Execute would be packaged in two forms: 1) as a web service linking to the databases on the project server dev.bbmsc.com; and 2) as a local installation/instance of the functionality for more sophisticated users.

1.2. Terms Used in This Document

The following are commonly used terms that may be of assistance to the reader.

AAS    Application Architecture Specification
ASD    Application Scope Description
BAM    Business Architecture Model
BRIDG  Biomedical Research Integrated Domain Group
caBIG  Cancer Biomedical Informatics Grid
caDSR  Cancer Data Standards Registry and Repository
CAT    Composite Architecture Team
CBIIT  Center for Biomedical Informatics and Information Technology
CFSS   Conceptual Functional Service Specification
CIM    Computational Independent Model
DAM    Domain Analysis Model
EAS    Enterprise Architecture Specification
ECCF   Enterprise Conformance and Compliance Framework
EOS    End of Support
ERB    Enterprise Review Board
EUC    Enterprise Use-case
IMS    Issue Management System (Jira)
KC     Knowledge Center
NCI    National Cancer Institute
NIH    National Institutes of Health
PIM    Platform Independent Model
PSM    Platform Specific Model
PMO    Project Management Office
PMP    Project Management Plan
QA     Quality Assurance
QSR    FDA’s Quality System Regulation
SAIF   Service Aware Interoperability Framework
SDD    Software Design Document
SIG    Service Implementation Guide
SUC    System Level Use-case
SME    Subject Matter Expert
SOA    Service Oriented Architecture
SOW    Statement of Work
UML    Unified Modeling Language
UMLS   Unified Medical Language System
VCDE   Vocabularies & Common Data Elements

When using the template, extend with specific terms related to the particular EUC being
documented.

2. Structure of the Application

- Describe the application in macro and its overall organization.
- Include an executive summary of the overall application workflow (e.g., a concise overall description of the responsibilities of this application, and the roles (if any) that it plays in interaction patterns such as client-server, service-to-service, etc.).
- Enumerate the behavioral interfaces that are known to be needed, with a concise description of each.




- Consider representation formalism and the intended audience, not necessarily rigorously expressing the content in UML.

The behavioral model is understood in the context of the following logical information model:



Execute is defined as an implementation of the following behavioral model:


2.1. Datastores

2.1.1. Reference Data Set Manager

- Connection from Formulate, which has formed the Reference Data Sets.
- PODS data according to the information model for non-imaging data.
- RDSM REST services: uses jQuery to wrap Ajax calls; JavaScript helper classes can be called from Ruby.
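The call pattern behind those helpers can be sketched as follows. This is a minimal illustration in Python rather than the jQuery/Ruby helpers named above; the base URL comes from the appendix listing, while the `method` query parameter and the `build_rest_url` helper are assumptions for illustration.

```python
from urllib.parse import urlencode

# Base endpoint from the appendix (Section 6).
MIDAS_REST_BASE = "http://dev.bbmsc.com/midas/api/rest"

def build_rest_url(method, **params):
    """Build the URL for one RDSM (MIDAS) REST method call.

    Passing the method name as a 'method' query parameter is an
    assumption here; the available method names are those returned
    by midas.system.listMethods (see the appendix listing).
    """
    query = {"method": method}
    query.update(params)
    # Sort parameters for a deterministic query string.
    return MIDAS_REST_BASE + "?" + urlencode(sorted(query.items()))

# The jQuery helpers would issue an Ajax GET against such a URL.
echo_url = build_rest_url("midas.system.echo")
```

A Ruby helper class would wrap the same URL construction and issue the HTTP request, parsing the `<rsp stat="ok">` response shown in the appendix.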


2.1.2. Batch Analysis Scripts

- Where the BatchMake scripts are stored.
- Grammar for the scripting language: what exists presently, plus extensions for the manual and semi-automated steps performed by RIS systems and PACS.
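As a sketch of what those grammar extensions might need to capture, the fragment below models pipeline steps tagged by execution mode. The `PipelineStep` structure, the mode names, and the `validate` helper are illustrative assumptions, not the actual BatchMake grammar.

```python
from dataclasses import dataclass

# Execution modes the extensions would distinguish: automatic steps run
# unattended, while manual and semi-automated steps produce worklist
# items for a human reader (e.g., via an RIS interface).
MODES = {"automatic", "semi-automatic", "manual"}

@dataclass
class PipelineStep:
    name: str
    mode: str          # one of MODES
    command: str = ""  # script fragment; required for automatic steps

def validate(steps):
    """Reject unknown modes and automatic steps that lack a command."""
    for step in steps:
        if step.mode not in MODES:
            raise ValueError(f"unknown mode: {step.mode!r}")
        if step.mode == "automatic" and not step.command:
            raise ValueError(f"automatic step {step.name!r} needs a command")
    return True
```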

2.1.3. Assessment DB

- AIM outputs according to templates in the Biomarker DB created by “Specify”.

2.2. Activities

Activities (each realized through Model, View, and Controller elements):

- Package algorithm or method using the batch analysis service API
- Initiate a run
- Sequence through tasks (potentially with help of the Condor grid)





- Run interfaced algorithm on a data set
- Create ground truth or other reference annotation and markup
- (For manual step) process worklist item
- (For semi-automated step) process human step
- Any of these activities may be performed on behalf of one or more sponsors by a trusted broker, so as to protect individual identities and/or perform on sequestered data




2.3. Assumptions

In this section please address the following questions:

- Upon what services does this specification depend (underpinning infrastructure, other HL7 services, etc.)?
- Are there any key assumptions that are being made?

2.4. Dependencies

List of capabilities (aka responsibilities or actions) that the application’s workflow depends on, with a description of what each does in business terms.

Description                      Doc Title         Doc Version
<business friendly description>  <document title>  <document version>

3. System-level Requirements

Note that the normal process of requirements development does not guarantee that adjacent requirements are directly related. In situations where requirements are tightly related, or where requirements are to be considered in the context of an upper-level requirement, explicit parent-child relationships have been created. These can be identified by the requirement numbering: child requirements have numbers of the form XX.Y, indicating the Yth child of requirement XX.
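The XX.Y convention can be applied mechanically; the helper below is a small illustration (the function name is arbitrary).

```python
def parent_of(requirement_id):
    """Return the parent of a child requirement numbered XX.Y,
    or None when the requirement is top-level."""
    head, sep, _ = requirement_id.rpartition(".")
    return head if sep else None

# "12.3" is the 3rd child of requirement 12; "12" itself has no parent.
```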

The following list of attributes is used:

- Origin: Identifies the project or Enterprise Use Case that originated the requirement.
- Comment / TI: Additional information regarding the requirement. This may include information as to how the requirement may be tested (i.e., the test indication).




Design Guideline


Used to identify requirements that are to be taken as
guidance or are otherwise not testable. In such cases the phrase “Not a testable
requirement” will appear.

Requirements may, and often do, apply to multiple components. In such cases, the
Component attribute will identify all components where the requirement applies
(including components that do not apply to this Enterprise Use Case).

The Origin of a requirem
ent is intended to identify the program for which the
requirement was originally defined. Often this is SUC (System Use Case) but it may be
different.

3.1. Functionality Staged for First Development Iteration

Model: Requirements placed on input, output, or significant intermediate data used by the application.

View: Supported views and other GUI requirements. Note: use of figures for screen shots is encouraged as necessary to show an essential feature, but not to show unnecessary implementation detail.

Controller: Functional requirements on logical flow.

Requirements are grouped by Activity below; the Origin of each (see Section 3) is shown in brackets, and each applies to one or more of Model, View, and Controller.

Activity: Package algorithm or method using batch analysis service API [EA]

- Manual wrapping can also be performed using the “Application Harness” interface provided by the Batch Analysis Service. [SUC]
- Depending on the experimental objective, two basic possibilities exist for creating the image markups: [SUC]
- The user will implement the Application Harness API to interface a candidate biomarker implementation to the Batch Analysis Service. [SUC]
- Alternatively, the reference implementation for the given biomarker will be utilized. [SUC]
- Support two levels of Application Harness: [SUC]
- Full patient work-up: the candidate algorithm or method encompasses not only individual tumor mark-up but also works across all tumors; just call the algorithm with the whole patient data. [SUC]
- Individual tumor calculations: the candidate algorithm expects to be invoked on an individual tumor basis; requires a reference implementation for navigating across measurable tumors and calling the algorithm on each one. [SUC]
- Create electronic task forms for each of the manual tasks (e.g., acquisition or reading) to be analyzed in the study. [SUC]

Activity: Set up and initiate a run [EA]

- Use the Profile to automatically follow the workflow “Set up an Experimental Run” to describe the Processing Pipeline for a set of validation script(s). [SUC]
- A new user can now open the same study in the viewer and, when they annotate, see the new annotations and markups associated with the algorithm. [SUC]
- Annotations of read-outs. [SUC]


- Customizing the Image Annotation Tool representation of annotations. [SUC]
- Customizing the annotation tool. [SUC]
- There are essentially two types of experimental studies supported: [SUC]
- One is a correlative analysis where an imaging measure is taken and a coefficient of correlation is computed with another parameter, such as clinical outcome. [SUC]
- A second type is where a measure of accuracy is computed, which necessitates a representation of the “ground truth,” whether this be defined in a manner traceable to physical standards or alternatively by a consensus of “experts” where physical traceability is not possible or not feasible. This workflow is utilized in the latter case. [SUC]
- Definition of what constitutes “ground truth” for the data set is established and has been checked as to its suitability for the experimental objective it will support. [SUC]
- The investigators define annotation instructions that specify in detail what the radiologist/reader should do with each type of reading task. [SUC]
- It is perhaps helpful to describe two fundamental types of experiments: [SUC]
- those that call for new acquisitions (and post-processing), [SUC]
- vs. those which utilize already acquired data (and only require the post-processing). [SUC]
- However, in the most flexible generality, these are both understood simply as how one defines the starting point and which steps are included in the run. [SUC]
- As such, they may both be supported using the same toolset. [SUC]
- In the former case, there is a need to support means by which the physical phantom or other imaging object is present local to a device, where the tool provides “directions” for how and when the scans are done, with capture of the results; [SUC]
- in the latter case, the tool must interface with the candidate implementation and may run local to the user or remote from them. [SUC]
- In either case, the fundamental abstraction of an imaging pipeline is useful, where the notion is that any given experiment describes the full pipeline but focuses on granularity at the stage of the pipeline of interest. [SUC]
- Based on the pipeline definition, describe the experimental run to include, for example, set-up for image readers so as to guide the user through the task, and save the results. [SUC]
- It could, for example, create an observation for each tumor, each in a different color. Or, one tumor per session. This depends on the experiment design. This activity also tailors the order in which the steps of the reading task may be performed. [SUC]
- Design an experiment-specific tumor identification scheme and install it in the tumor identification function in the task form preparation tool. [SUC]
- Define the types of observations, characteristics, and modifiers to be collected in each reading session and program them into the constrained-vocabulary menus. (May be done only in ground truth annotations in early phases.) [SUC]





- Determine whether it will be totally algorithmic or will have manual steps. There are a number of reasons for manual steps, including, for example, whether an acquisition is needed, whether the reader chooses his/her own seed stroke, whether manual improvements are permitted after the algorithm has made its attempt, or combinations of these. Allow for manual, semi-automatic, and automatic annotation. [SUC]
- Specify the sequence of steps associated with acquisitions, for example of a phantom shipped to the user, of new image data that will be added to the Reference Data Set(s). [SUC]
- Specify the sequence of steps that the reader is expected to perform on each type of reading task for data in the Reference Data Set(s). [SUC]

Activity: Sequence through tasks [EA]

- Avoid downloading large datasets every time an experiment is run. The first time the experiment is run, query the Image Archive for the selected datasets and cache them locally for further retrieval. [SUC]
- If a dataset is not stored locally (for example, if the cache is full), automatically fetch the data and transfer it to the client. [SUC]
- Store the metadata information associated with a given dataset as well as a full data provenance record. [SUC]
- Record audit trail information needed to assure the validity of the study. [SUC]
- Install databases and tools as needed at each site, loading the databases with initial data. This includes installing image databases at all actor sites, installing clinical report databases at all clinical sites, and installing annotation databases at Reader, Statistician, and PI sites. [SUC]
- Represent the Processing Pipeline so that it may be easily shared between organizations. [SUC]
- One or more Processing Pipeline scripts are defined and may be executed on one or more Reference Data Sets. [SUC]
- Once the Processing Pipeline script is written, the Batch Analysis Service can run the script locally or across distributed processing jobs expressed as a directed acyclic graph (DAG). [SUC]
- When distributing jobs, the Batch Analysis Service first creates the appropriate DAG based on the current input script. [SUC]
- Then, when running the job, the user can monitor the progress of each job distributed with the Batch Analysis Service. [SUC]
- Results of the processing at each node of the grid are instantly transferred back to the QI-Bench GUI and can be used to generate a dashboard for batch processing. [SUC]
- This permits quickly checking the validity of the processing by comparing the results with known baselines. [SUC]
- Each row of the dashboard corresponds to a particular processing stage and is reported in red if the result does not meet the validation criterion. [SUC]
- Fully-automated methods can just run at native compute speed. For semi-automated methods: [SUC]





- Assign each task form to one or more human participants, organizing the assignments into worklists that could specify, for example, when each task is performed and how many tasks are performed in one session. [SUC]
- Record audit trail information needed to assure the validity of the study. [SUC]

Activity: Create ground truth or other reference annotation and markup [EA]

- Create nominal ground truth annotations. (This differs from ordinary reading tasks by removing any tool restrictions and by allowing the reader much more time to do the job right. It may entail presenting several expert markups for comparison, selection, or averaging.) [SUC]
- The investigators assign reading tasks to radiologists/readers, placing a seed annotation in each task, producing worklists. [SUC]
- The radiologists/readers prepare seed annotations for each of the qualifying biological features (e.g., tumors) in each of the cases, attaching the instructions to each seed annotation and assuring that the seed annotations are consistent with the instructions. [SUC]
- The radiologists/readers annotate according to a reference method (e.g., RECIST), to allow comparative studies should that be within the objectives of the experiments on this Reference Data Set. [SUC]
- Inspect and edit annotations, typically as XML, to associate them with other study data. [SUC]
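To illustrate the DAG-based distribution described in the “Sequence through tasks” requirements, the sketch below derives parallel dispatch batches from job dependencies using Python’s standard library. The job names are hypothetical; the real DAG would be derived from the Processing Pipeline script by the Batch Analysis Service (e.g., for submission to a Condor grid).

```python
from graphlib import TopologicalSorter

# Hypothetical jobs: each maps to the set of jobs it depends on.
jobs = {
    "fetch_dataset": set(),
    "segment_tumor_1": {"fetch_dataset"},
    "segment_tumor_2": {"fetch_dataset"},
    "aggregate_readouts": {"segment_tumor_1", "segment_tumor_2"},
}

ts = TopologicalSorter(jobs)
ts.prepare()
batches = []
while ts.is_active():
    ready = list(ts.get_ready())   # jobs whose dependencies are satisfied
    batches.append(sorted(ready))  # each batch could run in parallel on the grid
    ts.done(*ready)

# batches: fetch first, then both segmentations in parallel, then aggregation.
```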




3.2. Performance

Non-functional requirements, such as how fast a capability needs to run.

- User interface display and update time needs to feel quick. [AAS] (Comment / TI: the implication is that the application can’t load whole ontologies, but must rather work incrementally.)


3.3. Quality Control

Support for internal and/or external quality control.
3.4. “Super User” and Service Support

Additional capabilities needed by privileged users.

- Need a way to assess irregularities in QIBO instances and to rectify them.




















3.5. Upgrade / Transition

Requirements to support legacy and/or prior versions.

- Need to devise a means for orderly update as QI-Bench Core concepts come and go.




































3.6. Security

Applicable security compliance requirements.

- User certificate needs to distinguish between view-only, authorized to create Biomarker DB instances, authorized to edit existing instances, and authorized to curate QIBO.
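One way those certificate authorizations might be modeled is as ordered levels, where each level implies the ones below it. Whether the roles actually nest this way is an assumption of this sketch, as are the names.

```python
from enum import IntEnum

class Role(IntEnum):
    # Ordering is an assumption: each level is taken to include the
    # capabilities of the levels below it.
    VIEW_ONLY = 0
    CREATE_INSTANCES = 1   # may create Biomarker DB instances
    EDIT_INSTANCES = 2     # may edit existing instances
    CURATE_QIBO = 3        # may curate the QIBO itself

def authorized(user_role, required_role):
    """Check a user's certificate role against an action's requirement."""
    return user_role >= required_role
```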

































3.7. Requirements for Subsequent Iterations

Defined in the SUC but not yet staged for implementation.

- Integrate the Reference Data Set Manager and Batch Analysis Service with tools for publishing “data papers” drawn from this work: [SUC]
- The interactive Image Viewer system enhances standard scientific publishing by adding interactive visualization. [SUC]
- Using the interactive Image Viewer, authors have the ability to create 3-dimensional visualizations of their datasets, add 3-dimensional annotations and measurements, and make the datasets available to reviewers and readers. [SUC]
- The system is composed of two main components: the archiving system and the visualization software. [SUC]
- A customized version of the Reference Data Set Manager provides the data storage, delivers low-resolution datasets for pre-visualization, and in the background serves the full-resolution dataset. [SUC]
- The Reference Data Set Manager must support MeSH, the U.S. National Library of Medicine (NLM) controlled vocabulary used for indexing scientific publications. [SUC]
- The second component, the interactive Image Viewer visualization software, interacts directly with the Reference Data Set Manager in order to retrieve stored datasets. [SUC]
- Readers of an interactive Image Viewer-enabled manuscript can automatically launch the interactive Image Viewer software by clicking on a web link directly in the PDF. [SUC]
- Within ten seconds, a low-resolution dataset is loaded from the Reference Data Set Manager and can be interactively manipulated in 3D via the interactive Image Viewer software. [SUC]
- Make image analysis algorithms available as publicly accessible caGrid analytical services. [SUC]
- Upload raw image and other data sets into a repository where on-line journals may access them, with readers having a user interface that can run reference algorithms and/or give access to data for local consideration. [SUC]




- Perform the workflow in a highly consistent manner so that results can be reproduced, standardized, and eventually utilize common analysis tools. [SUC]
- Any of these activities may be performed on behalf of one or more sponsors by a trusted broker, so as to protect individual identities and/or perform on sequestered data. [SUC]



4. Deployment Model(s)

- Relevant and representative examples of deployment scenarios.

5. Implementation Considerations and Recommendations for Technical Realization

- Identification of topics requiring elaboration in candidate solutions. This may be application-specific, deployment-related, or non-functional.
- This specification in the real world (e.g., relationships to existing infrastructure, other deployed services, dependencies, etc.).

Candidate interfaces and standards:

- PACS-style connection, incl. Query-Retrieve
- IHE-compliant RIS interface for readers (e.g., worklists)
- IHE XDS & XDS-I Profile
- AIM template for output
- MWL & PPS style connections for acquisition (future)
- WG23/Sup 118-compliant harness API for algorithms
- QI-Bench themes for Web API
- Medical Imaging Network Transport (MINT) (or rather part of Formulate?)


6. Appendix: Reference Data Set Manager REST Services

Available at dev.bbmsc.com/midas/api/rest

<?xml version="1.0" encoding="UTF-8"?>
<rsp stat="ok">
  <data>midas.system.echo</data>
  <data>midas.system.listMethods</data>
  <data>midas.system.getCapabilities</data>
  <data>midas.example.reversestring</data>
  <data>midas.easyupload.list</data>
  <data>midas.check.user.agreement</data>
  <data>midas.convert.path.to.id</data>
  <data>midas.path.from.root</data>
  <data>midas.path.to.root</data>
  <data>midas.newresources.get</data>
  <data>midas.resources.search</data>
  <data>midas.resource.get</data>
  <data>midas.uuid.get</data>
  <data>midas.bitstream.add.mirror</data>
  <data>midas.bitstream.list</data>
  <data>midas.bitstream.locations</data>
  <data>midas.bitstream.keyfile</data>
  <data>midas.bitstream.count</data>
  <data>midas.bitstream.delete</data>
  <data>midas.bitstream.get</data>
  <data>midas.bitstream.by.hash</data>
  <data>midas.bitstream.download</data>
  <data>midas.item.keys</data>
  <data>midas.item.delete</data>
  <data>midas.item.download</data>
  <data>midas.item.resource.create</data>
  <data>midas.item.title.get</data>
  <data>midas.item.abstract.get</data>
  <data>midas.item.get</data>
  <data>midas.item.create</data>
  <data>midas.collection.delete</data>
  <data>midas.collection.create</data>
  <data>midas.collection.download</data>
  <data>midas.collection.get</data>
  <data>midas.community.delete</data>
  <data>midas.community.tree</data>
  <data>midas.community.list</data>
  <data>midas.community.get</data>
  <data>midas.community.create</data>
  <data>midas.upload.bitstream</data>
  <data>midas.upload.getoffset</data>
  <data>midas.upload.generatetoken</data>
  <data>midas.add.virtual.data</data>
  <data>midas.assetstore.set.default</data>
  <data>midas.assetstore.create</data>
  <data>midas.login</data>
  <data>midas.info</data>
</rsp>
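A response in this shape can be consumed with a standard XML parser. The sketch below parses an abbreviated copy of the listing above and extracts the advertised method names.

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the method-listing response shown above.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<rsp stat="ok">
  <data>midas.system.echo</data>
  <data>midas.system.listMethods</data>
  <data>midas.login</data>
</rsp>"""

root = ET.fromstring(sample)
assert root.get("stat") == "ok"          # the call succeeded
methods = [el.text for el in root.findall("data")]
```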