Project Portfolio for Robert Heuchert

Oct 30, 2013

Professional Profile

A Senior IT Architect (certified by IBM, The Open Group and the British Computer Society) with sixteen years of consulting and operational experience in a diverse range of enterprise, integration and technology architectures. Project experience includes: Service Oriented Architecture (SOA), Event Driven Architecture (EDA), Enterprise Architecture (EA), Enterprise Service Bus (ESB), Enterprise Application Integration (EAI), Web Services, high-performance network computing technologies, UNIX and Linux clusters, high-availability LAN/WAN/SAN networking systems, storage technologies, directory services and enterprise systems management solutions. The past ten years were spent at IBM in a client-facing role in solution areas such as application architecture (patent pending), business intelligence, customer relationship management (CRM), telemetry/telematics, enterprise applications and integration, data/metadata management and infrastructure design.

Project Portfolio Highlights


Telematics Solution for Medical Devices

Based out of St. Paul, Minnesota, with approximately 12,000 employees in 20 facilities, [Medical Device Manufacturer] realized worldwide sales of $3.3B USD in 2006 from more than 100 countries. Their stated mission is dedicated to making life better for cardiac, neurological and chronic pain patients worldwide through excellence in medical device technology and services. They execute on this mission through the design and manufacturing of a medical product portfolio with five major focus areas: cardiac rhythm management, atrial fibrillation, cardiac surgery, cardiology and neuromodulation.

In the spring of 2004 the IBM client team asked me to take over a troubled project in [Medical Device Manufacturer]'s Cardiac Rhythm Management Division (CRMD). [Medical Device Manufacturer] CRMD is tasked with product development in cardiac resynchronization therapy devices for heart failure; implantable cardioverter defibrillators to treat potentially lethal heart arrhythmias; and pacemakers, leads, introducers and device programmers for a variety of cardiac conditions. In the context of a long-standing relationship with IBM, [Medical Device Manufacturer] had asked for guidance in developing the next-generation informatics environment for their as-yet unreleased implantable cardioverter defibrillator (ICD) programmer.

The key objective of the solution is to provide a single access point where clinicians can review patient health care data. The system also needs to provide the tools necessary for the clinicians to conduct data viewing, collaboration and data analysis. The system's functional goals are:

- Securely upload patient data from various remote sources (implantable device data, other health data from [Medical Device Manufacturer] or 3rd party devices, data from EMRs, other 3rd party servers, etc.)
- Store patient data in an electronic form that can be quickly accessed
- Provide safe and secure storage of patient health data
- Provide a simple method for a clinician, physician, nurse or other authorized person to quickly view summary data, coupled with the capability to dig deep into the details as needed to analyse the data
- Create tools and methods that can conduct data processing and analysis within and across patient follow-up sessions (data uploads), and create alerts and reports that are viewable by clinicians
- Maintain data in separate patient records with authorized clinician access
- Provide the capability to export data to authorized external systems in user-selectable formats/content

Leading a multi-disciplinary team comprising both IBM and [Medical Device Manufacturer] resources, I developed the underlying requirements and overall architecture of the application. The solution I had to develop needed to have modest beginnings but be capable of evolving into an impressive enterprise application. The key objective communicated by [Medical Device Manufacturer] was providing a central data aggregation point where physicians can go and review patient health care data; hence the system needed to provide the tools necessary for the physicians to conduct data viewing. Based on the IBM Patterns for e-business, the "Exposed Serial Process" runtime pattern in a specialized "Application Integration" pattern provided the best-practice-based solution for integrating users, partners and devices in an SOA environment. This provided the basis for establishing processes, business rules and a business partner infrastructure with the following characteristics:

- An exposed Enterprise Service Bus (ESB) Gateway supporting the devices and service providers
- An internal ESB which supports the SOA infrastructure requirements for service location transparency and interoperability, encapsulated reusable business function and explicit implementation
- A Business Process Choreography (BPC) node which supports business processes and service provider/consumer choreography, as well as Business Activity Monitoring
- Web Service Consumers and Providers representing the service providers, medical devices, portal users and resource managers

The custom-developed application was also a service consumer, with the back-end systems acting as service providers. As a service consumer it connected to the service providers via a simple enterprise service bus. Due to the nature of the SOA approach, the consumer and provider roles could be reversed. I emphasized the importance of the Enterprise Service Bus in order to provide virtualization of the enterprise resources, allowing the business logic of the enterprise to be developed and managed independent of the infrastructure, network, and provisioning of those business services. Resources in the ESB are modelled as services that offer one or more business functions. The resulting internet browser-based solution presented data to clinicians in an environment that is easy to use.
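The service virtualization idea above can be sketched in a few lines: consumers address a service by name only, and the bus resolves the provider. This is an illustrative toy (the actual solution used IBM's ESB products, not custom Python), and every name in it is invented:

```python
# Toy sketch of ESB-style service virtualization: the consumer never
# knows where the provider lives, only the service name on the bus.
class ServiceBus:
    def __init__(self):
        self._providers = {}

    def register(self, service_name, handler):
        """A back-end system registers itself as a provider."""
        self._providers[service_name] = handler

    def invoke(self, service_name, payload):
        # Mediation point: routing, transformation and logging
        # would sit here in a real bus.
        return self._providers[service_name](payload)

bus = ServiceBus()
# A back-end system registers as a provider of a (hypothetical) service...
bus.register("patient.summary", lambda pid: {"patient": pid, "status": "ok"})
# ...and the application, as a consumer, invokes it by name alone.
result = bus.invoke("patient.summary", "P-1001")
```

Because both sides only see the bus, the same component can register as a provider for one service while consuming another, which is the role reversal noted above.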

Advanced parameter monitoring within the database will provide clinician-defined (or [Medical Device Manufacturer]-defined) alert flags for any data that exceed clinician-specified limits. Because the solution stores and manages patient data, it must provide security such that patient-specific identifying data is not compromised to any unauthorized person during collaboration and data analysis.

The project team successfully completed the project in the first quarter of 2007. After receiving FDA approval in April 2007, the solution was successfully unveiled and received rave reviews at the Heart Rhythm Society (HRS) conference (the leading industry conference) in May, and it positions [Medical Device Manufacturer] well against its competitors in the cardiac rhythm device marketplace.

Due to the success of the first release, [Medical Device Manufacturer] has retained IBM to continue software development under a contract valued at $10.7M USD. It continues the development of the solution into the next generation, with added features to support the new generation of transmitters, and it further enhances remote monitoring and alert functionality.


Enterprise Laboratory Solution with Clinical Systems

[Healthcare Clinic] is a charitable, not-for-profit organization based in Rochester, Minnesota. Its mission is to provide the best care to every patient every day through integrated clinical practice, education and research. Meeting this mandate required the succession of antiquated lab systems and a solution delivery that integrated the clinical practice, education and research between all their principal entities located in 3 states and a health system network of hospitals and clinics in 64 communities around the region.

I was selected by [Healthcare Clinic] and IBM to perform the IBM GBS solution architect role for the Department of Laboratory Medicine and Pathology (DLMP). The department encompasses multiple laboratory disciplines which span eight separate divisions. Given my expertise in healthcare and Laboratory Information Management Systems (LIMS), I was tasked with directing a combined IBM and [Healthcare Clinic] team of 77 people in defining the enterprise architecture strategy for DLMP, performing the solution design and leading the development activities. My architectural role was to provide the technical leadership and governance, which included the integration of 17 vendor applications using a Services Oriented Architecture. Working with the IBM Chief Architect, I was responsible for designing the overall solution which would position DLMP as a world leader in enterprise laboratory operations.

The laboratory and information management solution is encompassed in a broad-reaching strategy called the LIS Succession & Integration Project. This strategy replaces the current LIS system (Lab3), as well as the complexities of the services and the interactions of the systems that Lab3 supports:

- Comprised of over 50 disparate applications
- Tied together by a messaging and routing engine (Cloverleaf)
- Choreographed by a 20-year-old LIS application, Antrim Lab3 (now owned by Misys):
  - A KBase application on Alpha VMS
  - Based on a DSM/GT.M MUMPS "database"
  - Known performance and file limitation issues

My composite design employed multiple LIS, LIMS, data management, enterprise portal and sophisticated reporting technologies integrated into an SOA Enterprise Service Bus (ESB) framework, based on elements of traditional IBM approaches such as Enterprise Architecture (EA), Business Process Modelling (BPM), Rational Unified Process (RUP) and Object Oriented Application Design (OOAD), as well as innovative methods such as Service Oriented Modelling and Architecture (SOMA).

By adopting the SOA approach and employing an ESB fabric, the overall solution is flexible and adaptable. It can accommodate the introduction or removal of new LIMS, instrument interfaces or data management applications or technologies. The loose coupling amongst the application integration points facilitates multiple autonomous LIMS instances while maintaining consistency amongst these instances without the drawback of brittle point-to-point connections.

Further, maintenance and support are simplified, as commercial ISV solutions do not need to be altered to integrate into the laboratory ecosystem. By pushing data transformation, content-based routing, transaction management and service mediation down to the bus level, the ISV applications do not have to be altered and thus remain consistent with their supported code level. Unique organizational or application needs are addressed through custom development efforts focused on novel process choreography or creating services-based composite applications.

The result was the first instantiation of an SOA/ESB solution in support of laboratory operations which:

- Provided a world-class solution which meets the needs of the entire department
- Demonstrates SOA characteristics with ESB integration capabilities
- Is a product commercially available today with minimal customization and a current technology base for solution longevity
- Is based on an object-oriented programming foundation (Java, C#, Smalltalk)
- Offers extensibility via Web Services capabilities (e.g. XML, SOAP) or a standardized Application Programming Interface (API)
- Uses proven messaging infrastructures (e.g. HL7, MQ Series)
- Demonstrates interoperability and willingness to cooperate with industry peers through effective strategic alliances (e.g. LabWare, Waters NuGenesis & Northwest Analytical, SCC & IDX)
- Shares a common infrastructure (operating system, database, compute platform, storage)
- Is a best-of-breed solution with a solid pedigree

The project was successfully completed in November 2006 and was a model engagement for IBM that boasted $3M in services, $8M in hardware and $47.5M in software.


IBM Annotation Middleware Product Development

I was deeply involved at the inception of the project, from its start in June 2002 until October of the same year. In the beginning the project team was given only the crudest direction and requirements from the Life Sciences Executive. There was a desire for an "annotation solution" on the part of the senior staff, but there was no vision of what form this new offering would take or how it would be marketed. The project lead asked me to take the lead and get the project moving forward.

There are well-known methods for capturing and storing explicit knowledge as data, for example, in relational databases, documents, flat files, and various proprietary formats in binary files. Often, such data is analyzed by various parties (e.g., experts, technicians, managers, etc.), resulting in rich interpretive information, commonly referred to as tacit knowledge. However, such tacit knowledge is often only temporarily captured, for example, as cryptic notes in a lab notebook, discussions/conversations, presentations, instant messaging exchanges, e-mails and the like. Because this tacit knowledge is typically not captured in the application environment in which the related data is viewed and analyzed, it is often lost.

One approach to more permanently capture tacit knowledge is to create annotations containing descriptive information about data objects. Virtually any identifiable type of object may be annotated, such as a matrix of data (e.g., a spreadsheet or database table), a text document, or an image. Further, sub-portions of objects (sub-objects) may be annotated, such as a cell, row, or column in a database table or a section, paragraph, or word in a text document. An indexing scheme is typically used to map each annotation to the annotated data object or sub-object, based on identifying information, typically in the form of an index. The index should provide enough specificity to allow the indexing scheme to locate the annotated data object (or sub-object). Further, to be effective, the indexing scheme should work both ways: given an index, the indexing scheme must be able to locate the annotated data object and, given an object, the indexing scheme must be able to calculate the index for use in classification, comparison, and searching (e.g., to search for annotations for a given data object).
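The two-way indexing requirement can be illustrated with a small sketch. All class, field and file names here are invented for illustration; they do not reflect the actual InsightLink design:

```python
# Minimal sketch of a two-way annotation indexing scheme: an index is
# computed from an object's identifying information, and annotations
# are stored separately from the annotated data, keyed by that index.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Index:
    """Identifying information for an annotated object or sub-object."""
    object_id: str        # e.g. a document or table identifier
    sub_object: str = ""  # e.g. "row=3,col=PRICE" or "paragraph=2"

def index_for(object_id: str, sub_object: str = "") -> Index:
    """Given an object, calculate its index (the object-to-index direction)."""
    return Index(object_id, sub_object)

@dataclass
class AnnotationStore:
    # index -> list of annotation texts, kept apart from the data itself
    _by_index: dict = field(default_factory=dict)

    def annotate(self, index: Index, text: str) -> None:
        self._by_index.setdefault(index, []).append(text)

    def lookup(self, index: Index) -> list:
        """Given an index, locate its annotations (the index-to-object direction)."""
        return self._by_index.get(index, [])

store = AnnotationStore()
cell = index_for("results.xls", "row=3,col=PRICE")
store.annotate(cell, "Outlier - re-run the assay before publishing.")
```

Because the store is keyed only by the computed index, the annotated spreadsheet itself is never modified, which mirrors the separate-annotation-store design described below.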

Accordingly, there is a need for improved methods and systems for managing annotations made for a variety of different data objects. Preferably, the methods and systems will allow annotations to be created and accessed from within a variety of different types of applications used to view and analyze the annotated data objects, thus providing cross-platform tacit knowledge management.

The initial task for me was to survey what existed in the marketplace, research emerging standards, analyze raw information that was collected from customers in interviews and draw on my own field experience. During this period I performed the majority of my research, requirements definition and solution design activities. My primary deliverable was to provide a technical direction, an overall solution outline and basic functional requirements (i.e. function points) that were then used to author a project plan, identify required skills and recruit team members. After I was able to get the project off the ground I then served in an advisory capacity and ultimately participated in the first internal IBM pilot as a test user in March 2003.

In its final form the InsightLink annotation system allows annotations to be created, organized, and searched for a variety of different types of data objects manipulated by a variety of different applications. Applications may communicate with an annotation server to access annotations from an annotation store. The annotation store may be separate from the annotated data, allowing annotations on the data without modifying the annotated data. Plug-in components may provide access to the annotation server from within existing applications used to manipulate the annotated data. Accordingly, annotation functionality may be added to new applications via the addition of new plug-in components, without having to redesign the annotation server, thus saving development time and associated cost. A common interface, such as an annotation browser, may provide a central source for individually or simultaneously searching both annotations and the annotated data.

This architecture provides methods, systems, and articles of manufacture that may be used for universal (e.g., cross-platform) management of annotations made for a variety of different types of data objects manipulated (e.g., created, edited, and viewed) by a variety of different types of applications. It enables users collaborating on a project to create, view, and edit annotations from within the applications used to manipulate the annotated data objects, which may facilitate and encourage the capturing and sharing of tacit knowledge through annotations.

InsightLink became a solution in the IBM portfolio, and the project spawned four different patent applications. I was named as an inventor on the Universal Annotation Management System Patent Application (USPTO 20040260714, Patent Pending).

“IBM InsightLink [was] designed to capture the flow of ideas, annotations and insights on experiments into a database system that can be accessed and utilized throughout an entire research organization. By preserving valuable knowledge assets, organizations can often shorten research time, reduce redundant work, and enhance collaboration and decision-making.

IBM InsightLink can help transform a centuries-old paper-based research model into an efficient online process for capturing annotations such as a researcher's observations about unusual properties of a chemical compound; the rationale for aborting or continuing a research project; a physician's notes explaining a treatment decision; and other important latent knowledge that can benefit other researchers and accelerate scientific discovery.

This new productivity tool can be integrated into existing research applications and databases, such as visualization tools, chemical compound libraries, high-throughput screening devices, and electronic lab notebooks. It also can be accessed through a Web browser, which can deliver an aggregated view of both experimental data and associated annotations across multiple applications.”

IBM Press Release, Monday, August 25, 2003, 9:00 am ET


Data Architecture Solution for a Biotechnology Client

IBM and [Biotechnology Company] (since acquired), in conjunction with the [Biotechnology Research Organization], agreed to perform a pilot experiment in systems biology, whereby the [Biotechnology Company] Massively Parallel Signature Sequencing (MPSS) technology would be applied to produce a report relating the observed gene expression behaviour to the differential responses in macrophages. The MPSS technology would be used to quantitatively analyze the relevant genes and their interactions in representative samples, including genes that are expressed at only low levels in the cell.

The supporting technical objectives of the project were to:

- Build an MPSS database & DiscoveryLink wrapper for [Biotechnology Company] macrophage analysis
- Explore application of the IBM Life Sciences Framework to the MPSS architecture (enhance systems integration, data transformation and workflow)
- Examine using IBM DiscoveryLink to federate MPSS result sets to public/proprietary databases or repositories (e.g. Genbank, SRS etc.)
- Identify ISV software for upstream and downstream data analysis and processing (e.g. GeneSpring, Spotfire, SAS etc.)

These priorities were driven by key exploration areas of interest to [Biotechnology Company]:

- Statistical Analysis
- Data Formats and Data Standards
- Data Mining
- Data Integration

The fundamental technology at the foundation of this project was [Biotechnology Company]'s proprietary gene expression technology. MPSS is an open-ended platform that analyzes the level of expression of virtually all genes expressed in a sample by counting the number of individual mRNA molecules produced from each gene. There is no requirement that genes be identified and characterized prior to conducting an experiment. MPSS has a routine sensitivity of a few molecules of mRNA per cell, and the data sets are in a digital format that simplifies the management and analysis of the data. MPSS results are particularly useful for generating the type of complete data sets that will help to facilitate the development of relational databases for systems biology research.

The principle of MPSS as a gene expression tool is very simple. It works by simultaneously counting a large number of molecules of mRNA in a sample. Individual mRNAs are identified through the generation of a 17-base “signature sequence” at a unique site on the molecule, and, in a typical MPSS data set, over a million molecules are counted simultaneously. Bioinformatics tools are used to sort out how many molecules of mRNA from each gene are present in the sample. In the end, the level of expression for each gene in an MPSS data set is represented by the number of transcripts present in a million molecules counted (transcripts per million, tpm).
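The tpm normalization just described amounts to simple scaling of raw per-gene counts. The gene names and counts below are made up for illustration:

```python
# Scale raw per-gene molecule counts to transcripts per million (tpm):
# each gene's count is expressed relative to a million molecules counted.
def transcripts_per_million(counts: dict) -> dict:
    total = sum(counts.values())
    return {gene: count * 1_000_000 / total for gene, count in counts.items()}

# Invented example: 50,000 molecules counted across three genes.
raw_counts = {"geneA": 500, "geneB": 1500, "geneC": 48000}
tpm = transcripts_per_million(raw_counts)
# geneA: 500 / 50,000 scaled to a million counted molecules = 10,000 tpm
```

By construction the tpm values across all genes in a sample sum to one million, which is what makes data sets of different depths directly comparable.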

The challenges that this relatively “straightforward” technology introduced were several:

- [Biotechnology Company] lacked a consolidated view / repository of their result data; instead it was spread across several internal and external databases or flat file structures
- The sheer volume of data points in a single MPSS experiment is non-trivial (1.6 million discrete data values)
- Traditional visualization tools can handle only 16 data sets (1.5 million values each) before suffering (potential memory leaks)
- The MPSS technology is to be used extensively for generating data for time series analysis, specifically for the [Biotechnology Research Organization]
- The desire of the [Biotechnology Research Organization] team was to leverage all known MPSS data sets in order to draw biological insights from them (housekeeping genes, low-level gene expression, known significant genes etc.)
- The National Institutes of Health (NIH) mouse transcriptome project is on the horizon (100 samples minimum) and will definitely challenge the current tool set for analytics and data management
- Manual rather than automated interfaces exist for transforming/pipelining data amongst the MPSS Java/C++ application, LIMS and MPSS databases
- High barrier to entry for systems integration, data transformation and workflow
- Difficulty federating MPSS result sets to public/proprietary databases or repositories (e.g. Genbank, SRS etc.)
- Lack of ISV software for upstream and downstream data analysis and processing (e.g. GeneSpring, Spotfire, SAS etc.)

Based on the aforementioned challenges, the solution architecture should be focused on data management and integration. The fundamental question was whether this integration should be virtual (data federation) or physical (operational data store or data warehouse). My decision to pursue a hybrid solution combining the strengths of the two approaches was based on the following observations:

- Existing data structures were sub-optimal and were susceptible to performance and scalability problems; redesign of some of the existing data models was greatly needed
- The underlying result database needed to be easily portable and reproducible to customer sites
- Data integration with as-yet unknown and distinct data sources was inevitable
- The result data repository needed to be separate from the transactional data systems (e.g. Laboratory Information Management Systems or LIMS) so that reporting and query generation did not impact the performance of the supporting systems

The data sources for populating the data models were:

- MPSS Production (an intermediate result or staging database)
- LIMS (a transactional internal database)
- Annotation (an internal database to be migrated)
- Locus Link (an external public scientific database)
- Contact (an internal CRM database in the design phase)
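The hybrid virtual/physical approach can be sketched as follows: curated result data is physically consolidated for fast, repeatable access, while volatile external sources are looked up at query time. The table contents, gene names and the external-lookup stand-in below are all invented; the real system used DB2 DiscoveryLink for the federation side:

```python
# Hedged sketch of hybrid data integration: physical consolidation for
# the MPSS result data, virtual (federated) lookup for external sources.

# Physically consolidated result store (stands in for the warehouse side).
local_warehouse = {
    "sig_0001": {"gene": "geneA", "tpm": 120},
}

def query_external_annotation(gene):
    """Stand-in for a federated call to an external public database."""
    return {"geneA": "macrophage activation marker"}.get(gene, "unknown")

def report(signature_id):
    row = local_warehouse[signature_id]                  # fast local access
    annotation = query_external_annotation(row["gene"])  # federated lookup
    return {**row, "annotation": annotation}

combined = report("sig_0001")
```

The split keeps heavy analytical queries off the transactional systems while still letting reports pull in external annotations that would be impractical to copy and keep current locally.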

A fully operational system was deployed at [Biotechnology Company] and underwent rigorous user, system and integration testing by the [Biotechnology Company] Bioinformatics team. During this time I functioned in an advisory capacity, providing guidance and troubleshooting assistance to the IT specialists installing and configuring the solution. My deliverables also served as the basis for the test plans, and the final cut-over is awaiting the completion of the acceptance testing.