Implementation of a Global NONMEM Modeling Environment - ACoP


Lisa M. O’Brien* (1), Ron J. Keizer (2), Jonathan Klinginsmith (1), Charles Ratekin (1), Michael A. Heathman (1)

(1) Eli Lilly and Company, Indianapolis, IN, USA; (2) Pirana Software & Consulting BV, the Netherlands


Lilly is a global corporation with research facilities in many different parts of the world. Within Lilly, there is a large NONMEM user community spread across several of these facilities in North America, Europe, and Asia. A distributed NONMEM computing environment had previously been developed within Lilly to support these users, and had been in use for approximately twenty years. While this UNIX-based, command-line system had many benefits for experienced users, a more user-friendly solution was desired.

System development was undertaken with the following goals:

- enable team members with varying technical backgrounds to perform modeling and simulation tasks within a unified framework
- take full advantage of existing infrastructure, including the distributed Linux computing environment and fully automated parallel NONMEM execution
- leverage available open-source tools for automation of common analysis tasks
- reduce the amount of required training for new users, by providing a logical, user-friendly graphical interface
- provide command-line access for experienced users who are proficient in using such environments
- automate the generation of report-ready figures and tables
- improve system performance for users outside of the United States (OUS)

A group of PK/PD scientists was convened to evaluate available software, including both open-source and commercial solutions. Based on group consensus, PsN [2,3] and Pirana [4] were selected as the foundation of the new Lilly NONMEM system. PsN would provide an industry-standard automation tool for NONMEM analysis, while Pirana would provide a user-friendly interface with further automation capabilities.

Several solutions for hardware infrastructure were evaluated, including: Windows desktop deployment, Windows virtual desktop infrastructure, and a Linux server using NoMachine's NX Enterprise Server and Client software for remote desktop access. The Linux server implementation was chosen due to ease of support, superior performance, and the availability of UNIX command-line access for experienced users.

Linux servers were installed in Lilly’s Indianapolis, Basingstoke and Singapore research facilities to provide better network performance for OUS users. The servers were integrated with the existing computational infrastructure, using Sun Grid Engine (SGE) [5] for batch execution of NONMEM and PsN. A common file system for these servers was placed in the Indianapolis facility, with local scratch for the Erlwood and Singapore servers to facilitate local access.
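To illustrate how PsN hands runs to a grid scheduler, the sketch below builds the kind of command line such a setup submits. The `execute` tool and its `-run_on_sge` and `-nodes` options are standard PsN; the wrapper function, model file name, and default node count are hypothetical and not part of the Lilly system described in this abstract (a real parallel run would also need a NONMEM parafile).

```shell
# Hypothetical helper: compose a PsN 'execute' invocation that runs a
# NONMEM model as an SGE batch job with parallel NONMEM workers.
psn_sge_cmd() {
  model="$1"
  nodes="${2:-8}"   # assumed default worker count, not from the abstract
  # -run_on_sge delegates scheduling to Sun Grid Engine;
  # -nodes requests the number of parallel NONMEM processes.
  echo "execute ${model} -run_on_sge -nodes=${nodes}"
}

# Show the command that would be submitted for a 4-node run:
psn_sge_cmd run1.mod 4
```

Wrapping the submission this way keeps the SGE-specific flags in one place, so users on the graphical interface and command-line users submit identical jobs.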

Implementation of a user-friendly interface to the existing NONMEM system will significantly increase overall throughput and efficiency in performing PK/PD modeling tasks. The interface will make population analysis more readily available to scientists cross-functionally, thereby allowing PK/PD modeling and trial simulation to be applied to a broader range of drug development programs.


[1] Beal S, Sheiner LB, Boeckmann A, Bauer RJ. NONMEM User's Guides (1989-2009). Icon Development Solutions, Ellicott City, MD, USA.

[2] Lindbom L, Pihlgren P, Jonsson EN. PsN-Toolkit - a collection of computer intensive statistical methods for non-linear mixed effect modeling using NONMEM. Comput Methods Programs Biomed. 2005 Sep;79(3):241-57.

[3] Lindbom L, Ribbing J, Jonsson EN. Perl-speaks-NONMEM (PsN) - a Perl module for NONMEM related programming. Comput Methods Programs Biomed. 2004 Aug;75(2):85-94.

[4] Keizer RJ et al. Piraña and PCluster: a modeling environment and cluster infrastructure for NONMEM. Comput Methods Programs Biomed. 2011 Jan;101(1):72-9.

[5] Gentzsch W. "Sun Grid Engine: Towards Creating a Compute Power Grid". 1st International Symposium on Cluster Computing and the Grid, 2001.