In Silico Liver


In Silico Liver

Owen Densmore
Administrator
Glen: could you say a bit about the In Silico Liver and your work with it?

     -- Owen



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: In Silico Liver

glen e. p. ropella-2
Thus spake Owen Densmore circa 09-09-19 10:47 AM:
> Glen: could you say a bit about the In Silico Liver and your work with it?

I wrote the ISL with Tony Hunt (and his colleagues at UCSF) as the
domain expert.  It was intended to be a first example of a model
developed using my parallax modeling method.  For these publications, we
call the method the FURM (Functional Unit Representation Method).  It
requires: a) >= 3 models all run in co-simulation, b) all models submit
to the same observables, comparable via a similarity measure, c) model
observables are discrete (or discretized continuous), and d) models are
designed for an extended lifecycle.
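The co-simulation requirements can be sketched in miniature. This is a toy illustration only (the model functions and the trivial agreement-counting similarity measure are invented for this sketch, not taken from the ISL code): all models step in lock-step, report the same discrete observable, and are compared via a shared similarity measure.

```python
# Toy sketch of the FURM co-simulation requirements (names invented,
# not from the ISL code): several models, one shared observable,
# one similarity measure applied to all of them.

def similarity(a, b, tol=0.1):
    """Fraction of time points where two observable traces agree within tol."""
    assert len(a) == len(b)
    hits = sum(1 for x, y in zip(a, b) if abs(x - y) <= tol)
    return hits / len(a)

def co_simulate(models, steps):
    """Step all models in lock-step, collecting the shared observable."""
    traces = {name: [] for name in models}
    for t in range(steps):
        for name, model in models.items():
            traces[name].append(model(t))
    return traces

# Three toy "models" of the same observable (made-up functions of time).
models = {
    "data":        lambda t: 0.5 * t,
    "reference":   lambda t: 0.5 * t + 0.05,
    "articulated": lambda t: 0.5 * t - 0.02,
}

traces = co_simulate(models, steps=10)
print(similarity(traces["data"], traces["reference"]))  # 1.0: within tol everywhere
```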

The 3 models we chose for the ISL are the data model (linearly
interpolated from wet-lab data of in situ perfused rat livers), the
reference model (an extended convection-dispersion, or ECD, model
composed of signals meant to represent physical and biological
compartments like catheters and the extra- and intra-cellular spaces,
converted back to the time domain with the inverse Laplace transform),
and an articulated (agent-based) model.
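The data model amounts to piecewise-linear interpolation through measured points. A minimal sketch of that idea, with entirely invented numbers standing in for wet-lab measurements:

```python
# Sketch of the "data model" idea: linear interpolation through
# wet-lab (time, value) measurements. The data points below are
# invented for illustration, not real perfused-liver data.

def interpolate(points, t):
    """Piecewise-linear interpolation through sorted (time, value) pairs,
    clamping to the endpoints outside the measured range."""
    points = sorted(points)
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

wetlab = [(0.0, 0.0), (1.0, 0.8), (2.0, 0.3), (4.0, 0.05)]
print(interpolate(wetlab, 0.5))  # 0.4, halfway between the first two points
```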

The ref and art models provide media for the two different types of
hypotheses: situational/phenomenal (data-centric) and mechanistic,
respectively.  The ECD was originally implemented and validated by
others using commercial off-the-shelf tools (opaque, which prevents deep
reproducibility).  So, our focus is on the articulated model, though we
did re-implement the ECD model with g++ to make it completely
transparent.

For the art model, depending on how you slice it, there are 9 layers: 6
levels (experimenter, model, trial/lobule, sinusoidal segment or SS,
cell, solute) and 3 aspects (liver output fraction, SS output of
individual molecules, and deep tracing, which can track every
model-relevant object).

Structurally, the art model consists of a number of Monte Carlo trials
executed by the experiment agent.  Each Monte Carlo trial represents a
lobule of the liver and consists of a directed graph of SSs.  Solute
molecules (e.g. a drug) flow from the portal vein through the SS graph
and out the central (hepatic) vein.

An SS consists of several concentric, cylindrical grids wrapped around a
core queue.  The core in the center represents laminar blood flow.  The
innermost grid represents a more viscous flow.  The next grid (ESpace)
contains the endothelial cells through which solute has to pass to reach
the space of Disse (DisseSpace).  There can be many grids wrapped
outside the ESpace.  The outermost grid is the DisseSpace, which indexes
the hepatocytes.  Solute wanders through these spaces until it
encounters a cell (endothelial or hepatocyte), at which point the cell
can take it in or not.  Inside the cells are binders that sequester and
release solute.  Binders inside hepatocytes _may_ metabolize the solute,
and the metabolic product is released either into bile or back into the
cell.
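A very loose toy sketch of the solute walk through the concentric spaces (this is not the ISL code; the space names echo the ones above, but the hop probabilities are invented and cell binding/metabolism is omitted entirely):

```python
# Toy random walk of a solute through one sinusoidal segment's
# concentric spaces, inside-out: core -> inner grid -> ESpace -> DisseSpace.
# Probabilities are invented; binders and metabolism are not modeled here.
import random

SPACES = ["core", "inner_grid", "ESpace", "DisseSpace"]

def step_solute(space_idx, rng):
    """Move one solute: mostly stay put, sometimes hop to an adjacent space."""
    move = rng.choice([-1, 0, 0, 1])  # biased toward staying where it is
    return min(max(space_idx + move, 0), len(SPACES) - 1)

def traverse(rng, max_steps=100):
    """Walk a solute until the core's laminar flow carries it out, or time out."""
    idx = 0
    for _ in range(max_steps):
        idx = step_solute(idx, rng)
        if idx == 0 and rng.random() < 0.2:  # exit via the central vein
            return True
    return False

rng = random.Random(42)
exits = sum(traverse(rng) for _ in range(1000))
print(f"{exits} of 1000 solutes exited the segment")
```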

The output fraction for each Monte Carlo trial (lobule) is tracked over
time and averaged together with those of the other trials to derive a
whole-liver output fraction.  That average is compared via the
similarity measure to those from the ref and data models, giving a
degree of similarity.
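In outline, that aggregation step looks like the following sketch (the traces and tolerance are invented, and the agreement-counting measure is just one simple choice of similarity measure, not necessarily the ISL's):

```python
# Sketch of the whole-liver output fraction: point-wise average of the
# per-trial (lobule) traces, scored against a reference trace.
# All numbers are invented for illustration.

def average_trials(trials):
    """Point-wise mean across Monte Carlo trial output-fraction traces."""
    return [sum(vals) / len(vals) for vals in zip(*trials)]

def similarity(a, b, tol=0.05):
    """Fraction of time points agreeing within tol (one simple choice)."""
    return sum(abs(x - y) <= tol for x, y in zip(a, b)) / len(a)

trials = [
    [0.0, 0.6, 0.3, 0.1],
    [0.0, 0.8, 0.2, 0.1],
    [0.0, 0.7, 0.4, 0.1],
]
liver = average_trials(trials)        # roughly [0.0, 0.7, 0.3, 0.1]
reference = [0.0, 0.72, 0.28, 0.12]
print(similarity(liver, reference))   # 1.0: every point within tol
```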

Let's see... did I leave anything out?  Oh yes, the agents are the
ExperAgent that executes the experiments, the models, the SSs, and the
cells.  Molecules are merely reactive objects.  The ISL distributes over
MPI in two ways: a) at the group level, where the parameter vector is
tweaked for each processor, which then runs several Monte Carlo trials,
or b) at the experiment level, where each Monte Carlo trial is farmed
out to a processor and the master node aggregates the data.
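The two partitioning schemes can be illustrated without actual MPI calls; this sketch just shows the work assignment (rank and trial numbers, and the round-robin farming, are illustrative assumptions, not the ISL's actual MPI code):

```python
# Sketch of the two distribution schemes, as plain work-assignment
# functions rather than real MPI calls. Ranks/round-robin are assumptions.

def group_level(param_vectors, trials_per_rank):
    """Scheme (a): each rank gets its own tweaked parameter vector and
    runs several Monte Carlo trials with it."""
    return {rank: {"params": pv, "trials": trials_per_rank}
            for rank, pv in enumerate(param_vectors)}

def experiment_level(n_trials, n_workers):
    """Scheme (b): a single parameter vector; trials farmed out round-robin
    to worker ranks 1..n_workers, with rank 0 (master) aggregating."""
    return {trial: 1 + trial % n_workers for trial in range(n_trials)}

print(group_level([[0.1], [0.2]], trials_per_rank=4))
print(experiment_level(n_trials=6, n_workers=3))  # trials 0..5 -> ranks 1,2,3,1,2,3
```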

Of course, all this is spelled out in excruciating detail in our papers
along with the rhetoric for why we think this degree of effort is
necessary for scientific M&S.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com

