Cancer is a complex, multiscale process in which genetic mutations occurring at the subcellular level manifest themselves as functional changes at the cellular and tissue scales. Since cancer naturally bridges different length and time scales, mathematical models that do the same are needed if we hope to adequately address cancer-specific questions. Cancer can also be quite distinct in different organs, as different cellular and microenvironmental variables are present at different organ sites. In this talk we discuss our recent efforts using mathematical modeling, computation, and experimentation to study cancer invasion, with a special emphasis on the role of the tumor microenvironment. We will discuss different models, each focused on one of the spatial scales (extracellular, cellular, or subcellular) while also incorporating relevant information from the other levels. We apply these models to investigate the impact that changes at these different spatial scales have on the resulting tumor morphology and evolution. We also discuss more organ-specific tumor models, such as models of the prostate, and the importance of considering the ecology of a given organ in relation to both homeostasis and progression.
We present new algorithms and software developed in collaboration with Methodist Hospital (Cardiology) to generate patient-specific dynamic models of the mitral valve by computer analysis of 3D echocardiographic image sequences. From the sparse tagging (by medical experts) of the mitral apparatus on two 3D views, we first generate a static deformable NURBS geometric model of the mitral annulus and leaflets. By 3D speckle-tracking techniques, we compute the "observable" dynamics of the mitral valve. We then solve a sophisticated 3D variational problem to generate the smooth time deformations of our NURBS mitral valve model that optimally match the extracted observable dynamics.
To evaluate the various ways control mechanisms can interact to respond to cardiovascular-respiratory stresses, models must include sufficient structure and complexity.
Two models will be discussed: a respiratory control system model responding to blood gases and altering ventilation and a cardiovascular control system model incorporating baroreflex control loops that alter resistance, unstressed volume, and heart rate.
Using these models, we will discuss issues of experimental design, data sources, and the application of sensitivity analysis, including methods of sensitivity identifiability. Such analysis can be used to develop a clearer picture of the parameter estimation problem and to guide decisions on which parameters are most likely identifiable given various potential sources of data. Information derived from this analysis may also suggest designs for experimental test protocols. With this information, model reduction can be considered and new experimental tests designed to allow for study of the control response patterns.
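As a concrete illustration of how such sensitivity analysis can flag identifiability problems before estimation is attempted, the following sketch computes a finite-difference sensitivity matrix for a generic two-compartment ODE model and inspects its singular values. The model, parameter values, and observation schedule are illustrative placeholders, not the control models of the talk.

```python
# Minimal sketch: finite-difference sensitivity matrix for an ODE model,
# with an SVD-based check of practical identifiability.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, p):
    """Toy two-compartment system standing in for a control model."""
    k1, k2, k3 = p
    return [-k1 * y[0] + k2 * y[1], k1 * y[0] - (k2 + k3) * y[1]]

def output(p, t_obs, y0=(1.0, 0.0)):
    sol = solve_ivp(model, (t_obs[0], t_obs[-1]), y0, args=(p,),
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0]  # assume only the first state is observed

def sensitivity_matrix(p, t_obs, h=1e-6):
    """S[i, j] = d(output_i)/d(p_j), by central differences."""
    p = np.asarray(p, dtype=float)
    S = np.empty((len(t_obs), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p); dp[j] = h * max(1.0, abs(p[j]))
        S[:, j] = (output(p + dp, t_obs) - output(p - dp, t_obs)) / (2 * dp[j])
    return S

t_obs = np.linspace(0.0, 10.0, 50)
S = sensitivity_matrix([1.0, 0.5, 0.3], t_obs)
sv = np.linalg.svd(S, compute_uv=False)
print("singular values:", sv)               # near-zero values flag
print("condition number:", sv[0] / sv[-1])  # poorly identifiable directions
```

Near-zero singular values indicate parameter combinations the data cannot constrain, which is exactly the information needed to decide which parameters to estimate and which experiments to add.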
In 1964 Platt (Science 146:347-353) illuminated a rational approach to scientific inquiry that integrates seamlessly with current investigations of the operation of complex biological systems. Yet in re-examining the 1964 essay in light of current trends, it is apparent that the groundbreaking approach has failed to become universal. Here it is argued that both the opportunity and the need to follow Platt's advice are now greater than ever. A revised method of strong inference for systems biology is presented and applied to analyze longstanding questions in cardiac energy metabolism. It is shown how this logical framework, combined with computation-based hypothesis testing, elucidates unresolved questions regarding how the energetic state of the heart is maintained in response to changes in the rate of ATP hydrolysis.
Differential equation models are often used to model biological systems. An important and difficult problem is how to estimate parameters and decide which model among possible models is the best. I will argue that Bayesian inference provides a self-consistent framework to do both tasks. In particular, Bayesian parameter estimation provides a natural measure of parameter sensitivity and Bayesian model comparison automatically evaluates models by rewarding fit to the data while penalizing the number of parameters. I will give examples of employing these approaches on ODE and PDE models.
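To make the fit-versus-complexity trade-off concrete, here is a minimal sketch of Bayesian model comparison via the Laplace approximation to the evidence, on a deliberately simple linear-in-parameters stand-in (in the talk's setting the ODE/PDE solutions would supply the predictions). The synthetic data, noise level, and flat prior are assumptions for illustration only.

```python
# Hedged sketch: Laplace-approximated log evidence rewards fit and
# penalizes extra parameters through the Hessian determinant.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
data = 2.0 * t + 0.1 * rng.standard_normal(t.size)  # synthetic observations
sigma = 0.1

def neg_log_post(theta, design):
    pred = design @ theta
    return 0.5 * np.sum(((data - pred) / sigma) ** 2)  # Gaussian likelihood,
                                                       # flat prior = constant

def log_evidence(design):
    """Laplace approximation: best fit plus an Occam penalty."""
    d = design.shape[1]
    res = minimize(neg_log_post, np.zeros(d), args=(design,))
    H = design.T @ design / sigma**2          # exact Hessian for linear model
    return (-res.fun + 0.5 * d * np.log(2 * np.pi)
            - 0.5 * np.linalg.slogdet(H)[1])  # penalizes extra parameters

X1 = np.column_stack([t])            # model 1: y = a*t
X2 = np.column_stack([t, t**2])      # model 2: y = a*t + b*t^2
print("log evidence, linear:   ", log_evidence(X1))
print("log evidence, quadratic:", log_evidence(X2))
```

With data generated by the linear model, the quadratic model fits no better but pays the extra-parameter penalty, so the linear model wins the comparison automatically.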
Reconstructing the swimming lamprey from neurons to vortices: an interdisciplinary group of us has been "putting the lamprey back together again" using full-blown three-dimensional Navier-Stokes fluid dynamics. We begin with the behaving, freely swimming lamprey and conclude with a computational fluid dynamics model in three dimensions that takes neurons, muscle, and environment into account. Aspects of most of these will be presented.
In line with architectural advances in supercomputing, science and engineering have each been posing more and more complex problems defined on complex geometric physical spaces. These physical spaces are themselves defined over vast ranges of length scales. In order to solve problems whose length scales vary substantially there are two possible approaches. Either discretise down to the smallest scale, with the possibility of producing such large data sets and numbers of equations that the memory requirements become too large for the machine, or divide the problem into a set of appropriate length scales and map these discretised sub-domains onto appropriate machine architectures. The definition of "appropriate" here is, at present, determined on a case-by-case basis.
A significant number of problems exhibit a large range of physical scales, but none more prominently in the 21st century than those within the biological sciences. In the major arterial networks, the blood flow dynamic scales range from the order of 1 mm (cerebral vessels) up to 25 mm (ascending aorta). Downstream of any major vessel lies a substantial network of arteries, arterioles, and capillaries whose characteristic length scales reach the order of 10-20 microns. Within the walls of these cylindrical vessels lie ion channels consisting of proteins (100 nanometers and smaller) folded in such a way as to allow only certain molecules through the membrane. One can now, of course, ask why all these scales should be integrated into a single model.
To investigate the way in which the brain responds to variations in pressure and yet maintains a virtually constant supply of blood to the tissue, numerical models must represent not only the vascular tree but also the dynamics of how the small arteries constrict and dilate. Simulating this phenomenon as a "lumped" connection of arteries is insufficient, since different parts of the arterial tree respond differently. Thus we have a range of scales from the major arteries down to the arteriolar bed. The combination of a 3D model derived from MR data, coupled with an autoregulation model and a fully populated arterial tree able to regulate dynamically, remains a relatively unexplored field. This talk will outline the reasons for investigating multiple scales and their particular constraints, with special reference to the autoregulation of blood in the cerebral vasculature, and will outline a possible solution.
We study the structure and function of an air velocity sensor in crickets called the cercal system. It consists of hundreds of thread-like hairs distributed along two elongated appendages called cerci. This sensory system serves as a model for basic neuroscience research and for bio-inspired engineering applications in air velocity measurement.
We present a model of the response of the cercal system in simple sinusoidal flows, describe our numerical algorithm, and present an approach for predicting the hair response to more complex stimuli using Fourier analysis in combination with our mathematical model.
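Assuming approximately linear hair dynamics, the superposition idea can be sketched as follows: decompose the stimulus into Fourier modes, weight each mode by the frequency response obtained from the sinusoidal-flow model, and resynthesize. The second-order resonance used for H(f) below is a made-up placeholder, not the talk's actual model.

```python
# Minimal sketch of Fourier synthesis for predicting the response of a
# (near-)linear sensor to an arbitrary stimulus.
import numpy as np

def freq_response(f, f0=150.0, zeta=0.3):
    """Placeholder second-order resonance; the real H(f) would come from
    the sinusoidal-flow model."""
    s = 2j * np.pi * f
    w0 = 2 * np.pi * f0
    return w0**2 / (s**2 + 2 * zeta * w0 * s + w0**2)

def predict_response(stimulus, dt):
    """Predict the hair response to a complex stimulus by superposition."""
    F = np.fft.rfft(stimulus)                 # decompose into Fourier modes
    f = np.fft.rfftfreq(len(stimulus), dt)
    return np.fft.irfft(F * freq_response(f), n=len(stimulus))

dt = 1e-4
t = np.arange(0.0, 0.2, dt)
stimulus = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*220*t)  # complex stimulus
response = predict_response(stimulus, dt)
```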
We discuss several numerical issues related to the modeling of cerebral blood flow. In particular, calibration and validation of hemodynamics models of the Circle of Willis will be considered. We also analyze several types of boundary conditions for this type of problem.
Joint work with Will Cousins and Mette Olufsen.
Achieving numerically resolved simulation results (i.e., simulations in which discretization error is dominated by modeling error) often requires extremely high spatial resolution, making uniformly fine spatial grids impractical. In this talk, I will describe ongoing work on developing adaptive numerical methods for biological fluid dynamics and cardiac electrophysiology. Although the equations which describe biofluid dynamics and electrophysiology are different, similar adaptive discretization methods may be employed in both areas.
In the first part of the talk, I will describe and compare two different adaptive versions of the immersed boundary method. The immersed boundary method is a numerical approach to problems of fluid-structure interaction in which an elastic structure is immersed in a viscous incompressible fluid. Results from aquatic locomotion and cardiac fluid dynamics will be presented. I will also describe the construction of "multi-scale" immersed boundary models, in which a "detailed" (PDE) fluid-structure interaction model (e.g., a model of the aortic heart valve) is linked with a "reduced" (ODE) flow model (e.g., a model of the systemic arterial tree).
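As a sketch of the "reduced" side of such a coupling, the following implements a three-element Windkessel ODE that takes the outflow Q(t) computed by a detailed model and returns the pressure imposed as its downstream boundary condition. The parameter values, units, and the forward-Euler update are illustrative assumptions, not those of the models in the talk.

```python
# Hedged sketch: three-element Windkessel as a reduced (ODE) downstream
# model coupled to a detailed (PDE) fluid-structure interaction model.
import numpy as np

class Windkessel3:
    def __init__(self, Rp=0.9, Rc=0.05, C=1.5, p0=80.0):
        self.Rp, self.Rc, self.C = Rp, Rc, C  # peripheral R, char. R, compliance
        self.p = p0                           # pressure across C (mmHg)

    def step(self, Q, dt):
        """Advance dp/dt = (Q - p/Rp)/C by forward Euler; return outlet pressure."""
        self.p += dt * (Q - self.p / self.Rp) / self.C
        return self.p + self.Rc * Q           # pressure seen by the PDE model

wk = Windkessel3()
dt = 1e-3
for n in range(1000):
    t = n * dt
    Q = max(0.0, 400.0 * np.sin(2 * np.pi * t))  # stand-in for the FSI outflow
    p_outlet = wk.step(Q, dt)                    # fed back as boundary condition
```

In an actual coupled simulation the detailed model would supply Q at each time step and receive p_outlet in return; the sketch only shows the interface.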
In the second part of the talk, I will describe how the adaptive discretizations introduced in the context of the immersed boundary method may be applied to the solution of the bidomain equations which describe the flow of current within cardiac tissue. I will present three-dimensional tissue-scale simulations of cardiac electrophysiology which include dynamically-placed regions of cell-scale spatial resolution which track action potential wavefronts within the tissue.
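For reference, a standard form of the bidomain equations as commonly written in the literature (the talk's precise formulation may differ) is

```latex
\chi \left( C_m \frac{\partial v}{\partial t} + I_{\mathrm{ion}}(v, \mathbf{w}) \right)
    = \nabla \cdot (\sigma_i \nabla v) + \nabla \cdot (\sigma_i \nabla u_e),
\qquad
\nabla \cdot \left( (\sigma_i + \sigma_e) \nabla u_e \right)
    + \nabla \cdot (\sigma_i \nabla v) = 0,
```

where v is the transmembrane potential, u_e the extracellular potential, sigma_i and sigma_e the intra- and extracellular conductivity tensors, chi the membrane surface-to-volume ratio, C_m the membrane capacitance, and I_ion the ionic current supplied by a cell-scale membrane model with state variables w. The stiff cell-scale kinetics entering through I_ion are one reason action potential wavefronts demand locally fine resolution.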
The fluid-structure interaction between blood flow and cardiovascular tissue plays a crucial role at many different levels in the functioning of the cardiovascular system. Understanding this fluid-structure interaction, the wave propagation that it causes in the arterial walls, local hemodynamics and wall shear stress is important in understanding the mechanisms leading to various complications in the cardiovascular function. At the same time, the study of fluid-structure interactions involves very challenging mathematical problems which call for sophisticated solution techniques.
In this talk we will present novel mathematical and computational techniques to deal with the fluid-structure interaction between blood flow and cardiovascular tissue in an accurate and efficient way. Mathematical models, computations and results will be discussed.
The cardiovascular system is designed to deliver oxygen, fuel, and other substances to organs and tissues and to remove carbon dioxide and other waste products. The timing, dimensions, and mechanical properties of the cardiovascular systems of mammals are tuned to maximize transport efficiency and to minimize cardiac work. Under healthy conditions, tuning is maintained both at rest, when cardiac output is low, and at maximal exertion, when cardiac output can increase 4-5 fold.

The shape and magnitude of blood pressure waveforms are similar throughout the body, but the waveforms of blood flow and velocity can be quite different as a function of regional distal vascular impedance. Flow to some organs (brain and kidney) is nearly constant, while flow to other organs (liver, gut, heart, skin, and muscles) is highly variable depending on demand. Blood flow to organs and other tissues is under local control, as are the mechanical properties of the local blood vessels. After meals, flow to the liver and gut is increased, and during exercise flow to the muscles and coronary flow to the heart are increased. Flow to most organs and regions occurs primarily during systole, when blood pressure is highest, but coronary flow is low during systole, when pressure is being generated by the cardiac muscle, and high during diastole, when the heart is relaxed. Some believe that the vascular system is optimized to maintain a high diastolic pressure primarily to augment coronary perfusion.

There are scaling laws that determine optimal values for parameters such as rates, dimensions, volumes, pressures, flows, velocities, and life span, of the form Y = a*BW**b, where Y is the parameter, BW is body weight, and a and b are constants. Scaling to the power (b) of BW ranges from: 0 (independent of BW) for blood pressure and velocities; 1/4 for heart period, vessel length, and life span; 3/8 for vessel diameter; 3/4 for vessel area, cardiac output, and blood flow; to 1 (proportional to BW) for heart weight, stroke volume, and blood volume. Despite large differences in body weights, the waveforms and magnitudes of pressure, blood velocity, and diameter pulsations in arteries supplying the organs are similar in all mammals when normalized for heart rate, implying that the time constants of the various parts of the cardiovascular system are scaled to the period of the heart. Thus, the scaling laws allow us to model human cardiovascular conditions and diseases in animals as small as mice and to translate many of the measurements made in mice to humans.
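The stated exponents make cross-species translation a one-line computation: applying Y = a*BW**b in ratio form, the constant a cancels. The sketch below uses the exponents from the abstract; the example body weights and the mouse heart period are illustrative values.

```python
# Allometric scaling Y = a * BW**b, applied in ratio form so the
# species-independent constant a cancels. Exponents as stated above.
EXPONENTS = {
    "blood_pressure": 0.0, "velocity": 0.0,
    "heart_period": 0.25, "vessel_length": 0.25, "life_span": 0.25,
    "vessel_diameter": 0.375,
    "vessel_area": 0.75, "cardiac_output": 0.75, "blood_flow": 0.75,
    "heart_weight": 1.0, "stroke_volume": 1.0, "blood_volume": 1.0,
}

def scale(value, quantity, bw_from, bw_to):
    """Translate 'value' measured at body weight bw_from to body weight bw_to."""
    b = EXPONENTS[quantity]
    return value * (bw_to / bw_from) ** b

# e.g., a 0.1 s heart period in a 0.025 kg mouse predicts, for a 70 kg human:
print(scale(0.1, "heart_period", 0.025, 70.0))  # ~0.73 s, near the observed ~0.8 s
```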
The human genome project and other large-scale biological research endeavors have made apparent the need for novel computational strategies to analyze and interpret the vast amounts of biological data resulting from modern high-throughput experiments in biology. A similar need for data integration has emerged at the system-physiology end of the physiome: in the clinical environment, increasing amounts of health care data must be screened and interpreted in order to extract medically relevant and actionable information. In the clinical setting, however, the computational paradigms (and resources) to do so are currently lacking for the most part.
The talk will illustrate our vision of how computational models of organ systems can be harnessed to derive physiologically meaningful information from available signals in data-rich clinical settings like intensive care, peri-operative care, or emergency care environments. In particular, I will describe methods to estimate important cardiovascular and cerebrovascular variables from readily-available time series of heart rate, arterial blood pressure, and cerebral blood flow.
It is easy to construct models with nonlinearly dependent parameters, parameters to which the model is insensitive, or redundant parameters. When one uses conventional nonlinear least squares methods to solve these problems, the iteration can perform poorly. A common remedy for this is a combination of the Levenberg-Marquardt method and a truncated singular value decomposition. We will show how this approach is affected by ill-conditioning and errors in the evaluations of the residual and Jacobian. We show how subset selection can be applied to the Jacobian to diagnose problems of this type and improve the quality of the results. These results are motivated by applications to cardiovascular modeling.
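One common subset-selection recipe (a sketch under assumptions, not necessarily the exact procedure of the talk) estimates the numerical rank of the Jacobian from its singular values and then uses column-pivoted QR to pick that many well-conditioned parameters:

```python
# Hedged sketch: SVD to estimate numerical rank, then column-pivoted QR
# to select an identifiable parameter subset. The random Jacobian with a
# nearly redundant column is a stand-in for a real model Jacobian.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
J = rng.standard_normal((100, 5))
J[:, 4] = J[:, 0] + 1e-6 * rng.standard_normal(100)  # near-redundant parameter

sv = np.linalg.svd(J, compute_uv=False)
rank = int(np.sum(sv > 1e-3 * sv[0]))        # numerical rank w.r.t. a tolerance

_, _, piv = qr(J, pivoting=True, mode='economic')
identifiable = piv[:rank]                    # estimate only these parameters
print("numerical rank:", rank, "-> estimate parameters", sorted(identifiable))
```

Fixing the remaining parameters at nominal values and iterating only over the selected subset is what restores good behavior in the Levenberg-Marquardt iteration.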
One feature of systems biology is the interdisciplinary study of organisms viewed as interacting networks of their components. In the pursuit of deciphering biological function, parameter identification finds applications ranging from the reconstruction of networks, the detailed analysis of sub-circuits to inferring rate dependencies on species concentrations. We address the - sometimes overlooked - instability of parameter identification and discuss regularization of the problem also by sparsity enforcing techniques. Furthermore, we present inverse bifurcation analysis as a tool for engineering the qualitative behaviour of biological systems.
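As a small illustration of sparsity-enforcing regularization on a linearized identification problem, the following implements ISTA (iterative soft thresholding) for an l1-penalized least squares problem; the random operator, sparse ground truth, and penalty weight are placeholders for a real network-reconstruction setting.

```python
# Minimal sketch: ISTA for min ||A x - y||^2 / 2 + lam * ||x||_1,
# a standard sparsity-enforcing regularization of an unstable inverse problem.
import numpy as np

def ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]  # sparse truth
y = A @ x_true + 0.05 * rng.standard_normal(60)
print(np.round(ista(A, y, lam=0.5), 2))    # recovers the sparse support
```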
As a very brief abstract, I plan to discuss classic and novel approaches to the assessment of myocardial and valve motion, and the application of Doppler and phase-contrast techniques for hemodynamic assessment of pressure and flow. I will review current and developing uses of 2D echo, 3D echo, and gated cardiac MRI tools, and will introduce novel 3D applications for the assessment of mitral annular motion, leaflet tracking, flow-boundary interactions, and valve stress.
Parameter estimation in physiology is an important and challenging problem, and different areas of physiology need different approaches. Some areas, such as the cardiovascular system and its corresponding control mechanisms, encompass several mechanisms serving largely the same purpose but, under normal circumstances, allow only a few relevant observables. Hence, if observables of the individual collaborating mechanisms do not exist, the system is not practically identifiable and parameter estimation is a generic problem. Strategies therefore have to be developed to circumvent such problems, e.g. sequential modeling, incorporation of traditional or generalized sensitivity analysis, and subset selection analysis (or group sensitivity analysis) using singular value decomposition. Another area of pathophysiology is disorder in the endocrine system, e.g. the HPA axis, which is implicated in depression and other mental disorders. Here data are corrupted by noise; furthermore, a traditional maximum-likelihood approach, resulting in a weighted least squares problem, runs into difficulties due to the complexity of the optimization landscape, with its many local minima. The qualitative dynamics of the system may thus be used to rule out candidates and to choose the structure of the model. In addition, one may use collocation methods, e.g. Functional Differential Analysis, in the subsequent parameter estimation procedure instead of more traditional approaches.
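The collocation idea can be sketched in a few lines on a placeholder linear decay model: smooth the noisy observations with a spline, differentiate the smooth, and fit the parameters by matching the model right-hand side to the estimated derivative, avoiding repeated ODE solves and many of the local minima of the full least squares landscape.

```python
# Hedged sketch of a collocation-style fit: spline smoothing, derivative
# estimation, then a linear least squares fit of the rate constant.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 60)
y_obs = 2.0 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)

spl = UnivariateSpline(t, y_obs, s=0.1)    # smoothing spline through the data
y_hat = spl(t)
dy_hat = spl.derivative()(t)

# Model dy/dt = -k*y: fit k by least squares on the collocation residual.
k = -np.sum(dy_hat * y_hat) / np.sum(y_hat**2)
print("estimated k:", k)                   # close to the true value 0.7
```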
Another, equally important, discussion concerns the purpose of the parameter estimation, i.e. the purpose of the mathematical modeling. The purpose may be to gain insight into and scientific understanding of the physiological system under consideration; to gain sufficient understanding of the system for industrial development, e.g. of target-specific pharmaceutical drugs; or to use the model in the clinic for diagnosis and decision support in (real-time) treatment of patients. For the latter, three cases are illustrative: syncope (cardiovascular physiology), depression (endocrine physiology), and the onset of diabetes (autoimmune inflammation).
In the talk, advantages and drawbacks of different approaches to parameter estimation will be discussed and illustrated for various pathologies.
The process of cancer progression is one of the most investigated phenomena in both the experimental and computational sciences. One of the main challenges in modeling tumor development is translating the plethora of experimental data acquired on the cellular and subcellular levels into the parameters of computational models that are required to handle tumor growth at the tissue and organ scales.
I will present an integrated approach to investigating the process of carcinogenesis, i.e. the development of epithelial tumors, that combines laboratory experiments, image processing, and bio-mechanical modeling of simple and stratified epithelia, such as the mammary gland or the skin epidermis, chosen for their unique properties of frequent cell turnover and a finely defined topology. Both the computational challenges and the achievements of such multiscale, multidisciplinary modeling will be discussed.
Advances in medical monitoring and imaging technology have significantly increased the volume of quantitative data available in critical care clinical settings, yet improvements in patient outcomes have not kept pace with these developments.
A possible avenue for translating available information resources into therapeutic advances comes from the application of mathematical models of experimentally elucidated physiological mechanisms to support clinical decision-making.
I will present results of joint work with Sven Zenker (U. Bonn) and Gilles Clermont (U. Pittsburgh Medical Center), showing that computation of the full Bayesian posterior distribution of states and parameters for an ordinary differential equation cardiovascular system model, conditional on clinical observations, can yield multimodal distributions, the modes of which correspond to regions in parameter/state space that can be identified with clinical diagnoses.
The computation of a probability distribution on parameter/state space offers a valuable alternative to direct maximum likelihood estimation that can potentially be harnessed to predict patient trajectories and to design patient-specific diagnostic or therapeutic interventions. In translating this approach to the bedside, however, significant challenges abound.
I will discuss some of these challenges, which include the ill-posedness of the inverse problem of mapping distributions on observation space to distributions on parameter/state space as well as the need for algorithms for effective visualization of inference results, for efficient sampling techniques to allow the identification of modes within probability distributions, and for computational methods to allow real time assimilation of high-frequency data into the inference procedure.
Numerical simulation of the electrical potential in the heart is still a challenging problem. On one hand, the equations commonly used, the so-called bidomain model, have mathematical features that make their numerical solution rather expensive. On the other hand, reliability of the results demands fine meshes in the space and time variables. Numerical effectiveness is also required in view of the coupled solution of the electrical, structural, and fluid dynamics problems. In the last 15 years many advances have been made both in the mathematical analysis and in the development of numerical methods (see e.g. [1, 7, 6]). In this talk, we will discuss some methods recently proposed in collaboration with medical doctors at the School of Medicine of Emory University.
We will also briefly consider the problem of moving domains, tracking the motion from images.
Appropriate model reduction is essential to the effective and insightful use of integrative dynamic models. My talk will aim to substantiate this perhaps counter-intuitive claim, using illustrations from biomedicine, biology, and power systems (the latter allowing some consideration of how engineered systems compare to natural systems). Of particular interest are computational approaches that reveal structural features of the dynamics of a large integrative model, features that can be exploited to obtain interpretable reduced models.