
Bayesian Causal Inference Workshop

June 2 - June 4, 2019
8:30 AM - 8:00 PM
MBI Auditorium, Jennings Hall 355


Causality lies at the heart of many scientific research endeavors, including Statistics, Biostatistics, Epidemiology, Economics, Computer Science, Data Science, Sociology, and Political Science. In recent years, the use of Bayesian methods in causal inference has drawn increasing attention in both randomized trials and observational studies. Bayesian ideas have spread across many topics in causal inference, from adaptive trial design to missing-data frameworks and graphical models. This Bayesian Causal Inference Workshop intends to bring together interdisciplinary thought leaders and researchers who are interested in developing and applying Bayesian methodology to infer causal relationships from experimental and non-experimental data. We also encourage junior researchers to participate in the discussion and develop their professional networks.

Organizers

Fan Li
Department of Statistical Science
Duke University
fli@stat.duke.edu

Bo Lu
Division of Biostatistics
The Ohio State University
lu.232@osu.edu

Steven MacEachern
Department of Statistics
The Ohio State University
maceachern.1@osu.edu

Peter Mueller
Department of Mathematics
University of Texas at Austin
pmueller@math.utexas.edu

Dylan Small
Statistics Department
The Wharton School, University of Pennsylvania
dsmall@wharton.upenn.edu

Xinyi Xu
Department of Statistics
The Ohio State University
xu.214@osu.edu

Tutorials

Tutorials will take place in 355 Jennings Hall on Sunday, June 2, starting at 1:00 PM.

Tutorial 1: Bayesian Causal Inference: A Review and New Perspectives

Watch Video


Fan Li, Ph.D.
Associate Professor of Statistical Science 
Duke University

Tutorial 2: Nonparametric Bayesian Data Analysis for Causal Inference

Watch Video


Steve MacEachern, Ph.D.
Professor of Statistics, Department Chair
The Ohio State University


Peter Mueller, Ph.D.
Chair, Department of Statistics and Data Science
University of Texas at Austin

Schedule

Sunday, June 2

01:30 PM - 03:00 PM
Fan Li - Tutorial 1: Bayesian Causal Inference: A Review and New Perspectives

03:30 PM - 05:30 PM
Steve MacEachern and Peter Mueller - Tutorial 2: Nonparametric Bayesian Data Analysis for Causal Inference

Monday, June 3

08:00 AM - 08:30 AM
Working Breakfast

08:30 AM - 10:10 AM
Session 1:
Jason Roy - A Bayesian nonparametric approach to structural mean models
Qingzhao Yu - A Bayesian sequential design with adaptive randomization
Yanxun Xu - A Bayesian Nonparametric Approach for Evaluating the Effect of Treatment with Semi-Competing Risks

10:30 AM - 12:10 PM
Session 2:
Jennifer Hill - Partial identification of causal effects in grouped data with unobserved confounding
Joseph Hogan - Using Electronic Health Records Data for Predictive and Causal Inference About the HIV Care Cascade
Patrick Schnell - Mitigating bias from unobserved spatial confounders using mixed effects models

02:00 PM - 03:40 PM
Session 3:
Juhee Lee - Optimizing Natural Killer Cell Doses for Heterogeneous Cancer Patients Based on Multiple Event Times
Thomas Murray - A Bayesian Imputation Approach to Optimizing Dynamic Treatment Regimes
Andrew Chapple - Subgroup-specific dose finding in phase I clinical trials based on time to toxicity allowing adaptive subgroup combination

04:00 PM - 05:40 PM
Session 4:
Siddhartha Chib - Moment-Based Semiparametric Bayesian Causal Inference: Some Examples
Siva Sivaganesan - Bayesian Subgroup Analysis using Collections of ANOVA Models
Jared Murray - Bayesian nonparametric models for treatment effect heterogeneity: model parameterization, prior choice, and posterior summarization

06:00 PM - 07:30 PM
Poster Session with Refreshments

Tuesday, June 4

08:00 AM - 08:30 AM
Working Breakfast

08:30 AM - 10:10 AM
Session 5:
Mike Daniels - Bayesian nonparametrics for comparative effectiveness research in EHRs
Antonio Linero - Bayesian Nonparametric Methods for Longitudinal Outcomes Missing Not at Random
Yajuan Si - Bayesian profiling multiple imputation for missing electronic health records

10:30 AM - 12:10 PM
Session 6:
Georgia Papadogeorgou - Adjusting for unmeasured spatial confounding with distance adjusted propensity score matching
Sameer Deshpande - Sensitivity Analysis, Robustness, and Model Misspecification
Patrick Schnell - Interpretations and multiplicity control for Bayesian benefiting subgroup identification

Speakers and Talks

Name - Affiliation - Email
Andrew Chapple - LSU Health - achapp@lsuhsc.edu
Siddhartha Chib - Washington University - chib@wustl.edu
Mike Daniels - University of Florida - daniels@ufl.edu
Sameer Deshpande - Massachusetts Institute of Technology - sameerd@alum.mit.edu
Jennifer Hill - New York University - jennifer.hill@nyu.edu
Joseph Hogan - Brown University - jwh@brown.edu
Juhee Lee - University of California, Santa Cruz - juheelee@soe.ucsc.edu
Antonio Linero - Florida State University - arlinero@stat.fsu.edu
Thomas Murray - University of Minnesota - 8tmurray@gmail.com
Jared Murray - University of Texas, Austin - jared.murray@mccombs.utexas.edu
Georgia Papadogeorgou - Duke University - gp118@duke.edu
Jason Roy - Rutgers University - jason.roy@rutgers.edu
Patrick Schnell - The Ohio State University - schnell.31@osu.edu
Yajuan Si - University of Michigan Institute for Social Research - yajuan@umich.edu
Siva Sivaganesan - University of Cincinnati - sivagas@ucmail.uc.edu
Yanxun Xu - Johns Hopkins University - yanxun.xu@jhu.edu
Qingzhao Yu - LSU Health - qyu@lsuhsc.edu
Qingyuan Zhao - University of Pennsylvania - qyzhao@wharton.upenn.edu

Hussain Abbas:
The impact of Bayesian Methods on AutoML

AutoML is one of the hottest trends in Artificial Intelligence research and is poised to completely upend the standard model currently used in the Data Science industry.  Ironically, few understand what it is, how it works, and its relation to Bayesian Statistics. 

In this poster, we will discuss the impact of Bayesian Methods on AutoML. Specifically, we will discuss what AutoML is, how Bayesian Methods play a critical role in AutoML, how AutoML is poised to completely upend the standard model currently used in the Data Science industry, and future areas of research & application.
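
The poster itself is descriptive, but for readers who want a concrete picture of how Bayesian methods typically enter AutoML, the sketch below runs Gaussian-process-based Bayesian optimization of a single toy hyperparameter with an expected-improvement acquisition function. The objective function, kernel, and all settings are illustrative assumptions, not anything from the poster.

```python
# Illustrative sketch (not from the poster): Gaussian-process Bayesian
# optimization of one hyperparameter with an expected-improvement rule.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for a noisy validation loss as a function of one hyperparameter.
    return np.sin(3 * x) + 0.6 * (x - 0.7) ** 2 + 0.05 * rng.normal()

def expected_improvement(x_grid, gp, best_y, xi=0.01):
    mu, sigma = gp.predict(x_grid.reshape(-1, 1), return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu - xi) / sigma            # we are minimizing the loss
    return (best_y - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Small initial design, then sequentially query the point with highest EI.
X = rng.uniform(0, 2, size=3)
y = np.array([objective(x) for x in X])
grid = np.linspace(0, 2, 500)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X.reshape(-1, 1), y)
    x_next = grid[np.argmax(expected_improvement(grid, gp, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print("best hyperparameter:", round(X[np.argmin(y)], 3), "loss:", round(y.min(), 3))
```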


Kazeem Adepoju:
Bayesian Estimation of Rare Sensitive Attribute from Mangat Randomized Response Technique with Transmuted Kumaraswamy Prior

Surveying a human population about a sensitive attribute often results in evasive answers when individuals are asked directly. Most people prefer to conceal the truth regarding the extent of their accumulated wealth, their history of intentional tax evasion, and other illegal or unethical practices such as corruption, susceptibility to intoxication, and homosexuality, among other issues that are customarily disapproved of by society. Direct questioning often fails to yield reliable and consistent data on such confidential aspects of human life. Warner (1965) developed an alternative survey technique, known as the randomized response (RR) technique, to minimize the bias arising from such inquiries and thereby increase precision. Variants of Warner's RR model have been presented to estimate the proportion of people with a sensitive attribute in a Bayesian framework with a simple beta prior, further improving the privacy of respondents. In this work we propose a more robust prior, the Transmuted Kumaraswamy prior, for estimating the prevalence of a stigmatized attribute. The results confirm the superiority of the proposed Transmuted Kumaraswamy prior over the conventional simple beta prior in terms of minimum mean squared error.
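
For intuition only, the sketch below shows Bayesian estimation of a sensitive proportion under Warner's classic randomized response design with a simple Beta prior and a grid posterior; it does not implement the Mangat design or the transmuted Kumaraswamy prior proposed here, and all numbers are hypothetical.

```python
# Minimal sketch: grid posterior for a sensitive proportion pi under
# Warner's (1965) randomized response design with a Beta prior.
# (The poster's Mangat design and transmuted Kumaraswamy prior are not
# implemented; this only illustrates the generic RR likelihood.)
import numpy as np
from scipy.stats import beta, binom

p = 0.7            # probability the device selects the sensitive statement
n, yes = 500, 230  # hypothetical survey: n respondents, number of 'yes' answers
a, b = 1.0, 1.0    # Beta prior hyperparameters

pi_grid = np.linspace(1e-4, 1 - 1e-4, 2000)
# Probability of a 'yes' answer under Warner's design:
lam = p * pi_grid + (1 - p) * (1 - pi_grid)

log_post = beta.logpdf(pi_grid, a, b) + binom.logpmf(yes, n, lam)
post = np.exp(log_post - log_post.max())
post /= post.sum()

post_mean = np.sum(pi_grid * post)
print("posterior mean of sensitive-attribute prevalence:", round(post_mean, 3))
```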


Mohammad Bhuiyan:
Bayesian Shape Invariant Model for Longitudinal Data

Research on longitudinal growth curve modeling within the Bayesian framework is limited, and wider application is hindered by the computational complexity of such models. Recently, Cole et al. proposed a shape invariant model with random effects, Superimposition by Translation and Rotation (SITAR), to express individual growth curves through three subject-specific parameters (size, tempo, and velocity). We propose a Bayesian model to estimate these three subject-specific parameters, employing computational efficiencies to ease the estimation time burden. To illustrate our approach, longitudinal height measurements from children with ADHD between ages 1 and 17 years were modeled to evaluate the relationship between stimulant medication exposure and growth. We demonstrate and compare results of the SITAR model fit in a nonlinear mixed effects framework with our proposed Bayesian approach, both utilizing a natural cubic spline function, in evaluating the association of age at start of stimulant medication with height trajectories. We found early age of medication start to be associated with lower size and lower tempo (earlier timing of peak velocity).


Andrew Chapple:
Subgroup-Specific Dose Finding in Phase I Clinical Trials Based on Time to Toxicity Allowing Adaptive Subgroup Combination

A Bayesian design is presented that does precision dose finding based on time to toxicity in a phase I clinical trial with two or more patient subgroups. The design, called Sub‐TITE, makes sequentially adaptive subgroup‐specific decisions while possibly combining subgroups that have similar estimated dose‐toxicity curves. Decisions are based on posterior quantities computed under a logistic regression model for the probability of toxicity within a fixed follow‐up period, as a function of dose and subgroup. Similarly to the time‐to‐event continual reassessment method (TITE‐CRM, Cheung and Chappell), the Sub‐TITE design downweights each patient's likelihood contribution using a function of follow‐up time. Spike‐and‐slab priors are assumed for subgroup parameters, with latent subgroup combination variables included in the logistic model to allow different subgroups to be combined for dose finding if they are homogeneous. This framework can be used in trials where clinicians have identified patient subgroups but are not certain whether they will have different dose‐toxicity curves. A simulation study shows that, when the dose‐toxicity curves differ between all subgroups, Sub‐TITE has superior performance compared with applying the TITE‐CRM while ignoring subgroups and has slightly better performance than applying the TITE‐CRM separately within subgroups or using the two‐group maximum likelihood approach of Salter et al that borrows strength among the two groups. When two or more subgroups are truly homogeneous but differ from other subgroups, the Sub‐TITE design is substantially superior to either ignoring subgroups, running separate trials within all subgroups, or the maximum likelihood approach of Salter et al. Practical guidelines and computer software are provided to facilitate application.
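
As background for the follow-up-time weighting that Sub-TITE inherits from the TITE-CRM, the sketch below fits a two-parameter logistic dose-toxicity model with the weighted working likelihood of Cheung and Chappell, in which a not-yet-toxic patient contributes through a downweighted toxicity probability. It ignores subgroups, spike-and-slab priors, and adaptive subgroup combination, and the data are simulated, so it is only a schematic of one ingredient of the design.

```python
# Background sketch: TITE-style weighted working likelihood for a logistic
# dose-toxicity model (single group; no subgroups or spike-and-slab priors).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
T = 90.0                                    # fixed follow-up window (days)
dose = rng.choice([1.0, 2.0, 3.0, 4.0], size=60)
tox = rng.binomial(1, expit(-3.0 + 0.8 * dose))
follow_up = rng.uniform(10, T, size=60)     # observed follow-up for each patient

# Weight: 1 for observed toxicities, fraction of the window observed otherwise.
w = np.where(tox == 1, 1.0, np.minimum(follow_up / T, 1.0))

def neg_weighted_loglik(theta):
    alpha, beta_ = theta
    p = np.clip(expit(alpha + beta_ * dose), 1e-10, 1 - 1e-10)
    pw = w * p                              # weighted toxicity probability
    return -np.sum(tox * np.log(pw) + (1 - tox) * np.log(1 - pw))

fit = minimize(neg_weighted_loglik, x0=np.array([-2.0, 0.5]))
alpha_hat, beta_hat = fit.x
print("estimated P(toxicity) by dose:",
      np.round(expit(alpha_hat + beta_hat * np.array([1, 2, 3, 4])), 3))
```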


Siddhartha Chib:
Moment-Based Semiparametric Bayesian Causal Inference: Some Examples

We consider the problem of prior-posterior analysis of causal parameters under minimal, core assumptions, in particular, unconditional and conditional moment restrictions on the unknown probability distribution of the outcomes. The framework is based on the theory of estimation and model comparison, under the nonparametric exponentially tilted empirical likelihood, for moment-restricted models, developed in Chib, Shin and Simoni (2018) and Chib, Shin and Simoni (2019). We provide illustrations of this approach for the estimation of the causal parameter in instrumental variables regressions, for the problem of average treatment effect (ATE) estimation under the conditional ignorability assumption, and for regression-discontinuity ATE estimation under a sharp design.
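
To make the exponentially tilted empirical likelihood (ETEL) concrete, the sketch below computes ETEL weights and a grid posterior for the simplest moment restriction, E[X - theta] = 0. It is a schematic of the general machinery referenced above, not the authors' implementation, and the data and prior are invented.

```python
# Schematic: exponentially tilted empirical likelihood (ETEL) for the
# scalar moment restriction E[X - theta] = 0, used inside a grid posterior.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, scale=1.0, size=200)    # toy data

def log_etel(theta):
    g = x - theta                               # moment function g(x, theta)
    # Inner problem: lambda minimizes the empirical mean of exp(lambda * g).
    obj = lambda lam: np.mean(np.exp(lam * g))
    lam = minimize_scalar(obj, bounds=(-10, 10), method="bounded").x
    w = np.exp(lam * g)
    w /= w.sum()                                # exponentially tilted weights
    return np.sum(np.log(w))                    # log ETEL(theta)

# Prior-posterior analysis on a grid: posterior proportional to prior * ETEL.
grid = np.linspace(0.5, 2.5, 201)
log_prior = -0.5 * (grid / 10.0) ** 2           # vague normal prior
log_post = np.array([log_etel(t) for t in grid]) + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean of theta:", round(float(np.sum(grid * post)), 3))
```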


Leah Comment:
Nonparametric causal inference for semicompeting risks using Bayesian Additive Regression Trees (BART)

Causal inference sometimes centers around comparing treatments or exposures with respect to their effects on sequences of future events rather than on any single outcome of interest. Understanding effects in terms of event sequences arises in the context of semicompeting risks, where a nonterminal event (e.g., hospital readmission) may be truncated by a terminal event (e.g., death). Comment et al. (2019) recently introduced new causal estimands for this setting in which the proposed estimators relied on strong parametric assumptions. However, a goal of causal inference is to detect when treatment strategies should be tailored on the basis of covariates, and strong parametric assumptions can be problematic in two ways: inadequacy of confounder adjustment, and inability to characterize complex and heterogeneous treatment effects. To relax these assumptions, we propose a flexible approach using Bayesian additive regression trees (BART), a machine learning ensemble method. We demonstrate our approach in the context of estimating time-varying survivor average causal effects for hospital readmission among newly diagnosed late-stage pancreatic cancer patients.


Mike Daniels:
Bayesian Nonparametrics for Comparative Effectiveness Research in EHRs
Watch Video


Sameer Deshpande:
Estimating the Health Consequences of Playing Football using Observational Data: Challenges, Lessons Learned, and New Directions

Watch Video

There has been increasing concern in both the scientific community and the wider public about the short- and long-term consequences of playing American-style tackle football in adolescence. In this talk, I will discuss several matched observational studies that attempt to uncover this relationship using data from large longitudinal studies. Particular emphasis will be placed on discussing our general study design and analysis plan. I will also discuss limitations of our approach and outline how we might address these from a Bayesian perspective.


Preeti Dubey:
Modeling extracellular HBV DNA kinetics during infection and treatment in primary human hepatocytes

Detailed characterization of hepatitis B virus (HBV) kinetics during early infection and treatment in primary human hepatocytes (PHHs) is lacking. A mathematical model was developed to provide insights into the dynamics of HBV infection and treatment in PHHs, and unknown HBV-host parameters and treatment efficacy were estimated.
To explore the effect of treatment (entecavir, ETV) on PHHs, we assumed inhibition of HBV DNA production in the model. The model fit to ETV-treated HBV DNA reproduces the dynamics of the ETV-treated extracellular HBV DNA data (RMS = 0.11), and the estimated efficacy of inhibiting HBV DNA production was 99%.


Michael Elliott:
Penalized Spline of Propensity Methods for Treatment Comparisons in Longitudinal Treatment Settings

Observational studies lack randomized treatment assignment, and as a result valid inference about causal effects can only be drawn by controlling for confounders. When time dependent confounders are present that serve both as mediators of treatment effects and affect future treatment assignment, standard regression methods for controlling for confounders fail. We propose a robust Bayesian approach to causal inference in this setting we term Penalized Spline of Propensity Methods for Treatment Comparison (PENCOMP), which builds on the Penalized Spline of Propensity Prediction method for missing data problems. PENCOMP estimates causal effects by imputing missing potential outcomes with flexible spline models, and draws inference based on imputed and observed outcomes. We demonstrate that PENCOMP has a double robustness property for causal effects – that is, unbiased estimates can be obtained when either the treatment assignment or mean model is correctly specified, and simulations suggest that it tends to outperform doubly-robust marginal structural modeling when the relationship between propensity score and outcome is nonlinear or when the weights are highly variable. We further consider the issue of “overlap” – that is, restricting inference to the subset of the population in which the sampled data allows estimation of all of the treatments of interest without extrapolation. This is particularly important in the setting of longitudinal treatments for which PENCOMP is designed, where the set of estimable treatment patterns may be quite limited. We apply our method to evaluate the effects of antiretroviral treatment on CD4 counts in HIV infected patients.
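
The core PENCOMP idea, imputing missing potential outcomes from an outcome model that includes a spline of the estimated propensity, can be illustrated in a stripped-down form. The sketch below uses an unpenalized truncated-power spline basis, a single (non-Bayesian) imputation, and a point-treatment setting with simulated data, whereas the actual method uses penalized splines, multiple imputation with posterior draws, and handles longitudinal treatments.

```python
# Simplified, single-imputation sketch of the PENCOMP idea (point treatment):
# impute each unit's missing potential outcome from an arm-specific outcome
# model that includes a spline of the estimated propensity score.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=(n, 2))
a = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1]))))
y = 1.0 + 2.0 * a + 1.5 * x[:, 0] - x[:, 1] + rng.normal(size=n)

# Step 1: estimate propensity scores.
ps_hat = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]

# Step 2: truncated-power cubic spline basis in the propensity score
# (unpenalized here; PENCOMP uses a penalized spline).
def spline_basis(ps, knots=(0.25, 0.5, 0.75)):
    cols = [ps, ps**2, ps**3] + [np.clip(ps - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

Z = np.column_stack([spline_basis(ps_hat), x])

# Step 3: fit an outcome model within each arm and impute the missing
# potential outcome for units in the other arm.
m1 = LinearRegression().fit(Z[a == 1], y[a == 1])
m0 = LinearRegression().fit(Z[a == 0], y[a == 0])
y1 = np.where(a == 1, y, m1.predict(Z))
y0 = np.where(a == 0, y, m0.predict(Z))
print("estimated average treatment effect:", round(float(np.mean(y1 - y0)), 3))
```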


Trung Ha:
Applications of Bayesian Logistic Linear Mixed Model for Prevalence Estimation of a Smoke-Free Home by Parental Race and Ethnicity

We propose a hybrid approach that is based on the Calibrated Bayesian framework proposed by Little. The hybrid approach incorporates design-based estimation, where pre-specified survey weights are incorporated to reflect the survey design specifics, and model-based estimation, where the Bayesian Logistic Linear Mixed model is built and fitted to the area-level data. We illustrate the hybrid method using the 2014-15 Tobacco Use Supplement – Current Population Survey data for single-parent households. We identified 10 racial and ethnic parental groups and estimated the corresponding proportions of a smoke-free home for each group. The area-level model included several area proportions as the covariates: proportions of 25-44 year-old parents, higher-educated parents, unemployed parents, parents who were never married, parents who smoked daily, and parents who were surveyed by phone. In addition, we performed comparisons to the design-based approach. The hybrid method offered more informative interval estimators for several groups. The lowest prevalence of a smoke-free home corresponded to families with biracial parents who were either non-Hispanic Black/African American and White (Prevalence Estimate = 74%, 95% Highest Posterior Density Interval = 58% : 90%) or non-Hispanic Black/African American and American Indian/Alaska Native (Prevalence Estimate = 71%, 95% Highest Posterior Density Interval = 49% : 90%). The highest prevalence of a smoke-free home (above 90%) corresponded to families where the parent was either Hispanic or non-Hispanic Asian. The study pointed to existing disparities in the prevalence of a smoke-free home associated with race/ethnicity of the parent among single-parent households.


Jennifer Hill:
Partial Identification of Causal Effects in Grouped Data with Unobserved Confounding

The unbiased estimation of a treatment effect in the context of observational studies with grouped data is considered. When analyzing such data, researchers typically include as many predictors as possible, in an attempt to satisfy ignorability, and so-called fixed effects (indicators for groups) to capture unobserved between-group variation. However, depending on the mathematical properties of the data generating process, adding such predictors can actually increase treatment effect bias if ignorability is not satisfied. Exploiting information contained in multilevel model estimates, we generate bounds on the comparative potential bias of competing methods, which can inform model selection. Our approach relies on a parametric model for grouped data and an omitted confounder, establishing a framework for sensitivity analysis. We characterize the strength of the confounding along with bias amplification using easily interpretable parameters and graphical displays. Additionally we provide estimates of the uncertainty in the derived bounds and create a framework for estimating causal effects with partial identification.


Joseph Hogan:
Using Electronic Health Records Data for Predictive and Causal Inference About the HIV Care Cascade

The HIV care cascade is a conceptual model describing essential steps in the continuum of HIV care. The cascade framework has been widely applied to define population-level metrics and milestones for monitoring and assessing strategies designed to identify new HIV cases, link individuals to care, initiate antiviral treatment, and ultimately suppress viral load.

Comprehensive modeling of the entire cascade is challenging because data on key stages of the cascade are sparse. Many approaches rely on simulations of assumed dynamical systems, frequently using data from disparate sources as inputs. However, growing availability of large-scale longitudinal cohorts of individuals in HIV care affords an opportunity to develop and fit coherent statistical models using single sources of data, and to use these models for both predictive and causal inferences.

Using data from 90,000 individuals in HIV care in Kenya, we model progression through the cascade using a multistate transition model fitted using Bayesian Additive Regression Trees (BART), which allows considerable flexibility for the predictive component of the model. We show how to use the fitted model for predictive inference about important milestones and causal inference for comparing treatment policies. Connections to agent-based mathematical modeling are made.

This is joint work with Yizhen Xu, Tao Liu, Michael Daniels, Rami Kantor and Ann Mwangi


Wasiur KhudaBukhsh:
Functional Central Limit Theorem for Susceptible-Infected Process on Configuration Model Graphs

We study a stochastic compartmental susceptible-infected (SI) epidemic process on a configuration model random graph with a given degree distribution over a finite time interval. We split the population of graph nodes into two compartments, namely, S and I, denoting susceptible and infected nodes, respectively. In addition to the sizes of these two compartments, we study counts of SI-edges (those connecting a susceptible and an infected node), and SS-edges (those connecting two susceptible nodes). We describe the dynamical process in terms of these counts and present a functional central limit theorem (FCLT) for them, when the number of nodes in the random graph grows to infinity. To be precise, we show that these counts, when appropriately scaled, converge weakly to a continuous Gaussian vector semi-martingale process in the space of vector-valued cadlag functions endowed with the Skorohod topology. (Joint work with Casper Woroszylo, Greg Rempala and Heinz Koeppl)


Juhee Lee:
Optimizing Natural Killer Cell Doses for Heterogeneous Cancer Patients Based on Multiple Event Times
Watch Video

A sequentially adaptive Bayesian design is presented for a clinical trial of cord blood derived natural killer cells to treat severe hematologic malignancies. Given six prognostic subgroups defined by disease type and severity, the goal is to optimize cell dose in each subgroup. The trial has five co-primary outcomes, the times to severe toxicity, cytokine release syndrome, disease progression or response, and death. The design assumes a multivariate Weibull regression model, with marginals depending on dose, subgroup, and patient frailties that induce association among the event times. Utilities of all possible combinations of the nonfatal outcomes over the first 100 days following cell infusion are elicited, with posterior mean utility used as a criterion to optimize dose. For each subgroup, the design stops accrual to doses having an unacceptably high death rate, and at the end of the trial selects the optimal safe dose. A simulation study is presented to validate the design’s safety, ability to identify optimal doses, and robustness, and to compare it to a simplified design that ignores patient heterogeneity.
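
A stripped-down illustration of the dose-selection logic, choosing the dose with the highest posterior mean utility while excluding doses that are likely unsafe, appears below. It uses independent Beta posteriors for a single binary response and toxicity per dose rather than the multivariate Weibull event-time model of the talk, and the counts, utilities, and cutoffs are all invented.

```python
# Toy sketch: pick the dose with the highest posterior mean utility, after
# excluding doses that are likely unsafe. Independent Beta posteriors for
# binary response/toxicity stand in for the talk's event-time model.
import numpy as np

rng = np.random.default_rng(4)
doses = [1, 2, 3, 4]
# Hypothetical per-dose counts: dose -> (n, responses, toxicities)
data = {1: (12, 3, 1), 2: (12, 6, 2), 3: (12, 8, 5), 4: (12, 9, 8)}

utility = lambda resp, tox: 100 * resp - 60 * tox   # elicited-style utility
tox_limit, max_prob_unsafe = 0.40, 0.80             # illustrative safety rule
S = 4000                                             # posterior draws per dose

best_dose, best_u = None, -np.inf
for d in doses:
    n, r, t = data[d]
    p_resp = rng.beta(0.5 + r, 0.5 + n - r, size=S)  # posterior draws
    p_tox = rng.beta(0.5 + t, 0.5 + n - t, size=S)
    if np.mean(p_tox > tox_limit) > max_prob_unsafe:  # stop accrual: unsafe dose
        continue
    mean_u = np.mean(utility(p_resp, p_tox))          # posterior mean utility
    if mean_u > best_u:
        best_dose, best_u = d, mean_u

print("selected dose:", best_dose, "posterior mean utility:", round(best_u, 1))
```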


Sooyeong Lim:
Identifying Two-Stage Optimal Dynamic Treatment Regimes: Comparing the Performance of Different Methods under Model Misspecification

Dynamic treatment regimes (DTRs) are sequences of decisions made over time, e.g., medical treatment that is dynamically adjusted to the patient's responses. We search for an optimal DTR that maximizes a desirable outcome for patients with different baseline information. Q-learning and A-learning are, respectively, a parametric and a semi-parametric method proposed for finding the optimal DTR. Here, we propose to extend Bayesian additive regression trees (BART) to K-stage DTRs. To assess how well different DTR algorithms identify the optimal DTR, we propose a revised R[d^opt] criterion (Schulte et al., 2013) and the distance between the optimal outcome and the estimated optimal outcome. Different DTR methods for the two-stage setting are compared using R[d^opt] and the distance from the optimal outcome under potential model misspecification. Simulation results show that the proposed BART approach outperforms the Q- and A-learning methods when the model is not correctly specified.
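
For readers unfamiliar with the baseline comparator, here is a minimal two-stage Q-learning sketch with linear working models on simulated data; it is not the BART extension proposed in the poster, and the data-generating model is invented.

```python
# Minimal two-stage Q-learning sketch with linear working models
# (the standard comparator; not the poster's BART extension).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 5000
x1 = rng.normal(size=n)                        # baseline covariate
a1 = rng.binomial(1, 0.5, n)                   # stage-1 treatment
x2 = 0.5 * x1 + 0.3 * a1 + rng.normal(size=n)  # intermediate covariate
a2 = rng.binomial(1, 0.5, n)                   # stage-2 treatment
y = x1 + x2 + a1 * (1 - x1) + a2 * (0.5 + x2) + rng.normal(size=n)

# Stage 2: regress Y on history and A2 (with interaction), then compute the
# pseudo-outcome assuming the optimal stage-2 action is taken.
H2 = np.column_stack([x1, a1, x2, a2, a2 * x2])
q2 = LinearRegression().fit(H2, y)
def q2_pred(a):
    return q2.predict(np.column_stack([x1, a1, x2, np.full(n, a), a * x2]))
pseudo_y = np.maximum(q2_pred(0), q2_pred(1))

# Stage 1: regress the pseudo-outcome on baseline history and A1.
H1 = np.column_stack([x1, a1, a1 * x1])
q1 = LinearRegression().fit(H1, pseudo_y)
def q1_pred(a):
    return q1.predict(np.column_stack([x1, np.full(n, a), a * x1]))

rule1 = (q1_pred(1) > q1_pred(0)).astype(int)   # estimated stage-1 rule
rule2 = (q2_pred(1) > q2_pred(0)).astype(int)   # estimated stage-2 rule
print("share recommended treatment at stage 1:", rule1.mean())
print("share recommended treatment at stage 2:", rule2.mean())
```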


Antonio Linero:
Bayesian Nonparametric Methods for Longitudinal Outcomes Missing Not at Random

We consider the setting of a longitudinal outcome subject to nonignorable missingness. This requires specifying a joint model f(y, r) for the response y and the missing data indicators r. We argue that the obvious Bayesian nonparametric approaches to joint modeling which have been applied in the literature run afoul of the inherent identifiability issues with nonignorable missingness, leading to posteriors with dubious theoretical behavior and producing questionable inferences. As an alternative, we propose an indirect specification of a prior on the observed data generating mechanism f(y_{obs}, r), which is fully identified given the data. This prior is then used in conjunction with an identifying restriction to conduct inference. Advantages of this approach include a flexible modeling framework, access to simple computational methods, flexibility in the choice of "anchoring" assumptions, strong theoretical support, straightforward sensitivity analysis, and applicability to non-monotone missingness.


Ariadna Martinez Gonzalez:
Connective Infrastructure and Its Effect on Poverty Reduction in Mexico

This research investigates the effect of connective infrastructure on poverty, where connective infrastructure is measured as kilometers of highways. The high level of poverty in rural Mexico has been one of the targets of Mexican policies. One avenue to reduce poverty is to provide connective infrastructure to enhance access to more prosperous urban centers; however, the causality between those two variables is not straightforward. The main empirical challenge is that connective infrastructure and poverty are endogenous; that is, connective infrastructure can cause poverty and poverty can be the main reason for the construction of connective infrastructure in a region. Connective infrastructure can cause poverty by hurting local businesses that, were it not for the newly built infrastructure, would not have been exposed to urban competitors. That said, policymakers can reduce poverty by building highways that connect poor regions to main cities, schools, hospitals, and other public services. This work builds on recent literature that employs instrumental variables (IV) to control for this endogeneity between infrastructure and poverty. One of these IVs has not been used before in the literature: the distance from the centroid of each municipality to the optimal highway system derived from relocating the kilometers of highways in 1997 so that they minimize the distance to the centroids of population. In an important contribution I develop a unique database through historical research, digitization of maps, quantification of connective infrastructure, and the merging of several other national data sources. The results of this research suggest that the effect of an increase in highway infrastructure on the poverty rate is not significantly different from zero when the best specification is used. In the most optimistic (but not the best specified) case, an increase in highway infrastructure by 1% reduces the poverty rate by 9.5 percentage points. According to my back-of-the-envelope calculations, this means that the average cost of getting one person out of poverty would be 2,600 US dollars spread over 13 years (1997 to 2010). Consequently, a more assertive decision is to rely on the results of the better-specified models and state that this research found that the effect of highway infrastructure is not significantly different from zero.
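
For readers unfamiliar with the instrumental-variables strategy sketched above, the snippet below shows a generic two-stage least squares (2SLS) estimate on simulated data with one endogenous regressor and one instrument; it does not use the author's data, instrument, or specification.

```python
# Generic two-stage least squares (2SLS) sketch with one endogenous regressor
# and one instrument, on simulated data (not the author's data or model).
import numpy as np

rng = np.random.default_rng(6)
n = 10000
u = rng.normal(size=n)                            # unobserved confounder
z = rng.normal(size=n)                            # instrument (e.g., distance-based)
infra = 0.8 * z + 0.6 * u + rng.normal(size=n)    # endogenous regressor
poverty = -0.5 * infra + 0.9 * u + rng.normal(size=n)

def ols(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]   # [intercept, slope, ...]

# Naive OLS is biased by the unobserved confounder.
print("OLS estimate:  ", round(ols(infra, poverty)[1], 3))

# Stage 1: project the endogenous regressor on the instrument.
infra_hat = np.column_stack([np.ones(n), z]) @ ols(z, infra)
# Stage 2: regress the outcome on the fitted values.
print("2SLS estimate: ", round(ols(infra_hat, poverty)[1], 3))
```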


Jami Mulgrave:
A Bayesian Analysis Approach to Credible Interval Calibration to Improve Reproducibility of Observational Studies

Observational healthcare data offer the potential to estimate causal effects of medical products at scale. However, observational studies have often been found to be nonreproducible. Two sources of error, random and systematic are partly to blame. Random error, caused by sampling variability, is typically quantified and converges to zero as databases become larger. Systematic error, from causes such as confounding, selection bias, and measurement error, persists independently from sample size and increases in relative importance. While there is widespread awareness of systematic error in observational studies, there is little work in devising approaches to empirically estimate the magnitude of systematic error. Negative controls have been proposed as a tool to better explain systematic error. Negative controls are exposure-outcome pairs where one believes no causal effect exists. Executing a study on negative controls and determining whether the results indeed show no effect, i.e. using them as a “falsification hypothesis”, can help detect bias inherent to the study design or data.  In order to account for these biases, one could incorporate the effect of error observed for negative controls into the estimates of observational studies, in effect calibrating the estimates. In addition, one could incorporate the error observed for positive controls to account for bias. Positive controls are exposure-outcome pairs where a causal effect of known magnitude is believed to exist.  In this work, we apply a Bayesian statistical procedure for credible interval calibration that uses negative and positive controls. We show that the credible interval calibration procedure restores nominal characteristics, such as 95% coverage of the true effect size by the 95% credible interval.  We recommend credible interval calibration to improve reproducibility of observational studies. 
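
The sketch below gives a simplified, frequentist-flavored version of interval calibration: a normal systematic-error distribution is fit to negative-control estimates by method of moments and folded into a new study's interval. The procedure described in the poster is Bayesian and also uses positive controls; all numbers here are hypothetical.

```python
# Simplified interval calibration using negative controls only.
# A normal systematic-error model is fit to the negative-control estimates
# (method of moments) and folded into a new study estimate's interval.
# The poster's procedure is Bayesian and additionally uses positive controls.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical negative-control results: log effect estimates and their SEs
# (true effects are assumed null, so deviations reflect systematic error).
nc_est = rng.normal(loc=0.10, scale=0.15, size=30)
nc_se = np.full(30, 0.08)

mu = nc_est.mean()                                        # mean systematic bias
tau2 = max(nc_est.var(ddof=1) - np.mean(nc_se**2), 0.0)   # excess (systematic) variance

# New study estimate (log scale) to be calibrated.
theta_hat, se = 0.35, 0.10

half_width = 1.96 * np.sqrt(se**2 + tau2)
lo, hi = theta_hat - mu - half_width, theta_hat - mu + half_width
print("uncalibrated 95% interval:", (round(theta_hat - 1.96 * se, 2),
                                     round(theta_hat + 1.96 * se, 2)))
print("calibrated 95% interval:  ", (round(lo, 2), round(hi, 2)))
```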


Jared Murray:
Bayesian Nonparametric Models for Treatment Effect Heterogeneity: Model Parameterization, Prior Choice, and Posterior Summarization 

We describe different approaches for specifying models and prior distributions for estimating heterogeneous treatment effects. We make an affirmative case for direct, informative (or partially informative) prior distributions on heterogeneous treatment effects, especially when the treatment effect size and treatment effect variation are small relative to other sources of variability. We also consider how to provide scientifically meaningful summaries of complicated, high-dimensional posterior distributions over heterogeneous treatment effects in a statistically principled fashion.


Thomas Murray:
A Bayesian Imputation Approach to Optimizing Dynamic Treatment Regimes

Watch Video

Medical therapy often consists of multiple stages, with a treatment chosen by the physician at each stage based on the patient's history of treatments and clinical outcomes. These decisions can be formalized as a dynamic treatment regime. This talk describes a new approach for optimizing dynamic treatment regimes that bridges the gap between Bayesian inference and Q-learning. The proposed approach fits a series of Bayesian regression models, one for each stage, in reverse sequential order. Each model uses as a response variable the remaining payoff assuming optimal actions are taken at subsequent stages, and as covariates the current history and relevant actions at that stage. The key difficulty is that the optimal decision rules at subsequent stages are unknown, and even if these optimal decision rules were known the payoff under the subsequent optimal action(s) may be counterfactual. However, posterior distributions can be derived from the previously fitted regression models for the optimal decision rules and the counterfactual payoffs under a particular set of rules. The proposed approach uses imputation to average over these posterior distributions when fitting each regression model. An efficient sampling algorithm, called the backwards induction Gibbs (BIG) sampler, for estimation is presented, along with simulation study results that compare implementations of the proposed approach with Q-learning.
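
A compressed sketch of the backward, imputation-based fitting loop is below: the stage-2 coefficient posterior is approximated by a normal distribution around the least-squares fit, counterfactual optimal stage-2 payoffs are imputed by averaging over posterior draws, and the stage-1 model is then fit to the imputed payoffs. The BIG sampler described in the talk is a full Gibbs scheme rather than this normal approximation, and the data here are simulated.

```python
# Compressed sketch of backward fitting with Bayesian imputation for a
# two-stage regime. Stage-2 coefficient uncertainty is approximated by a
# normal posterior around OLS; the talk's BIG sampler is a full Gibbs scheme.
import numpy as np

rng = np.random.default_rng(8)
n = 3000
x1 = rng.normal(size=n)
a1 = rng.binomial(1, 0.5, n)
x2 = 0.4 * x1 + 0.3 * a1 + rng.normal(size=n)
a2 = rng.binomial(1, 0.5, n)
y = x1 + x2 + a1 * (1 - x1) + a2 * (0.5 + x2) + rng.normal(size=n)

def fit_posterior(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)        # approximate posterior covariance
    return beta, cov

X2 = np.column_stack([x1, a1, x2, a2, a2 * x2])
beta2, cov2 = fit_posterior(X2, y)

# Impute the payoff under the OPTIMAL stage-2 action, averaging over
# posterior draws of the stage-2 coefficients.
def design2(a):
    return np.column_stack([np.ones(n), x1, a1, x2, np.full(n, a), a * x2])
draws = rng.multivariate_normal(beta2, cov2, size=200)
imputed = np.mean([np.maximum(design2(0) @ b, design2(1) @ b) for b in draws],
                  axis=0)

# Stage 1: fit the model for the imputed optimal payoff and read off the rule.
X1 = np.column_stack([x1, a1, a1 * x1])
beta1, _ = fit_posterior(X1, imputed)
rule1 = (np.column_stack([np.ones(n), x1, np.ones(n), x1]) @ beta1 >
         np.column_stack([np.ones(n), x1, np.zeros(n), np.zeros(n)]) @ beta1)
print("share recommended stage-1 treatment:", rule1.mean())
```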


Ciara Nugent:
Inferring treatment effects for unplanned subgroups using multiple studies

Many clinical trials are underpowered to detect subgroups on their own, hindering inference for causal treatment effects in all but a few pre-specified subgroups. We propose a framework to recover inference on treatment effects in unplanned subpopulations by setting up suitable Bayesian inference and decision problems, and by borrowing strength from other sources. That is, we look at conducting subgroup analysis across multiple studies. First, we embed individual randomized control trials in a hierarchical framework, such that information can be shared across them. Then we compare alternative prior specifications on the shared covariates. In particular, we compare normal shrinkage priors to non-local priors and spike and slab priors. We demonstrate the proposed approach in simulation studies, and in an actual trial for progesterone treatment for women at risk for preterm delivery.


Georgia Papadogeorgou:
Adjusting for Unmeasured Spatial Confounding with Distance Adjusted Propensity Score Matching

Watch Video

Propensity score matching is a common tool for adjusting for observed confounding in observational studies, but is known to have limitations in the presence of unmeasured confounding. In many settings, researchers are confronted with spatially-indexed data where the relative locations of the observational units may serve as a useful proxy for unmeasured confounding that varies according to a spatial pattern. We develop a new method, termed distance adjusted propensity score matching (DAPSm) that incorporates information on units’ spatial proximity into a propensity score matching procedure. DAPSm provides a framework for augmenting a “standard” propensity score analysis with information on spatial proximity and provides a transparent and principled way to assess the relative trade-offs of prioritizing observed confounding adjustment versus spatial proximity adjustment. The method is motivated by and applied to a comparative effectiveness investigation of power plant emission reduction technologies designed to reduce population exposure to ambient ozone pollution.
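
The core of the matching criterion can be illustrated as follows: a weighted combination of propensity-score difference and standardized spatial distance drives a greedy one-to-one match. The trade-off weight w is fixed here for simplicity, whereas DAPSm also provides ways to choose it, and everything in the example is simulated.

```python
# Illustrative greedy matching on a distance-adjusted propensity score:
# DAPS(i, j) = w * |PS_i - PS_j| + (1 - w) * standardized spatial distance.
# The trade-off weight w is fixed here; DAPSm also provides ways to choose it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 400
coords = rng.uniform(0, 100, size=(n, 2))           # unit locations
x = rng.normal(size=(n, 2))                         # observed covariates
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))     # treatment assignment
y = 2.0 * a + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
treated, controls = np.where(a == 1)[0], np.where(a == 0)[0]

dist = np.linalg.norm(coords[treated][:, None, :] - coords[controls][None, :, :],
                      axis=2)
dist_std = dist / dist.max()                        # standardized distance
ps_diff = np.abs(ps[treated][:, None] - ps[controls][None, :])

w = 0.7                                             # fixed trade-off weight
daps = w * ps_diff + (1 - w) * dist_std

# Greedy one-to-one matching of each treated unit to its closest control.
available = np.ones(len(controls), dtype=bool)
effects = []
for i in range(len(treated)):
    if not available.any():
        break
    scores = np.where(available, daps[i], np.inf)
    j = int(np.argmin(scores))
    available[j] = False
    effects.append(y[treated[i]] - y[controls[j]])

print("matched-pairs ATT estimate:", round(float(np.mean(effects)), 3))
```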


Jinwen Qiu:
Multivariate Bayesian Structural Time Series Model

This work deals with inference and prediction for multiple correlated time series, where one also has the choice of using a candidate pool of contemporaneous predictors for each target series. Starting with a structural model for the time series, we use Bayesian tools for model fitting, prediction, and feature selection, thus extending some recent work along these lines for the univariate case. The Bayesian paradigm in this multivariate setting helps the model avoid overfitting and captures correlations among multiple target time series through various state components. The model provides the needed flexibility to select a different set of components and available predictors for each target series. The cyclical component in the model can handle large variations in the short term, which may be caused by external shocks. Extensive simulations were run to investigate properties such as estimation accuracy and performance in forecasting. This was followed by an empirical study with one-step-ahead prediction on the maximum log return of a portfolio of stocks involving four leading financial institutions. Both the simulation studies and the extensive empirical study confirm that this multivariate model outperforms three other benchmark models, viz. a model that treats each target series as independent, the autoregressive integrated moving average model with regression (ARIMAX), and the multivariate ARIMAX (MARIMAX) model.


Jason Roy:
A Bayesian Nonparametric Approach to Structural Mean Models

Structural mean models (SMMs) are causal models that involve mean contrasts of potential outcomes, conditional on confounders and observed treatment. We show how standard Bayesian additive regression trees (BART) can be modified to estimate parameters from SMMs. This allows researchers to fit models with the desired causal structure while making minimal assumptions about the relationship between confounders and outcome. Our methods are demonstrated with simulation studies and an application to a dataset involving adults with HIV/Hepatitis C coinfection who newly initiate antiretroviral therapy.


Patrick Schnell:
Mitigating Bias from Unobserved Spatial Confounders Using Mixed Effects Models
Watch Video

Confounding by unmeasured spatial variables has been a recent interest in both spatial statistics and causal inference literature, but the concepts and approaches have remained largely separated. We aim to add a link between these branches of statistics by considering unmeasured spatial confounding within a formal causal inference framework, and estimating effects using outcome regression tools popular within the spatial statistics literature. We show that the common approach of using spatially correlated random effects does not mitigate bias due to spatial confounding, and present a set of assumptions that can be used to do so. Based on these assumptions and a conditional autoregressive model for spatial random effects, we propose an affine estimator which addresses this deficiency, and illustrate its application to causes of fine particulate matter concentration in New England.


Yajuan Si:
Bayesian profiling multiple imputation for missing electronic health records

Electronic health records (EHRs) are increasingly popular for clinical and comparative effectiveness research but suffer from usability deficiencies. Motivated by health services research on diabetes care, we seek to increase the quality of EHRs and focus on missing longitudinal glycosylated hemoglobin (A1C) values. Under the framework of multiple imputation (MI) we propose an individualized Bayesian latent profiling approach to capturing A1C measurement trajectories related to missingness. We combine MI inferences to evaluate the effect of A1C control on adverse health outcome incidence. We examine different missingness mechanisms and perform model diagnostics and sensitivity analysis. The proposed method is applied to EHRs of adult patients with diabetes who were medically homed in a large academic Midwestern health system between 2003 and 2013. Our approach fits flexible models with computational efficiency and provides useful insights in the clinical setting.


Siva Sivaganesan:
Bayesian Subgroup Analysis using Collections of ANOVA Models

In this talk, we will discuss a Bayesian approach to subgroup analysis based on ANOVA type models to determine heterogeneous subgroup effects. We consider a two-arm clinical trial and assume that the subgroups of interest are defined by categorical covariates. We use a collection of models for the response variable determined by the subgroups of interest, and use a model selection approach to find the posterior probabilities. We then use a structured algorithm  designed to favor parsimony to determine which subgroup effects to report. The algorithm is shown to  approximate a Bayes rule corresponding to a suitable loss function. We will first present the approach focusing mostly on the 2 by 2 case of 2 covariates each at 2 levels, and later discuss its extensions and the challenges.


Yanxun Xu:
Bayesian Estimation of Individualized Treatment-Response Curves in Populations with Heterogeneous Treatment Effects

Watch Video

Estimating individual treatment effects is crucial for individualized or precision medicine. In reality, however, there is no way to obtain both the treated and untreated outcomes from the same person at the same time. An approximation can be obtained from randomized controlled trials (RCTs), but randomization is often expensive, impractical, or unethical, and even pre-specified variables may not fully capture all the relevant characteristics driving individual heterogeneity in treatment response. In this work, we use non-experimental data; we model heterogeneous treatment effects in the studied population and provide a Bayesian estimator of the individual treatment response. More specifically, we develop a novel Bayesian nonparametric (BNP) method that leverages the G-computation formula to adjust for time-varying confounding in observational data, and it flexibly models sequential data to provide posterior inference over the treatment response at both the group level and the individual level. On a challenging dataset containing time series from patients admitted to an intensive care unit (ICU), our approach reveals that these patients have heterogeneous responses to the treatments used in managing kidney function. We also show that on held-out data the resulting predicted outcome in response to treatment (or no treatment) is more accurate than alternative approaches.
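
The G-computation step can be illustrated with simple parametric stand-ins for the BNP model: fit models for the time-varying confounder and the outcome, then simulate forward under a fixed treatment sequence and average. The two-time-point setup, linear working models, and assumed unit residual standard deviation below are all illustrative choices, not the talk's method.

```python
# Parametric stand-in for the G-computation step: fit models for the
# time-varying confounder and the outcome, then simulate forward under a
# fixed treatment sequence (a1, a2) and average the predicted outcomes.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 20000
l1 = rng.normal(size=n)                               # baseline covariate
a1 = rng.binomial(1, 1 / (1 + np.exp(-l1)))           # treatment depends on L1
l2 = 0.7 * l1 - 0.5 * a1 + rng.normal(size=n)         # time-varying confounder
a2 = rng.binomial(1, 1 / (1 + np.exp(-l2)))           # treatment depends on L2
y = l1 + l2 + 1.0 * a1 + 1.5 * a2 + rng.normal(size=n)

# Working models (stand-ins for the BNP components).
m_l2 = LinearRegression().fit(np.column_stack([l1, a1]), l2)
m_y = LinearRegression().fit(np.column_stack([l1, a1, l2, a2]), y)

def g_formula(a1_set, a2_set, n_sim=50000):
    l1_s = rng.choice(l1, size=n_sim, replace=True)   # resample baseline covariates
    l2_mean = m_l2.predict(np.column_stack([l1_s, np.full(n_sim, a1_set)]))
    l2_s = l2_mean + rng.normal(scale=1.0, size=n_sim)  # assumed unit residual SD
    X = np.column_stack([l1_s, np.full(n_sim, a1_set), l2_s,
                         np.full(n_sim, a2_set)])
    return m_y.predict(X).mean()

ate = g_formula(1, 1) - g_formula(0, 0)
print("G-computation estimate of E[Y^(1,1)] - E[Y^(0,0)]:", round(ate, 3))
```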


Qingzhao Yu:
A Bayesian Sequential Design with Adaptive Randomizations

Watch Video

Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this talk, we consider two-arm clinical trials. Patients are allocated to the two arms with a randomization rate chosen to achieve minimum variance for the test statistic. An alpha spending function is used to control the overall type I error of the hypothesis testing. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample size. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size. The proposed method can further reduce the required sample size.
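
In the two-arm, continuous-outcome case, the variance-minimizing allocation has a simple closed form (Neyman allocation): randomize to arm 1 with probability s1 / (s1 + s2), where s1 and s2 are the arm-specific outcome standard deviations. The sketch below re-estimates that rate from accumulating simulated data; it is only a schematic of the adaptive-randomization ingredient, not the full sequential design with alpha spending described in the talk.

```python
# Schematic of the adaptive-randomization component only: for a two-arm trial
# with continuous outcomes, the allocation that minimizes the variance of the
# difference in means assigns arm 1 with probability s1 / (s1 + s2)
# (Neyman allocation), re-estimated as data accumulate.
import numpy as np

rng = np.random.default_rng(11)
true_mean, true_sd = {1: 1.0, 2: 1.3}, {1: 1.0, 2: 2.0}
outcomes = {1: [], 2: []}

def allocation_prob():
    # Fall back to equal randomization until both arms have a few outcomes.
    if min(len(outcomes[1]), len(outcomes[2])) < 5:
        return 0.5
    s1, s2 = np.std(outcomes[1], ddof=1), np.std(outcomes[2], ddof=1)
    return float(np.clip(s1 / (s1 + s2), 0.1, 0.9))  # clipped for stability

for _ in range(300):
    arm = 1 if rng.random() < allocation_prob() else 2
    outcomes[arm].append(rng.normal(true_mean[arm], true_sd[arm]))

print("final allocation probability to arm 1:", round(allocation_prob(), 3))
print("patients per arm:", {k: len(v) for k, v in outcomes.items()})
```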


Qingyuan Zhao:
Sensitivity Analysis for IPW Estimators via the Percentile Bootstrap

Watch Video

In this talk I will introduce a marginal sensitivity model for IPW estimators which is a natural extension of Rosenbaum’s sensitivity model for matched observational studies. The goal is to construct confidence intervals based on inverse probability weighting estimators, such that the intervals have asymptotically nominal coverage of the estimand whenever the data generating distribution is in the collection of marginal sensitivity models. I will use a percentile bootstrap and a generalized minimax/maximin inequality to transform this intractable problem to a linear fractional programming problem, which can be solved very efficiently. I will illustrate our method using a real dataset to estimate the causal effect of fish consumption on blood mercury level.
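
As a baseline for the talk, the sketch below forms a percentile-bootstrap confidence interval for a plain IPW estimate of the average treatment effect on simulated data. The talk's contribution is to extend this so the interval remains valid over a whole marginal sensitivity model, which requires solving a linear fractional program inside each bootstrap iteration and is not shown here.

```python
# Baseline sketch: percentile bootstrap for an IPW estimate of the ATE.
# (The talk extends this to intervals valid over a marginal sensitivity
# model by solving a linear fractional program inside each bootstrap draw.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(12)
n = 2000
x = rng.normal(size=(n, 2))
e = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))     # true propensity
a = rng.binomial(1, e)
y = 1.0 * a + x[:, 0] + x[:, 1] + rng.normal(size=n)

def ipw_ate(idx):
    xb, ab, yb = x[idx], a[idx], y[idx]
    eb = LogisticRegression().fit(xb, ab).predict_proba(xb)[:, 1]
    eb = np.clip(eb, 0.01, 0.99)                     # guard against extreme weights
    return np.mean(ab * yb / eb) - np.mean((1 - ab) * yb / (1 - eb))

point = ipw_ate(np.arange(n))
boot = [ipw_ate(rng.integers(0, n, size=n)) for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print("IPW ATE:", round(point, 3),
      "95% percentile bootstrap CI:", (round(lo, 3), round(hi, 3)))
```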
