Upcoming Colloquia

All seminars will be held in the MBI Lecture Hall - Jennings Hall, Room 355 - unless otherwise noted.

August 31, 2015 3:00 - 3:50PM
Host: Adriana Dawes

Yeast cells are able to direct growth toward gradients of mating pheromone (chemotropism). During chemotropism, the site of new cell growth is determined by a patch of polarity factors that wanders around the cell cortex. Interestingly, yeast also polarize their receptors in response to pheromone, but the benefit of such polarization was unknown. Mathematical modeling suggests a novel mechanism for gradient sensing in which active receptors and associated G proteins lag behind the polarity patch and act as an effective drag on patch movement. Because the strength of this effective drag is proportional to the local pheromone concentration, the location of the polarity patch, and hence the site of cell growth, tends to align with the pheromone gradient. Consistent with model predictions, the polarity patch is trailed by a G protein-rich domain, and this polarized distribution of G proteins is required to constrain patch wandering. Our findings explain why receptor polarization is beneficial and illuminate a novel mechanism for gradient tracking.
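
The core of this mechanism can be illustrated with a deliberately simple computation: a patch performing a random walk on a circular cortex, with an effective drag, proportional to the local pheromone concentration, that reduces how far the patch moves per step. The Python sketch below is an illustration of that idea only, not the speaker's model; the cosine-shaped gradient, the drag law, and every parameter value are assumptions chosen for demonstration.

# Toy illustration (not the speaker's model): a polarity patch random-walking
# on a circular cortex, where the local pheromone concentration sets an
# effective drag that shrinks the patch's step size. All functional forms and
# parameter values are assumptions made for demonstration only.
import numpy as np

rng = np.random.default_rng(1)

theta_source = 0.0      # direction of the pheromone source (assumed)
gradient = 0.5          # relative steepness of the gradient (assumed)
drag_strength = 4.0     # coupling between concentration and drag (assumed)
base_step = 0.15        # patch mobility in the absence of drag (assumed)

def concentration(theta):
    # Pheromone concentration around the cortex, highest toward the source.
    return 1.0 + gradient * np.cos(theta - theta_source)

theta = np.pi           # start the patch on the side opposite the source
positions = []
for _ in range(200_000):
    # Drag proportional to local concentration reduces the step size, so the
    # patch wanders less, and therefore lingers longer, up the gradient.
    step = base_step / (1.0 + drag_strength * concentration(theta))
    theta = (theta + step * rng.standard_normal()) % (2 * np.pi)
    positions.append(theta)

positions = np.array(positions)
# Fraction of time spent within 45 degrees of the source (0.25 for a
# drag-free walk, which is uniform over the circle in the long run).
aligned = np.mean(np.cos(positions - theta_source) > np.cos(np.pi / 4))
print(f"time within 45 degrees of the source: {aligned:.2f}")

In this toy version the patch accumulates where its mobility is lowest, which is exactly where the assumed concentration, and hence the drag, is highest.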

September 21, 2015 3:00 - 3:50PM
Host: Greg Rempala

We develop a model-based empirical Bayes approach to variable selection problems in which the number of predictors is very large, possibly much larger than the number of responses (the so-called “large p, small n” problem). Motivated by QTL (quantitative trait loci) studies, we consider the multiple linear regression setting, where the response is assumed to be a continuous variable and a linear function of the predictors. The explanatory variables in the linear model can have a positive effect on the response, a negative effect, or no effect. Thus, we model the effects of the linear predictors as a three-component mixture, where each component follows a normal distribution with mean μ, −μ, or 0. A key assumption in our approach is that only a small fraction of the candidate predictors have a non-zero effect on the response variable. By treating the putative variables as random effects, we obtain shrinkage estimation, which results in increased power. This approach is computationally efficient because the number of parameters that must be estimated is small and remains constant regardless of the number of explanatory variables in the linear regression model. The model parameters are estimated using the EM algorithm, which converges significantly faster than simulation-based methods. Furthermore, we employ computational tricks that allow us to increase the speed of our algorithm, to handle a very large number of putative variables, and to avoid multicollinearity in the regression model.
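
As a rough illustration of the model structure described above (not the speakers' implementation), the Python sketch below simulates a “large p, small n” regression whose effects are drawn from the three-component mixture and then fits that mixture by EM to crude per-predictor estimates. The problem sizes, starting values, and the use of marginal least-squares estimates in place of the full model-based approach are all assumptions made for demonstration.

# Simplified sketch, not the speakers' method: simulate effects from the
# three-component mixture (means mu, -mu, 0) and recover the mixture by EM.
import numpy as np

rng = np.random.default_rng(0)

# --- simulate "large p, small n" data under the three-component model ---
n, p = 100, 2000                 # assumed sizes: few responses, many predictors
mu_true, sigma_b = 1.0, 0.1      # assumed effect size and spread
frac_nonzero = 0.01              # only a small fraction of predictors matter

X = rng.standard_normal((n, p))
component = rng.choice([0, 1, 2], size=p,
                       p=[1 - frac_nonzero, frac_nonzero / 2, frac_nonzero / 2])
means = np.array([0.0, mu_true, -mu_true])[component]
# Null effects are set exactly to zero here for simplicity.
beta = np.where(component != 0, means + sigma_b * rng.standard_normal(p), 0.0)
y = X @ beta + rng.standard_normal(n)

# --- crude per-predictor effect estimates (marginal least squares) ---
b_hat = X.T @ y / (X ** 2).sum(axis=0)

# --- EM for the three-component normal mixture fitted to b_hat ---
def em_mixture(b, n_iter=200):
    mu, sigma, pi1 = 0.5, b.std(), 0.05          # assumed starting values
    for _ in range(n_iter):
        comps = np.array([0.0, mu, -mu])
        weights = np.array([1.0 - 2.0 * pi1, pi1, pi1])
        # E-step: responsibility of each component (null, +mu, -mu) for each
        # estimate; the common normal constant cancels in the ratio.
        dens = weights[:, None] * np.exp(
            -(b[None, :] - comps[:, None]) ** 2 / (2.0 * sigma ** 2))
        resp = dens / dens.sum(axis=0)
        # M-step: update the mixing weight, the common mean mu, and sigma.
        pi1 = 0.5 * (resp[1].sum() + resp[2].sum()) / b.size
        mu = ((resp[1] - resp[2]) * b).sum() / max(resp[1].sum() + resp[2].sum(), 1e-12)
        comps = np.array([0.0, mu, -mu])
        sigma = np.sqrt((resp * (b[None, :] - comps[:, None]) ** 2).sum() / b.size)
    return mu, sigma, pi1, resp

mu_hat, sigma_hat, pi1_hat, resp = em_mixture(b_hat)
print(f"estimated mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}, "
      f"non-null fraction = {2 * pi1_hat:.4f}")
# Rank predictors by posterior probability of a non-zero effect.
top = np.argsort(resp[1] + resp[2])[::-1][:10]
print("top candidate predictors:", top)

Only three mixture parameters (mu, sigma, and the non-null weight) are updated per EM iteration, which mirrors the abstract's point that the number of estimated parameters does not grow with the number of predictors; the speakers' full approach works with the regression model itself rather than with marginal estimates.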

TBD
October 19, 2015 3:00 - 3:50PM
Host: Ching-Shan Chou

Abstract not submitted.

TBD
November 02, 2015 3:00 - 3:50PM
Host: Adriana Dawes

Abstract not submitted.

TBD
November 09, 2015 3:00 - 3:50PM
Host: TBD

Abstract not submitted.

TBD
March 07, 2016 3:00 - 3:50PM
Host: Greg Rempala

Abstract not submitted.

TBD
March 28, 2016 3:00 - 3:50PM
Host: Marty Golubitsky

Abstract not submitted.

TBD
April 04, 2016 3:00 - 3:50PM

Abstract not submitted.