Daniel Flam-Shepherd

Hello, I recently finished an MSc in Statistics at the University of Toronto, where I worked with David Duvenaud. Before that I was a graduate student in the Atmospheric Physics group under the supervision of Dylan Jones. I am broadly interested in statistics and machine learning. Some current things I'm thinking about include:
  ● principled priors for Bayesian neural networks
  ● limits of mean-field variational inference
  ● hypernetworks and their Bayesian variants
  ● Bayesian optimization for the large-scale exploration of chemical space
  ● generative models of graphs for automatic chemical design
  ● Bayesian approaches to fair machine learning
If you're interested in any of these and want to work together, please send me an email at danielfs@utstat.toronto.edu. You can also check out my resume if you want to.


Preprints, Publications and Projects


Characterizing and warping the function space of Bayesian neural networks
In this work we develop a simple method to construct priors for Bayesian neural networks that incorporate meaningful prior information about functions. We fit neural networks to samples of functions using a hypernetwork, then use the empirical moments of the learned weights as our prior parameters. This method allows us to characterize the relationship between weight space and function space.
Daniel Flam-Shepherd, James Requeima, David Duvenaud
NIPS Workshop on Bayesian Deep Learning, 2018
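A minimal sketch of the general idea (not the paper's code): instead of amortizing with a hypernetwork, it fits a small network independently to each draw from a target function prior and takes the empirical moments of the fitted weights as a factorized Gaussian prior. The GP sampler, network sizes, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

def gp_rbf_sample(x, lengthscale=1.0, jitter=1e-6):
    # one draw from a zero-mean GP with an RBF kernel (stand-in for "samples of functions")
    sq_dists = (x - x.T) ** 2
    K = torch.exp(-0.5 * sq_dists / lengthscale ** 2) + jitter * torch.eye(len(x))
    return torch.linalg.cholesky(K) @ torch.randn(len(x), 1)

x = torch.linspace(-3, 3, 50).unsqueeze(1)
weight_samples = []
for _ in range(100):                                   # number of function draws (assumed)
    y = gp_rbf_sample(x)
    net = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):                               # fit this network to one function draw
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    weight_samples.append(torch.cat([p.detach().flatten() for p in net.parameters()]))

W = torch.stack(weight_samples)                        # (num_draws, num_weights)
prior_mean, prior_std = W.mean(0), W.std(0)            # empirical moments -> BNN prior parameters
```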

Stick breaking neural latent variable models
Neural processes define a class of neural latent variable models. We extend this class to an infinite-dimensional space by imposing a stick-breaking prior on the latent space. Using Stochastic Gradient Variational Bayes, we perform posterior inference for the weights of the stick-breaking process and develop the stick-breaking neural process (SB-NP). SB-NPs are able to learn the dimensionality of the latent space and yield improved posterior uncertainty estimates.
Daniel Flam-Shepherd, Yuxiang Gao, Zhaoyu Guo
NIPS Workshop on All of Bayesian Nonparametrics, 2018
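As a quick illustration of the stick-breaking construction itself (not the paper's model), the sketch below turns Beta-distributed stick fractions into a set of latent weights. In an SB-NP, a reparameterizable surrogate such as the Kumaraswamy distribution would stand in for the Beta draws so that SGVB can differentiate through the sampling step. The concentration parameter and truncation level here are assumptions.

```python
import torch

def stick_breaking(alpha=1.0, truncation=20):
    # stick fractions v_k ~ Beta(1, alpha); weights pi_k = v_k * prod_{j<k} (1 - v_j)
    v = torch.distributions.Beta(torch.tensor(1.0), torch.tensor(alpha)).sample((truncation,))
    log_remaining = torch.cat([torch.zeros(1), torch.cumsum(torch.log1p(-v), dim=0)[:-1]])
    return v * torch.exp(log_remaining)

pi = stick_breaking(alpha=2.0)
print(pi, pi.sum())   # weights decay with k and sum to just under 1 at finite truncation
```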

Mapping Gaussian process priors to Bayesian Neural Networks
What defines a reasonable prior to use in Bayesian models? Often, normal distributions are placed over the weights for convenience and are interpreted as a bias toward less complex functions via smaller weights. Gaussian processes, on the other hand, have an elegant mechanism for incorporating prior beliefs about the underlying function: specifying the mean and covariance functions. In this work, we present an approach to specify a more principled prior for Bayesian Neural Networks that can leverage the well-studied kernel design techniques from Gaussian process regression.
Daniel Flam-Shepherd, James Requeima, David Duvenaud
NIPS Workshop on Bayesian Deep Learning, 2017
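A rough sketch of the flavour of this approach (not the paper's exact objective): tune a factorized Gaussian prior over BNN weights so that functions drawn from it resemble a target GP on a grid of inputs, here by moment-matching the empirical covariance of prior draws to the GP's RBF kernel matrix. The network size, kernel, and optimizer settings are assumptions.

```python
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, 30).unsqueeze(1)
K_gp = torch.exp(-0.5 * (x - x.T) ** 2 / 1.0 ** 2)        # target RBF kernel matrix

sizes = [(1, 20), (20, 1)]                                 # (in, out) per layer
log_std_w = [nn.Parameter(torch.zeros(o, i)) for i, o in sizes]
log_std_b = [nn.Parameter(torch.zeros(o)) for _, o in sizes]

def sample_function():
    # one function drawn from the BNN prior, reparameterized so gradients reach the prior stds
    h = x
    for layer, ((i, o), ls_w, ls_b) in enumerate(zip(sizes, log_std_w, log_std_b)):
        W = torch.randn(o, i) * ls_w.exp()
        b = torch.randn(o) * ls_b.exp()
        h = h @ W.T + b
        if layer < len(sizes) - 1:
            h = torch.tanh(h)
    return h.squeeze(-1)

opt = torch.optim.Adam(log_std_w + log_std_b, lr=1e-2)
for step in range(2000):
    fs = torch.stack([sample_function() for _ in range(64)])   # draws from the BNN prior
    fc = fs - fs.mean(0)
    K_bnn = fc.T @ fc / (len(fs) - 1)                          # empirical prior covariance
    loss = ((K_bnn - K_gp) ** 2).mean()                        # match the GP covariance
    opt.zero_grad()
    loss.backward()
    opt.step()
```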

Generalized Feedback Alignment
The back-propagation algorithm is one of the main tools for credit assignment in neural networks: the loss gradient is computed and used to propagate error from the output layer back to the hidden layers. A method called feedback alignment performs almost as well and is more biologically plausible, since it avoids reusing the forward-pass weights in the backward pass by replacing them with random feedback weights. In this work, a general feedback alignment strategy for training neural networks is proposed and tested in both supervised and unsupervised settings.
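A minimal sketch of plain feedback alignment on a two-layer network (an illustration of the baseline idea, not the generalized scheme proposed here): the output error is propagated to the hidden layer through a fixed random matrix B rather than the transpose of the forward weights. Data, layer sizes, and learning rate are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
Y = np.sin(X @ rng.normal(size=(10, 1)))          # toy regression targets

W1 = rng.normal(scale=0.1, size=(10, 50))
W2 = rng.normal(scale=0.1, size=(50, 1))
B = rng.normal(scale=0.1, size=(1, 50))           # fixed random feedback weights

lr = 1e-2
for step in range(2000):
    H = np.tanh(X @ W1)                           # forward pass
    Y_hat = H @ W2
    e = Y_hat - Y                                 # output error
    dW2 = H.T @ e / len(X)
    dH = (e @ B) * (1 - H ** 2)                   # error sent back through B, not W2.T
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2
```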

Predicting planetary boundary layer height with Bayesian Neural Networks
During my MSc in Physics I conducted research in atmospheric physics. Specifically, I used Bayesian neural networks to learn a functional relationship between planetary boundary layer depth and physical quantities commonly measured at weather stations, such as temperature, wind speed, surface pressure, specific humidity, and solar intensity.
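For flavour, a minimal sketch of this kind of regression (illustrative only, using MC dropout as a stand-in for a full Bayesian neural network): the feature names and data below are placeholders, not the dataset used in the project.

```python
import torch
import torch.nn as nn

features = ["temperature", "wind_speed", "surface_pressure",
            "specific_humidity", "solar_intensity"]    # assumed input variables

net = nn.Sequential(
    nn.Linear(len(features), 64), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(64, 1),
)

X = torch.randn(1024, len(features))                   # placeholder measurements
y = torch.randn(1024, 1)                               # placeholder boundary layer heights

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = ((net(X) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Keep dropout active at prediction time and average several stochastic passes
# to get a predictive mean and an uncertainty estimate (MC dropout).
net.train()
preds = torch.stack([net(X[:5]) for _ in range(100)])
mean, std = preds.mean(0), preds.std(0)
```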