POPL 2021
Sun 17 - Fri 22 January 2021 Online
Sun 17 Jan 2021 18:25 - 18:38 at LAFI - Session 2 Chair(s): Dougal Maclaurin

Recently, Neural Ordinary Differential Equations (Neural ODEs) have emerged as a powerful framework for modeling physical simulations without explicitly defining the governing ODEs, instead learning them from data. However, one question remains open: can Bayesian learning frameworks be integrated with Neural ODEs to robustly quantify the uncertainty in the weights of a Neural ODE? To address this question, we demonstrate the successful integration of Neural ODEs with two methods of Bayesian inference: (a) the No-U-Turn MCMC sampler (NUTS) and (b) stochastic gradient Langevin dynamics (SGLD). We test the performance of our Bayesian Neural ODE approach on classical physical systems, as well as on standard machine learning datasets such as MNIST, using GPU acceleration. Finally, using a simple example, we demonstrate the probabilistic identification of model specification in partially described dynamical systems using universal ordinary differential equations. Together, this yields a scientific machine learning tool for the probabilistic estimation of epistemic uncertainties.

Slides: Bayesian Neural ODE (BayesianNODE.pdf), 1.31 MiB
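To make the SGLD half of the approach concrete, below is a minimal sketch of a Bayesian Neural ODE sampler in JAX. This is an illustration only, not the authors' code (the underlying work is built in Julia): the vector field f_theta, the fixed-step Euler solver, the toy damped-oscillator data, and every hyperparameter (step size eps, noise scale 0.1, chain length) are assumptions chosen for readability. The key idea SGLD contributes is that each gradient step on the log posterior is perturbed with Gaussian noise of matched scale, so the weight iterates become approximate posterior samples rather than a single point estimate.

    import jax
    import jax.numpy as jnp

    # Illustrative sketch only: names (f_theta, sgld_step) and all
    # hyperparameters are assumptions, not the talk's implementation.

    def f_theta(params, u):
        # Tiny neural net giving the learned vector field du/dt = f_theta(u).
        W1, b1, W2, b2 = params
        return W2 @ jnp.tanh(W1 @ u + b1) + b2

    def solve_euler(params, u0, dt, n_steps):
        # Fixed-step Euler solve; a real Neural ODE would use an adaptive solver.
        u, traj = u0, [u0]
        for _ in range(n_steps):
            u = u + dt * f_theta(params, u)
            traj.append(u)
        return jnp.stack(traj)

    def log_posterior(params, u0, data, dt):
        pred = solve_euler(params, u0, dt, data.shape[0] - 1)
        log_lik = -0.5 * jnp.sum((pred - data) ** 2) / 0.1 ** 2   # Gaussian noise model
        log_prior = -0.5 * sum(jnp.sum(p ** 2) for p in params)  # N(0, 1) weight prior
        return log_lik + log_prior

    grad_lp = jax.grad(log_posterior)  # gradient w.r.t. the weight pytree

    def sgld_step(params, key, u0, data, dt, eps=1e-4):
        # SGLD update: half a gradient step up the log posterior plus
        # sqrt(eps)-scaled Gaussian noise, so iterates sample the posterior.
        g = grad_lp(params, u0, data, dt)
        keys = jax.random.split(key, len(params))
        return [p + 0.5 * eps * gp + jnp.sqrt(eps) * jax.random.normal(k, p.shape)
                for p, gp, k in zip(params, g, keys)]

    # Toy data: a damped oscillator standing in for a classical physical system.
    ts = jnp.linspace(0.0, 5.0, 51)
    data = jnp.stack([jnp.exp(-0.1 * ts) * jnp.cos(ts),
                      -jnp.exp(-0.1 * ts) * jnp.sin(ts)], axis=1)

    key = jax.random.PRNGKey(0)
    k1, k2, key = jax.random.split(key, 3)
    params = [0.1 * jax.random.normal(k1, (16, 2)), jnp.zeros(16),
              0.1 * jax.random.normal(k2, (2, 16)), jnp.zeros(2)]

    samples = []
    for i in range(2000):
        key, sub = jax.random.split(key)
        params = sgld_step(params, sub, data[0], data, dt=0.1)
        if i >= 1000:                 # keep post-burn-in iterates
            samples.append(params)

Each retained params entry is one draw from the approximate weight posterior; solving the ODE under many draws and examining the spread of the resulting trajectories is what turns a plain Neural ODE's point predictions into the epistemic uncertainty estimates the abstract describes. NUTS, the other method in the talk, replaces the sgld_step update with full Hamiltonian trajectories and a no-U-turn termination rule, trading higher per-step cost for better mixing.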

Sun 17 Jan
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna.

18:00 - 19:30: Session 2 (LAFI)
Chair(s): Dougal Maclaurin (Google Research)
18:00 - 18:12
Talk
Enzyme: High-Performance Automatic Differentiation of LLVM
LAFI
William S. Moses (Massachusetts Institute of Technology), Valentin Churavy (MIT CSAIL)
18:12 - 18:25
Talk
Parametric Inversion of Non-Invertible Programs
LAFI
Zenna Tavares (Massachusetts Institute of Technology), Javier Burroni (University of Massachusetts Amherst), Edgar Minasyan (Princeton University), David Morejon (Massachusetts Institute of Technology), Armando Solar-Lezama (Massachusetts Institute of Technology)
File Attached
18:25 - 18:38
Talk
Bayesian Neural Ordinary Differential Equations
LAFI
Raj Dandekar (MIT), Vaibhav Dixit (Julia Computing), Mohamed Tarek (UNSW Canberra, Australia), Aslan Garcia Valadez (National Autonomous University of Mexico), Chris Rackauckas (MIT)
Pre-print Media Attached File Attached
18:38 - 18:51
Talk
On the Automatic Derivation of Importance Samplers from Pairs of Probabilistic Programs
LAFI
Alexander K. Lew (Massachusetts Institute of Technology, USA), Ben Sherman, Marco Cusumano-Towner (MIT CSAIL), Michael Carbin (Massachusetts Institute of Technology), Vikash Mansinghka (MIT)
Media Attached
18:51 - 19:04
Talk
Decomposing reverse-mode automatic differentiation
LAFI
Roy Frostig (Google Research), Matthew Johnson (Google Brain), Dougal Maclaurin (Google Research), Adam Paszke (Google Research), Alexey Radul (Google Research)
File Attached
19:04 - 19:17
Talk
Genify.jl: Transforming Julia into Gen to enable programmable inference
LAFI
Tan Zhi-Xuan (Massachusetts Institute of Technology), McCoy R. Becker (Charles River Analytics), Vikash Mansinghka (MIT)
Media Attached File Attached
19:17 - 19:30
Talk
Probabilistic Inference Using Generators: the Statues Algorithm
LAFI
Pierre Denis (independent scholar)
Link to publication DOI Pre-print Media Attached File Attached