POPL 2021
Sun 17 - Fri 22 January 2021 Online
Sun 17 Jan 2021 19:04 - 19:17 at LAFI - Session 2 Chair(s): Dougal Maclaurin

A wide variety of libraries written in Julia implement stochastic simulators of natural and social phenomena for the purposes of computational science. However, these simulators are not generally amenable to Bayesian inference, as they do not provide likelihoods for execution traces, support constraining of observed random variables, or allow random choices and subroutines to be selectively updated in Monte Carlo algorithms.

To address these limitations, we present Genify.jl, an approach to transforming plain Julia code into generative functions in Gen, a universal probabilistic programming system with programmable inference. We accomplish this via lightweight transformation of lowered Julia code into Gen’s dynamic modeling language, combined with a user-friendly random variable addressing scheme that enables straightforward implementation of custom inference programs.
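As a concrete illustration of what such a transformation produces, the sketch below contrasts a plain Julia simulator with a hand-written counterpart in Gen's dynamic modeling language. The walk example, its (:step, t) addresses, and the closing comment about a transformation entry point are illustrative assumptions rather than Genify.jl's documented API; only the @gen and ~ syntax is standard Gen.

    using Gen, Distributions

    # A plain Julia simulator: random choices are drawn with rand() and leave
    # no trace, so they cannot be constrained or targeted during inference.
    function walk(n::Int)
        x = 0.0
        for t in 1:n
            x += rand(Normal(0.0, 1.0))
        end
        return x
    end

    # A hand-written counterpart in Gen's dynamic modeling language: every
    # random choice carries an address, making it observable and updatable.
    @gen function walk_gen(n::Int)
        x = 0.0
        for t in 1:n
            dx = {(:step, t)} ~ normal(0.0, 1.0)
            x += dx
        end
        return x
    end

    # Genify.jl aims to produce functions like walk_gen automatically from
    # plain functions like walk; see the repository for the actual entry point.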

We demonstrate the utility of this approach by transforming an existing agent-based simulator from plain Julia into Gen, and designing custom inference programs that increase accuracy and efficiency relative to generic SMC and MCMC methods. This performance improvement is achieved by proposing, constraining, or re-simulating random variables that are internal to the simulator, which is made possible by transformation into Gen.
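Continuing the illustrative walk_gen sketch above, the snippet below shows what addressing makes possible: one internal random choice is constrained as an observation before generating a trace, and a selection-based Metropolis-Hastings move re-simulates another choice from its prior. The addresses are assumptions carried over from the sketch; choicemap, generate, select, and metropolis_hastings are standard Gen inference operators.

    using Gen

    # Constrain one random choice that is internal to the simulator, treating
    # it as an observation, then generate a trace consistent with it.
    constraints = choicemap()
    constraints[(:step, 3)] = 0.7          # fix the third increment
    trace, weight = generate(walk_gen, (5,), constraints)

    # A targeted MCMC move: selection-based Metropolis-Hastings re-simulates
    # only the first increment from its prior, leaving the constrained choice
    # and all other internal randomness untouched.
    trace, accepted = metropolis_hastings(trace, select((:step, 1)))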

Genify.jl is available at: https://github.com/probcomp/Genify.jl

Genify.jl: Transforming Julia into Gen to enable programmable inference (Extended Abstract) (genify-lafi21-extended-abstract.pdf, 561 KiB)

Sun 17 Jan

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

18:00 - 19:30
Session 2 (LAFI) at LAFI
Chair(s): Dougal Maclaurin Google Research
18:00
12m
Talk
Enzyme: High-Performance Automatic Differentiation of LLVM
LAFI
William S. Moses Massachusetts Institute of Technology, Valentin Churavy MIT CSAIL
18:12
12m
Talk
Parametric Inversion of Non-Invertible Programs
LAFI
Zenna Tavares Massachusetts Institute of Technology, Javier Burroni University of Massachusetts Amherst, Edgar Minasyan Princeton University, David Morejon Massachusetts Institute of Technology, Armando Solar-Lezama Massachusetts Institute of Technology
File Attached
18:25
12m
Talk
Bayesian Neural Ordinary Differential Equations
LAFI
Raj Dandekar MIT, Vaibhav Dixit Julia Computing, Mohamed Tarek UNSW Canberra, Australia, Aslan Garcia Valadez National Autonomous University of Mexico, Chris Rackauckas MIT
Pre-print Media Attached File Attached
18:38
12m
Talk
On the Automatic Derivation of Importance Samplers from Pairs of Probabilistic Programs
LAFI
Alexander K. Lew Massachusetts Institute of Technology, USA, Ben Sherman, Marco Cusumano-Towner MIT CSAIL, Michael Carbin Massachusetts Institute of Technology, Vikash K. Mansinghka MIT
Media Attached
18:51
12m
Talk
Decomposing reverse-mode automatic differentiation
LAFI
Roy Frostig Google Research, Matthew J. Johnson Google Brain, Dougal Maclaurin Google Research, Adam Paszke Google Research, Alexey Radul Google Research
File Attached
19:04
12m
Talk
Genify.jl: Transforming Julia into Gen to enable programmable inference
LAFI
Tan Zhi-Xuan Massachusetts Institute of Technology, McCoy R. Becker Charles River Analytics, Vikash K. Mansinghka MIT
Media Attached File Attached
19:17
12m
Talk
Probabilistic Inference Using Generators: the Statues Algorithm
LAFI
Pierre Denis Independent Scholar
Link to publication DOI Pre-print Media Attached File Attached