Speaker: Samuel Kaski (ELLIS Institute Finland, Aalto University and University of Manchester)
Title: Amortized experimental design with expert in the loop
Abstract: I will discuss our recent machine learning contributions to the core engine of science, and of research and development work more generally: the design-build-test-learn loop. (Bayesian) automatic experimental design has been proposed for choosing the next measurement actions, but its widespread use has been hampered by the amount of computation required. The expertise of domain experts is necessary, especially when data has a high cost, and can be included as prior knowledge or as actively elicited preferences about the goals and what is important. Like any data, data from experts is noisy, but the "noise" is typically very different from that in other domains of multi-domain data, which calls for user modelling. Amortization with neural approximations makes these computationally costly operations feasible.
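A minimal sketch of the kind of amortization referred to above, not the speaker's method: a small policy network is trained offline, over simulated scenarios from a toy linear-Gaussian model, to map a context (here, the prior scale) directly to a design that maximizes a nested Monte Carlo estimate of expected information gain. The model, network, and hyperparameters are illustrative assumptions.

```python
# Sketch: amortized Bayesian experimental design on a toy linear-Gaussian model.
# Policy net: context (prior std of theta) -> design d in [-1, 1].
# Trained to maximize a nested Monte Carlo estimate of expected information gain (EIG).
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
SIGMA = 0.5                  # observation noise std (assumed)
N_OUTER, N_INNER = 64, 64    # nested Monte Carlo sample sizes

policy = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def log_lik(y, theta, d):
    # Gaussian likelihood: y = theta * d + noise, noise ~ N(0, SIGMA^2)
    return -0.5 * ((y - theta * d) / SIGMA) ** 2 - math.log(SIGMA) - 0.5 * math.log(2 * math.pi)

def neg_eig(prior_std, d):
    # Nested MC estimate of EIG(d) for prior theta ~ N(0, prior_std^2),
    # differentiable in d via reparameterized simulation of y.
    theta = prior_std * torch.randn(N_OUTER, 1)
    y = theta * d + SIGMA * torch.randn(N_OUTER, 1)
    theta_inner = prior_std * torch.randn(N_OUTER, N_INNER)
    log_marginal = torch.logsumexp(log_lik(y, theta_inner, d), dim=1, keepdim=True) - math.log(N_INNER)
    return -(log_lik(y, theta, d) - log_marginal).mean()

for step in range(500):
    prior_std = 0.5 + torch.rand(1, 1)   # random context: amortize over prior scales
    d = policy(prior_std)
    loss = neg_eig(prior_std, d)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the policy proposes an informative design in a single forward
# pass, with no per-experiment optimization -- the amortization the abstract refers to.
print(policy(torch.tensor([[1.0]])))
```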
Speaker: Arthur Gretton (Gatsby Unit, University College London and Google DeepMind)
Title: Wasserstein Gradient Flow on the Maximum Mean Discrepancy
Abstract: We construct a Wasserstein gradient flow on the Maximum Mean Discrepancy (MMD): an integral probability metric defined for a reproducing kernel Hilbert space (RKHS), which serves as a metric on probability measures for a sufficiently rich RKHS. This flow transports particles from an initial distribution to a target distribution, where the latter is provided simply as a sample, and can be used to generate new samples from the target distribution. We obtain conditions for convergence of the gradient flow towards a global optimum, and relate this flow to the problem of optimizing neural network parameters. We propose a way to regularize the MMD gradient flow, based on an injection of noise in the gradient, and give theoretical and empirical evidence for this procedure. We provide empirical validation of the MMD gradient flow in the setting of neural network training.
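A minimal particle-based sketch of the flow described above, under assumed choices (Gaussian RBF kernel, 2-D toy target, fixed step size and noise level): particles descend the MMD witness function toward a target given only as a sample, and the witness gradient is evaluated at noised particle positions, illustrating the noise-injection regularization mentioned in the abstract.

```python
# Sketch: MMD gradient flow with noise injection (illustrative, not the authors' code).
import torch

torch.manual_seed(0)

def rbf_kernel(a, b, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq_dists = torch.cdist(a, b) ** 2
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def witness_sum(z, particles, target):
    # Witness function of MMD(particles, target), summed over evaluation points z:
    # f(z) = mean_j k(z, particle_j) - mean_j k(z, target_j)
    return (rbf_kernel(z, particles).mean(dim=1) - rbf_kernel(z, target).mean(dim=1)).sum()

target = torch.randn(500, 2) + torch.tensor([2.0, 2.0])   # target given only as a sample
particles = torch.randn(200, 2)                            # initial particle positions

step_size, noise_level = 0.5, 0.1
for t in range(1000):
    # Noise injection: evaluate the witness gradient at perturbed particle
    # positions, then update the original (unperturbed) particles.
    z = (particles + noise_level * torch.randn_like(particles)).requires_grad_(True)
    grad = torch.autograd.grad(witness_sum(z, particles.detach(), target), z)[0]
    particles = particles - step_size * grad

print(particles.mean(dim=0))   # should drift toward the target mean (~[2, 2])
```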