The aim of the reading group is to discuss key results from interesting recent theory papers from venues such as JMLR, NIPS, ICML or similar, without spending too much time on proofs unless the discussion warrants it. Applied and deep learning papers are generally avoided.
The format is a ~30-40 minute presentation leaving ample room for discussion; whiteboard presentations are preferred. The emphasis is squarely on the discussion, in particular the strengths and weaknesses of the paper, links with the existing literature, and possible applications or further work.
For this reading group to be a success, all participants are expected to present at some point (especially PhD students and postdocs). A bit of friendly chasing will hopefully ensure this happens 🚂.
📆 schedule: the reading groups take place on Thursdays from 4pm to 5pm,
🎪 room: McDonnell, usually 7.02 unless otherwise mentioned,
📧 mailing list: you can subscribe by clicking this link.
(Apr 18, '19) Neural Ordinary Differential Equations by Chen, Rubanova, Bettencourt and Duvenaud. Presenter: Thibaut Lienart.
(Mar 7, '19) Adaptive Data Analysis by Roth. Presenter: Ben Rubinstein.
(Feb 21, '19) Conservative contextual linear bandits by Kazerouni et al. Presenter: Masoud Khorasani.
(Feb 14, '19) Is Q-Learning provably efficient? by Jin et al. Presenter: Neil Merchant.
(Feb 7, '19) Minimising the maximal loss: how and why by Shalev-Shwartz and Wexler. Presenter: Yi Han.
(Jan 31, '19) Sampling can be faster than optimization by Ma, Chen, Jin, Flammarion and Jordan. Presenter: Miquel Ramírez.
(Jan 24, '19) Second-order stochastic optimisation for machine learning in linear time by Agarwal, Bullins and Hazan, JMLR 2017. Presenter: Bastian Oetomo.
(Dec 20, '18) An Outsider's Tour of RL by Recht. Presenter: Thibaut Lienart.
(Dec 13, '18) Classification with imperfect training labels by Cannings, Fan and Samworth, arXiv '18. Presenter: Yi Han.
(Nov 29, '18) To tune or not to tune the number of trees in random forest by Probst and Boulesteix, JMLR 2018. Presenter: Bastian Oetomo.
(Nov 22, '18) An optimal algorithm for bandit and zero-order convex optimisation with two-point feedback by Shamir, JMLR 2017. Presenter: Dongge Liu.
(Nov 15, '18) Explaining the success of adaboost and random forests as interpolating classifiers by Wyner, Olson, Bleich and Mease, JMLR 2017. Presenter: Neil Merchant.
(Nov 1, '18) Can we trust the bootstrap in high dimension? by El Karoui and Purdom, JMLR 2017. Presenter: Thibaut Lienart.
To add papers to this list, please send me an email or let me know in person; the list is just a draft at this point. The papers are (somewhat arbitrarily) sorted into two blocks, with the first block given priority as potentially more likely to interest everyone. At some point in the future, I'll organise the list with topic indicators.
Why are big data matrices approximately low rank? by Udell and Townsend, SIAM 2019.
On Markov chain Monte Carlo methods for tall data by Bardenet, Doucet and Holmes, JMLR 2017.
An embarrassingly simple approach to zero-shot learning by Romera-Paredes and Torr, ICML 2015.
On the global linear convergence of Frank-Wolfe optimization variants by Lacoste-Julien and Jaggi, NIPS 2015.
Fast and provably good seedings for k-means by Bachem, Lucic, Hassani and Krause, NIPS 2016.
Revisiting the Nyström method for improved large-scale machine learning by Gittens and Mahoney, JMLR 2016.
Hamiltonian descent methods by Maddison, Paulin, Teh, O'Donoghue and Doucet, arXiv 2018.
Robust and scalable Bayes via a median of subset posterior measures by Minsker, Srivastava, Lin and Dunson, JMLR 2017.
Fast algorithms for robust PCA via gradient descent by Yi, Park, Chen and Caramanis, NIPS 2016.
Adaptive randomized dimension reduction on massive data by Darnell, Georgiev, Mukherjee and Engelhardt, JMLR 2017.