Welcome to Depth First Learning!

DFL is a compendium of curricula to help you deeply understand Machine Learning.

Each of our posts is a self-contained lesson plan targeting a significant research paper, complete with readings, questions, and answers.

We can guarantee that honestly engaging with the material will leave you with a thorough understanding of that paper's methods, background, and significance.

Want to stay up to date on future in-person or online DFL study groups? Fill out this short form.

Bringing the people back in by Emily Denton

[Audio] What are the histories, values, and norms of a dataset? This is an important question for practitioners because datasets are the foundation of what we as a field care about. Listen to Emily Denton explain how her team is thinking about this problem and the four research questions they pose in this paper.

Learning the Optimizer by Luke Metz

[Audio] When training our models, why not learn the optimizer as well? Listen to Luke Metz explain how he has spent the past three years working towards this goal. He focuses here on his paper from late 2020 and asks what happens when optimizers train themselves.

Characterising Bias in Compressed Models by Sara Hooker

[Audio] Characterising Bias in Compressed Models is an important paper analyzing how the methods we use to make machine learning models smaller impact certain subgroups more than others. Hear Sara Hooker, the first author, talk about how this approach works, as well as her research motivation.

T5 by Colin Raffel

[Audio] T5 is a popular and important paper in NLP. Hear Professor Colin Raffel, one of the authors, talk about what it is, why it was important, and what its two major contributions were. You don't need a lot of background to understand this interview; it is geared towards anyone who is science-curious.

Variational Inference with Normalizing Flows

Large-scale neural models using amortized variational inference, such as the variational autoencoder, typically rely on simple variational families. Normalizing flows, by contrast, provide a bridge between simple base distributions and more complex distributions of interest. The paper this guide targets shows how to use normalizing flows to enrich the variational family for amortized variational inference.
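
As a taste of the guide's subject, here is a minimal sketch of a single planar flow, one of the flow families introduced in the paper, written in plain NumPy. The 2-D standard Gaussian base distribution and the parameter values u, w, and b are assumptions chosen for illustration, not values from the paper.

    import numpy as np

    def planar_flow(z, u, w, b):
        # One planar flow f(z) = z + u * tanh(w.z + b), applied to a batch
        # of samples z with shape (n, d). Returns the transformed samples
        # and log|det Jacobian|, needed by the change-of-variables formula.
        affine = z @ w + b                             # shape (n,)
        f_z = z + np.outer(np.tanh(affine), u)         # shape (n, d)
        psi = np.outer(1.0 - np.tanh(affine) ** 2, w)  # h'(w.z + b) * w
        log_det = np.log(np.abs(1.0 + psi @ u))        # shape (n,)
        return f_z, log_det

    # Samples from a simple base distribution (a 2-D standard Gaussian)...
    rng = np.random.default_rng(0)
    z0 = rng.standard_normal((1000, 2))

    # ...pushed through one flow step. These parameters are illustrative
    # and satisfy the invertibility condition w.u >= -1 from the paper.
    u, w, b = np.array([1.5, 0.0]), np.array([1.0, 1.0]), 0.0
    z1, log_det = planar_flow(z0, u, w, b)

Stacking K such transformations gives log q_K(z_K) = log q_0(z_0) - sum_k log|det df_k/dz_{k-1}|, which is how the paper turns a simple base distribution into a richer variational family while keeping the density tractable.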