A show where three machine learning enthusiasts talk about recent papers and developments in machine learning. Watch our video on YouTube https://www.youtube.com/@argmaxfm
In this episode we talk about the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean.
We talk about Low-Rank Approximation for fine-tuning Transformers. We are also on YouTube now! Check out the video here: https://youtu.be/lLzHr0VFi3Y
In this episode we discuss the paper "Training language models to follow instructions with human feedback" by Ouyang et al. (2022). We discuss the RLHF paradigm and how important RL is to tuning GPT.
We talk about AlphaTensor, and how researchers were able to find a new algorithm for matrix multiplication.
In this episode we talk about "Implicit Neural Representations with Periodic Activation Functions" and the strength of periodic non-linearities.
In this episode we discuss this video: https://youtu.be/jPCV4GKX9Dw
We discuss Sony AI's achievement in creating a novel AI agent that can beat professional racers in Gran Turismo. Some topics include:
Today we talk about recent AI advances in poker, specifically the use of counterfactual regret minimization to solve the game of two-player limit Texas Hold'em.
Today we talk about Gato, a multi-modal, multi-task, multi-embodiment generalist agent.
We start talking about diffusion models as a technique for generative deep learning.
We discuss a NeurIPS Outstanding Paper Award winner, covering important topics around metrics and reproducibility.
We talk about QMIX (https://arxiv.org/abs/1803.11485) as an example of deep multi-agent RL.
Today's paper: "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility"
Today's paper: data2vec (https://arxiv.org/abs/2202.03555)
This is the first episode of Argmax! We talk about our motivations for doing a podcast, and what we hope listeners will get out of it.