Just Stir It Some More

A data science blog by Benjamin Warner

Aug 31, 2022

Training Atari DQN Agents Three to Fourteen Times Faster
Using EnvPool and a PyTorch GPU Replay Memory Buffer

While working through Unit 3 of the Hugging Face Reinforcement Learning course, I grew impatient with how long the suggested DQN configuration took to finish training. I decided to investigate the lethargic performance and succeeded in increasing the training speed of Atari DQN agents by a factor of three to fourteen using EnvPool and a custom PyTorch GPU replay memory buffer.

Aug 7, 2022

Remixed Art History with Stable Diffusion
Famous Paintings by Different Artists

After tinkering around with Stable Diffusion for a bit, I recalled seeing a couple of prompts of The Great Wave off Kanagawa by Vincent van Gogh from Imagen and Midjourney, and wondered how Stable Diffusion would do at generating famous paintings by alternate artists. So I decided to give it a try and post some of the best results.

Jul 14, 2022

Tinkering With Attention Pooling
Improving Upon Learned Aggregation

In this post, I explain what Attention Pooling is and how it works. I experiment with Touvron et al.'s Learned Aggregation on several small datasets and modestly improve upon Learned Aggregation's results with a few tweaks. I also experiment with hybrid pooling layers that combine Average and Attention Pooling, which increase performance in the small dataset regime. However, all of these results still lag behind the performance of Average Pooling.