Just Stir It Some More

A data science blog by Benjamin Warner

Jan 20, 2023

Growing Cosine Unit Activation Function
Failing to Replicate CIFAR-10 Results

Last weekend the paper Growing Cosine Unit: A Novel Oscillatory Activation Function That Can Speedup Training and Reduce Parameters in Convolutional Neural Networks by Noel et al. surfaced in my social feed. The paper proposes a new oscillatory activation function, the Growing Cosine Unit (GCU), which is claimed to outperform established activation functions such as SiLU, Mish, and ReLU. That claim immediately drew my attention, so I decided to see whether I could replicate the results.
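For context, the GCU activation is simply f(z) = z·cos(z). A minimal PyTorch sketch of it as a drop-in module (my own illustration, not code from the paper or the post) might look like:

```python
import torch
import torch.nn as nn

class GCU(nn.Module):
    """Growing Cosine Unit activation: f(z) = z * cos(z)."""

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z * torch.cos(z)
```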

Aug 31, 2022

Training Atari DQN Agents Three to Fourteen Times Faster
Using EnvPool and a PyTorch GPU Replay Memory Buffer

While working through Unit 3 of the Hugging Face Reinforcement Learning course, I grew impatient with how long the suggested DQN configuration took to finish training. I decided to investigate the lethargic performance, and managed to speed up Atari DQN training by a factor of three to fourteen using EnvPool and a custom PyTorch GPU replay memory buffer.
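As a quick illustration of the EnvPool half of that speedup, a batch of Atari environments can be stepped through a single vectorized handle. A rough sketch (the game, batch size, and old-style gym return signature are illustrative assumptions, not the post's exact configuration):

```python
import numpy as np
import envpool

# Create 8 parallel Atari environments behind one gym-style handle.
envs = envpool.make("Breakout-v5", env_type="gym", num_envs=8)

obs = envs.reset()                                    # batched observations
actions = np.random.randint(envs.action_space.n, size=8)
obs, rewards, dones, info = envs.step(actions)        # one batched step
```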

Aug 7, 2022

Remixed Art History with Stable Diffusion
Famous Paintings by Different Artists

After tinkering with Stable Diffusion for a bit, I recalled seeing a couple of prompts for The Great Wave Off Kanagawa by Vincent van Gogh from Imagen and MidJourney, and wondered how Stable Diffusion would do at generating famous paintings by alternate artists. So I decided to give it a try and post some of the best results.