All Posts

Aug 16, 2023

FlashAttention with PyTorch Compile
Benchmarking FlashAttention and FlashAttention-2 on a Consumer GPU

FlashAttention-2 builds on FlashAttention, yielding significant speedups on server-class GPUs. Unlike the PyTorch implementation of FlashAttention, FlashAttention-2 currently cannot be compiled into a single CUDA Graph via PyTorch 2.0's torch.compile. Does this matter, and if so, at what model sizes and sequence lengths? In this post I attempt to answer these questions by benchmarking FlashAttention and FlashAttention-2 on a consumer GPU.
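
The comparison boils down to two call paths: PyTorch's built-in FlashAttention kernel behind F.scaled_dot_product_attention, which torch.compile can capture, and the standalone flash-attn package's flash_attn_func. Below is a rough sketch of the kind of timing loop involved; the shapes and iteration count are illustrative, not the post's actual benchmark configuration.

```python
# Illustrative comparison only: PyTorch's FlashAttention kernel (compilable
# with torch.compile) vs. the flash-attn package. Shapes are placeholders.
import torch
import torch.nn.functional as F
from flash_attn import flash_attn_func  # FlashAttention-2 package

batch, heads, seq_len, head_dim = 8, 12, 2048, 64
q = torch.randn(batch, heads, seq_len, head_dim, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# PyTorch's FlashAttention path, captured by torch.compile
@torch.compile
def pytorch_flash(q, k, v):
    with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False,
                                        enable_mem_efficient=False):
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# FlashAttention-2 expects a (batch, seq_len, heads, head_dim) layout
def flash_attention_2(q, k, v):
    out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2),
                          v.transpose(1, 2), causal=True)
    return out.transpose(1, 2)

for name, fn in [("pytorch_flash", pytorch_flash), ("flash_attention_2", flash_attention_2)]:
    fn(q, k, v)  # warmup (also triggers compilation for the compiled path)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(10):
        fn(q, k, v)
    end.record()
    torch.cuda.synchronize()
    print(name, f"{start.elapsed_time(end) / 10:.3f} ms")
```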

Jul 28, 2023

Creating a Transformer From Scratch
Part Two: The Rest of the Transformer

In this post, I will show you how to build the rest of the Transformer. By the end of this post, you will be familiar with all the pieces of a Transformer model and, combined with your knowledge of Attention, will be able to write an entire Transformer from scratch.
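
As a rough preview (my own naming and dimensions, not the post's code), the remaining pieces are things like layer norms, the feed-forward network, and residual connections, which wrap an Attention layer roughly like this:

```python
# A sketch of a pre-norm Transformer block; the post builds its own Attention
# layer, so nn.MultiheadAttention here is only a stand-in.
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, d_model: int, expansion: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, expansion * d_model),
            nn.GELU(),
            nn.Linear(expansion * d_model, d_model),
        )

    def forward(self, x):
        return self.net(x)

class TransformerBlock(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = FeedForward(d_model)

    def forward(self, x):
        # Attention and feed-forward each read a normalized input
        # and add their output back into the residual stream.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.ff(self.norm2(x))
        return x
```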

Jul 1, 2023

Creating a Transformer From Scratch
Part One: The Attention Mechanism

You cannot create a Transformer without Attention. In this post, I will show you how to write an Attention layer from scratch in PyTorch. By the end of this post, you will be familiar with all three flavors of Attention: Bidirectional, Causal, and Cross Attention, and should be able to write your own implementation of the Attention mechanism in code.
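
For a taste of what the post covers, here is a minimal single-head Causal Attention sketch; the post's implementation is more complete, so treat this as illustrative only.

```python
# Minimal single-head Causal Attention, for illustration only.
import math
import torch
import torch.nn as nn

class CausalAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        # Causal mask: each position may only attend to itself and the past.
        mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        return self.out_proj(torch.softmax(scores, dim=-1) @ v)
```

Dropping the mask gives Bidirectional Attention, and feeding queries from one sequence with keys and values from another gives Cross Attention.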

May 10, 2023

How to Quickly Finetune Your Transformer
Performance Tips for Faster Training

While recent releases of language models have emphasized the large in Large Language Models, most everyday NLP work uses smaller language models, finetuned on custom or task-specific datasets. In this post, I will show how to achieve fast finetuning performance on modern GPUs using tools like PyTorch 2.0’s torch.compile and FlashAttention.
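
The two speedups in the title amount to a couple of lines of setup around an otherwise ordinary training loop. The sketch below uses a placeholder Hugging Face model and hyperparameters, not the post's configuration.

```python
# Illustrative setup only: torch.compile plus PyTorch's FlashAttention kernel.
# Model choice, learning rate, and batch contents are placeholders.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
).cuda()
model = torch.compile(model)  # PyTorch 2.0 graph compilation
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def train_step(batch):
    # batch: dict with input_ids, attention_mask, and labels on the GPU.
    # The sdp_kernel context only matters when the model's attention routes
    # through F.scaled_dot_product_attention.
    with torch.autocast("cuda", dtype=torch.bfloat16), \
         torch.backends.cuda.sdp_kernel(enable_flash=True):
        loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss
```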

Jan 20, 2023

Growing Cosine Unit Activation Function
Failing to Replicate CIFAR-10 Results

Last weekend the paper Growing Cosine Unit: A Novel Oscillatory Activation Function That Can Speedup Training and Reduce Parameters in Convolutional Neural Networks by Noel et al. surfaced on my social feed. This paper proposes a new oscillatory activation function, called Growing Cosine Unit (GCU), which is claimed to outperform other activation functions, such as SiLU, Mish, and ReLU. This immediately drew my attention and I decided to see if I could replicate the results.
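
For reference, the GCU activation is simply f(x) = x · cos(x), which takes only a few lines in PyTorch; the surrounding layer below is just an example of where it would replace ReLU.

```python
# The GCU activation from the paper, f(x) = x * cos(x), as a PyTorch module.
import torch
import torch.nn as nn

class GCU(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.cos(x)

# Example: a small convolutional stage using GCU instead of ReLU.
conv_block = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    GCU(),
)
```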

Aug 31, 2022

Training Atari DQN Agents Three to Fourteen Times Faster
Using EnvPool and a PyTorch GPU Replay Memory Buffer

While working through Unit 3 of the Hugging Face Reinforcement Learning course, I grew impatient with how long the suggested DQN configuration took to finish training. I decided to investigate the lethargic performance and succeeded in increasing the training speed of Atari DQN agents by a factor of three to fourteen using EnvPool and a custom PyTorch GPU replay memory buffer.
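
The core idea is simple: EnvPool steps many Atari environments in parallel, while the replay buffer keeps its storage in preallocated CUDA tensors so sampling never round-trips through the CPU. The sketch below is my own simplification, not the post's implementation.

```python
# Simplified sketch: vectorized Atari envs via EnvPool plus a replay buffer
# whose storage lives entirely on the GPU. Sizes and layout are placeholders,
# and next-state bookkeeping is omitted for brevity.
import envpool
import torch

# Vectorized Atari environments, stepped in batches.
envs = envpool.make("Breakout-v5", env_type="gym", num_envs=8)

class GPUReplayBuffer:
    def __init__(self, capacity: int, obs_shape, device="cuda"):
        self.capacity, self.pos, self.full = capacity, 0, False
        self.obs = torch.zeros((capacity, *obs_shape), dtype=torch.uint8, device=device)
        self.actions = torch.zeros(capacity, dtype=torch.long, device=device)
        self.rewards = torch.zeros(capacity, device=device)
        self.dones = torch.zeros(capacity, device=device)

    def add(self, obs, actions, rewards, dones):
        # Store one batch of transitions from the vectorized envs, wrapping
        # around the ring buffer when it fills up.
        n = obs.shape[0]
        idx = (self.pos + torch.arange(n, device=self.obs.device)) % self.capacity
        self.obs[idx] = obs
        self.actions[idx] = actions
        self.rewards[idx] = rewards
        self.dones[idx] = dones
        self.full = self.full or self.pos + n >= self.capacity
        self.pos = (self.pos + n) % self.capacity

    def sample(self, batch_size: int):
        # Uniform sampling directly on the GPU; no host/device copies.
        high = self.capacity if self.full else self.pos
        idx = torch.randint(high, (batch_size,), device=self.obs.device)
        return self.obs[idx], self.actions[idx], self.rewards[idx], self.dones[idx]
```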