All Posts

Aug 7, 2022

Remixed Art History with Stable Diffusion
Famous Paintings by Different Artists

After tinkering with Stable Diffusion for a bit, I recalled seeing a couple of prompts for The Great Wave off Kanagawa by Vincent van Gogh from Imagen and MidJourney, and wondered how Stable Diffusion would do at generating famous paintings by alternate artists. So I decided to give it a try and post some of the best results.

Jul 14, 2022

Tinkering With Attention Pooling
Improving Upon Learned Aggregation

In this post, I explain what Attention Pooling is and how it works. I experiment with Touvron et al.'s Learned Aggregation on several small datasets and modestly improve upon its results with a few tweaks. I also try hybrid pooling layers that combine Average and Attention Pooling, which increase performance in the small dataset regime. However, all of these results still lag behind the performance of Average Pooling.

Jun 14, 2022

Discovering and Debugging a PyTorch Performance Decrease
Subclassed Tensors Reduce GPU Throughput up to Forty Percent

Over the past week, Thomas Capelle and I discovered, debugged, and created a workaround for a performance bug in PyTorch that reduced image training GPU throughput by up to forty percent when using fastai. The culprit? Subclassed tensors.

Jun 6, 2022

Introducing fastxtend
A Collection of Tools, Extensions, & Addons for fastai

Fastxtend is a collection of tools, extensions, and addons for fastai. In this post, I highlight some of fastxtend’s current best features.

Mar 11, 2022

Detecting Cloud Cover Via Sentinel-2 Satellite Data
My Top-10 Percent Solution to DrivenData's On Cloud N Competition

In this post, I give an overview of my solution, explore some alternate approaches that didn't perform as well, and briefly show how to customize fastai to work with a new dataset.

Dec 8, 2021

Testing Amazon SageMaker Studio Lab
Comparing SageMaker to Google Colab and Kaggle

SageMaker Studio Lab is a strong contender for those starting out in deep learning and almost a straight upgrade over the free version of Colab. Compared to the P100 available in Kaggle and Colab Pro, Studio Lab's T4 can be faster in mixed precision training, though it is significantly slower in single precision training.