Introducing fastxtend

A Collection of Tools, Extensions, & Addons for fastai

My new python package, fastxtend, is a collection of tools, extensions, and addons for fastai version 2. (Some fastxtend features were originally intended to be added directly to fastai, but that was put on hold pending the start of active development of fastai version 3.) You can read the documentation, view the source code and/or contribute on GitHub, or install from PyPI.

Keep reading as I highlight some of fastxtend’s current best features.

# Install fastxtend

Fastxtend is on PyPI and can be installed with pip:

```bash
pip install fastxtend
```

To install with dependencies for vision, audio, or all tasks, run one of:

```bash
pip install fastxtend[vision]
pip install fastxtend[audio]
pip install fastxtend[all]
```

# Import fastxtend

Like fastai, fastxtend provides safe wildcard imports using python’s __all__. (If you don’t like using wildcard imports, you can import methods, classes, or modules directly too.)

```python
from fastai.vision.all import *
from fastxtend.vision.all import *
```

In general, import fastxtend after all fastai imports, as fastxtend modifies fastai. Any method modified by fastxtend is backwards compatible with the original fastai code.
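If you prefer not to use wildcard imports, you can import individual classes or modules instead. The module paths below are illustrative rather than guaranteed; check the documentation for the exact location of each class:

```python
# Illustrative direct imports. The module paths are assumed from the
# fastxtend documentation layout; see the docs for the exact paths.
from fastxtend.metrics import Accuracy, RMSE
from fastxtend.multiloss import MultiLoss, MultiLossCallback
```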

# Metrics

Fastxtend metrics are a backward-compatible reimplementation of fastai metrics that allows almost any metric to independently log on the training set, validation set, or both. (The exception is AvgSmoothMetricX, which can only log on the training set. Fastxtend metrics were originally created for my cloud segmentation project.) Like fastai, by default all fastxtend metrics log only on the validation set.

This adds support for easily logging individual losses when training with multiple losses. (Easy multiple loss support was my original reason for the metrics refactor.)

All fastxtend metrics inherit from fastai’s Metric and run on Learner via a modified Recorder callback. Fastxtend metrics can:

  • Mix and match with any fastai metrics or other fastai compatible metrics
  • Log metrics for training set, validation set, or both
  • Change the metric type on creation for fastxtend native metrics (scikit-learn derived metrics are currently always AccumMetricX metrics)
  • Set the metric name on creation

# Metrics Examples

Fastxtend metrics (all of which require a class initialization) can be mixed with fastai metrics:

```python
Learner(..., metrics=[accuracy, Accuracy()])
```

To log accuracy on the training set as a smooth metric and log accuracy on the validation set like normal, create the following two metrics:

```python
Learner(...,
        metrics=[Accuracy(log_metric=LogMetric.Train,
                          metric_type=MetricType.Smooth),
                 Accuracy()])
```
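You can also rename a metric when creating it. This is a minimal sketch assuming a name argument on fastxtend metric classes, mirroring fastai’s AccumMetric; check the metrics documentation for the exact signature:

```python
# Hypothetical sketch: log training-set accuracy under a custom column name.
# The name argument is an assumption here, mirroring fastai's AccumMetric.
Learner(...,
        metrics=[Accuracy(log_metric=LogMetric.Train, name='train_accuracy'),
                 Accuracy()])
```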

For more information, check out the metrics documentation.

# Log Multiple Losses

MultiLoss is a simple loss function wrapper which, in conjunction with MultiLossCallback, logs individual losses as metrics while training (using fastxtend metrics under the hood).

An example of using MultiLoss to log MSE and L1 loss on a regression task:

```python
mloss = MultiLoss(loss_funcs=[nn.MSELoss, nn.L1Loss],
                  weights=[1, 3.5],
                  loss_names=['mse_loss', 'l1_loss'])

learn = Learner(..., loss_func=mloss, metrics=RMSE(),
                cbs=MultiLossCallback)
```

which results in the following output.

Two epochs of MultiLoss training output:

| epoch | train_loss | train_mse_loss | train_l1_loss | valid_loss | valid_mse_loss | valid_l1_loss | valid_rmse | time |
|---|---|---|---|---|---|---|---|---|
| 0 | 23.598301 | 12.719514 | 10.878788 | 17.910727 | 9.067028 | 8.843699 | 3.011151 | 00:00 |
| 1 | 22.448792 | 11.937573 | 10.511218 | 15.481797 | 7.464430 | 8.017367 | 2.732111 | 00:00 |

MultiTargetLoss inherits from MultiLoss and supports multiple predictions and targets, each with one loss function. (More complicated multi-loss scenarios can inherit from MultiLoss to add support for multiple losses on multiple targets.)
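As a rough sketch, a two-target setup might look like the following. The constructor arguments here are assumed to mirror MultiLoss; see the documentation for the real signature:

```python
# Hypothetical sketch: one loss function per target, assuming MultiTargetLoss
# mirrors the MultiLoss constructor arguments shown above.
mtloss = MultiTargetLoss(loss_funcs=[nn.CrossEntropyLoss, nn.MSELoss],
                         weights=[1, 1],
                         loss_names=['class_loss', 'reg_loss'])

learn = Learner(..., loss_func=mtloss, cbs=MultiLossCallback)
```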

For more information, check out the MultiLoss documentation.

# Simple Profiler

Inspired by PyTorch Lightning’s SimpleProfiler, Simple Profiler allows easy profiling by chaining .profile() onto an initialized Learner and then fitting as normal.

```python
from fastxtend.callback import simpleprofiler

learn = Learner(...).profile()
learn.fit_one_cycle(2, 3e-3)
```

Raw results are stored in the callback for later use. Results can optionally be displayed in a table (formatted or plain, and with or without markdown for copying to a report or blog) and exported to a CSV file. Both formatted and raw results are automatically logged to Weights & Biases if using fastai’s WandbCallback.

Example of Simple Profiler Output

| Phase | Action | Mean Duration | Duration Std Dev | Number of Calls | Total Time | Percent of Total |
|---|---|---|---|---|---|---|
| fit | fit | - | - | 1 | 404.7 s | 100% |
| | epoch | 202.4 s | 2.721 s | 2 | 404.7 s | 100% |
| | train | 178.4 s | 2.020 s | 2 | 356.7 s | 88% |
| | validate | 23.99 s | 699.9 ms | 2 | 47.98 s | 12% |
| train | batch | 1.203 s | 293.3 ms | 294 | 353.7 s | 87% |
| | step | 726.8 ms | 35.05 ms | 294 | 213.7 s | 53% |
| | backward | 411.3 ms | 159.6 ms | 294 | 120.9 s | 30% |
| | pred | 32.90 ms | 107.5 ms | 294 | 9.673 s | 2% |
| | draw | 28.49 ms | 78.12 ms | 294 | 8.375 s | 2% |
| | zero_grad | 2.437 ms | 324.4 µs | 294 | 716.4 ms | 0% |
| | loss | 958.6 µs | 107.4 µs | 294 | 281.8 ms | 0% |
| valid | batch | 72.83 ms | 176.0 ms | 124 | 9.031 s | 2% |
| | pred | 40.13 ms | 126.7 ms | 124 | 4.976 s | 1% |
| | draw | 31.58 ms | 121.3 ms | 124 | 3.916 s | 1% |
| | loss | 967.8 µs | 1.034 ms | 124 | 120.0 ms | 0% |

Simple Profiler is one of the few callbacks not imported by fastxtend’s wildcard imports, as it modifies the fastai training loop by adding the draw step. (This is just to be extra safe. I have not observed any differences when training with Simple Profiler or without it.)

Check out the simple profiler documentation for more information.

# Audio

Fastxtend audio contains an audio module inspired by the fastaudio project. (Fastaudio is no longer under active development.)

It consists of:

  • TensorAudio, TensorSpec, TensorMel objects which maintain metadata and support plotting themselves using librosa.
  • A selection of performant audio augmentations inspired by fastaudio and torch-audiomentations.
  • Fast conversion of TensorAudio waveforms into TensorSpec spectrograms or TensorMel mel spectrograms on the GPU using TorchAudio (see the sketch after this list).
  • Out of the box support for converting one TensorAudio to one or multiple TensorSpec or TensorMel objects from the Datablock api.
  • Audio MixUp and CutMix Callbacks.
  • audio_learner, which merges multiple TensorSpec or TensorMel objects (preserving audio channel order) before passing them to the model.
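The spectrogram conversion builds on plain TorchAudio transforms. As a rough illustration of the underlying idea (this is standard TorchAudio, not the fastxtend API), a waveform can be converted to a mel spectrogram on the GPU like this:

```python
import torch
import torchaudio

# Plain TorchAudio illustration of waveform to mel spectrogram on the GPU;
# fastxtend wraps this kind of conversion for its TensorAudio/TensorMel types.
waveform, sample_rate = torchaudio.load('example.wav')  # hypothetical file
device = 'cuda' if torch.cuda.is_available() else 'cpu'

to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=80
).to(device)

mel_spec = to_mel(waveform.to(device))  # shape: (channels, n_mels, frames)
```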

# Vision

Fastxtend vision has:

# LRFinder

Fastai’s LRFinder only restores the model and optimizer state. Fastxtend’s LRFinder additionally restores the DataLoader and random state, so as far as your runtime is concerned you might as well have never run it.
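Usage is unchanged from fastai. A minimal sketch, assuming fastxtend’s patched lr_find is active after the wildcard imports shown earlier:

```python
# With fastxtend imported, lr_find additionally restores the DataLoader and
# random state, so the following training run is unaffected by the search.
learn = Learner(...)
learn.lr_find()
learn.fit_one_cycle(5, 3e-3)  # proceeds as if lr_find had never run
```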

# But Wait, There’s More

Check out the documentation for additional splitters, callbacks, schedulers, utilities, and more. (Fastxtend is under active development, with new features added regularly.)
